InfluxDB

Open source database specialized for time-series data. Optimized for IoT metrics, application monitoring, and real-time analytics. High-speed write and query performance.

Tags: Database Server, Time Series Database, Distributed System, Real-time Analytics, High Performance, Monitoring, IoT, Metrics Analysis, Go Implementation


Overview

InfluxDB is a high-performance time series database specialized for metrics, events, and real-time analytics. First released by InfluxData in 2013 and implemented in Go, it is widely used for IoT, DevOps, application monitoring, and real-time analytics workloads. Its query languages (the SQL-like InfluxQL and the Flux scripting language), automatic compression and downsampling, and clustering options allow it to sustain ingestion rates of millions of points per second while keeping storage compact. Because the storage engine is built around timestamped data, it handles large-scale time series workloads that are difficult for traditional relational databases, making it a core infrastructure component in many data-driven organizations.

Details

As of 2025, InfluxDB has evolved into a third-generation architecture: InfluxDB 3.0 Core is built on Apache Arrow, Apache Parquet, and Apache DataFusion. The 3.x line adds native object storage support (S3, GCS, Azure Blob Storage), an in-memory columnar query engine, support for both SQL and InfluxQL, and a cloud-native design aimed at horizontal scalability. InfluxDB Cloud is offered in three deployment options - Serverless, Dedicated, and Clustered - covering needs from startups to large enterprises. An embedded Python processing engine lets custom analytics and machine learning pipelines run directly against time series data, improving productivity for data scientists and engineers. Combined with Telegraf agents for data collection and Kapacitor for ETL and alerting, the ecosystem provides an end-to-end time series data management solution.

Key Features

  • Time Series Optimization: Ultra-fast time series data processing through timestamp indexing and compression technologies
  • High-Performance Ingestion: Parallel data ingestion performance of millions of points per second
  • Cloud Native: Modern architecture based on object storage
  • SQL Support: Flexible query capabilities through standard SQL and InfluxQL
  • Automatic Downsampling: Retention policies and scheduled tasks automatically aggregate and expire aging data
  • Distributed Architecture: Horizontal scaling for large, clustered deployments

Pros and Cons

Pros

  • World-class ingestion and query performance specialized for time series data
  • Efficient storage operations through automatic data compression and retention policies
  • Low learning curve for existing database engineers thanks to standard SQL support
  • Comprehensive data collection with 200+ data source support through Telegraf agents
  • Zero operational overhead option through fully managed SaaS on InfluxDB Cloud
  • High memory efficiency and cross-platform support through Go language implementation

Cons

  • Not suitable for general-purpose workloads other than time series data
  • Limitations on complex JOIN operations and normalized relational data structures
  • Migration effort when moving existing 1.x/2.x deployments to InfluxDB 3.0, which changes or drops some features
  • Higher licensing and infrastructure costs for high-availability configurations
  • Flux has its own learning curve for advanced data transformations
  • Weaker consistency guarantees than fully ACID-compliant databases, since the design favors ingest and query performance

Code Examples

Installation and Basic Setup

# InfluxDB environment setup using Docker Compose
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  influxdb:
    image: influxdb:2.7-alpine
    container_name: influxdb
    restart: unless-stopped
    ports:
      - "8086:8086"
    volumes:
      - influxdb_data:/var/lib/influxdb2
      - influxdb_config:/etc/influxdb2
    environment:
      - DOCKER_INFLUXDB_INIT_MODE=setup
      - DOCKER_INFLUXDB_INIT_USERNAME=admin
      - DOCKER_INFLUXDB_INIT_PASSWORD=adminpassword
      - DOCKER_INFLUXDB_INIT_ORG=myorg
      - DOCKER_INFLUXDB_INIT_BUCKET=mybucket
      - DOCKER_INFLUXDB_INIT_RETENTION=30d
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=mytoken123456789
    
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=grafanaadmin
    depends_on:
      - influxdb

volumes:
  influxdb_data:
  influxdb_config:
  grafana_data:
EOF

# Start services
docker-compose up -d

# Verify operation
curl http://localhost:8086/health

# Native installation on Linux environment
# Add InfluxData official repository
wget -q https://repos.influxdata.com/influxdata-archive_compat.key
echo '393e8779c89ac8d958f81f942f9ad7fb82a25e133faddaf92e15b16e6ac9ce4c influxdata-archive_compat.key' | sha256sum -c && cat influxdata-archive_compat.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg > /dev/null
echo 'deb [signed-by=/etc/apt/trusted.gpg.d/influxdata-archive_compat.gpg] https://repos.influxdata.com/debian stable main' | sudo tee /etc/apt/sources.list.d/influxdata.list

# Install InfluxDB
sudo apt update
sudo apt install influxdb2

# Start service
sudo systemctl enable influxdb
sudo systemctl start influxdb
sudo systemctl status influxdb

# Initial setup (Web-based)
# Access http://localhost:8086

# CLI initial setup
influx setup \
  --org myorg \
  --bucket mybucket \
  --username admin \
  --password adminpassword \
  --retention 30d \
  --force

# Verify configuration
influx config list
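
Rather than reusing the admin token created during setup, applications usually get their own API token. A minimal sketch with the influx CLI (the description and scopes are illustrative; tokens can also be restricted to individual buckets with --read-bucket / --write-bucket and bucket IDs):

# Create an all-access API token for the organization
influx auth create --org myorg --all-access --description "application token"

# List existing tokens
influx auth list --org myorg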

Basic Data Writing and Reading

# Data writing using Line Protocol
curl -X POST "http://localhost:8086/api/v2/write?org=myorg&bucket=mybucket" \
  -H "Authorization: Token mytoken123456789" \
  -H "Content-Type: text/plain" \
  --data-raw '
temperature,host=server01,region=tokyo value=25.6 1642680000000000000
temperature,host=server02,region=osaka value=22.1 1642680000000000000
humidity,host=server01,region=tokyo value=65.2 1642680000000000000
humidity,host=server02,region=osaka value=58.7 1642680000000000000
cpu_usage,host=server01,region=tokyo,cpu=cpu0 value=85.2 1642680000000000000
cpu_usage,host=server01,region=tokyo,cpu=cpu1 value=72.4 1642680000000000000
memory_usage,host=server01,region=tokyo value=76.8,available=7680000000 1642680000000000000
disk_usage,host=server01,region=tokyo,device=/dev/sda1 value=45.2,free=550000000000 1642680000000000000
'
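
Each line in the payload above is InfluxDB line protocol: a measurement name, optional comma-separated tags, a space, one or more fields, and an optional timestamp (nanoseconds by default). Note that the sample timestamps are fixed values from January 2022, so adjust them (or the query ranges used later) when testing. A minimal sketch that writes a single point with a current second-precision timestamp via the precision parameter:

# Line protocol: measurement,tag_key=tag_value field_key=field_value timestamp
# Write one point using the current time in seconds (precision=s)
curl -X POST "http://localhost:8086/api/v2/write?org=myorg&bucket=mybucket&precision=s" \
  -H "Authorization: Token mytoken123456789" \
  -H "Content-Type: text/plain" \
  --data-raw "temperature,host=server01,region=tokyo value=25.6 $(date +%s)"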

# Bulk writing of multiple points
cat > sample_data.txt << 'EOF'
temperature,host=server01,region=tokyo value=26.1 1642683600000000000
temperature,host=server01,region=tokyo value=25.8 1642687200000000000
temperature,host=server01,region=tokyo value=24.9 1642690800000000000
temperature,host=server02,region=osaka value=23.2 1642683600000000000
temperature,host=server02,region=osaka value=22.8 1642687200000000000
temperature,host=server02,region=osaka value=21.5 1642690800000000000
EOF

curl -X POST "http://localhost:8086/api/v2/write?org=myorg&bucket=mybucket" \
  -H "Authorization: Token mytoken123456789" \
  -H "Content-Type: text/plain" \
  --data-binary @sample_data.txt

# Data reading using Flux query
curl -X POST "http://localhost:8086/api/v2/query?org=myorg" \
  -H "Authorization: Token mytoken123456789" \
  -H "Content-Type: application/vnd.flux" \
  --data 'from(bucket: "mybucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> filter(fn: (r) => r._field == "value")
  |> sort(columns: ["_time"], desc: false)'

# Query using InfluxDB CLI
influx query '
from(bucket: "mybucket")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "cpu_usage")
  |> filter(fn: (r) => r._field == "value")
  |> group(columns: ["host"])
  |> mean()' \
  --org myorg

# SQL query execution (InfluxDB 3.x and InfluxDB Cloud)
# Note: the v2 `influx query` CLI executes Flux only. SQL requires an
# InfluxDB 3.x engine (Core/Enterprise or Cloud Serverless/Dedicated/Clustered)
# and is issued through its API or client libraries. Example SQL using
# date_bin() for time bucketing:
#
#   SELECT
#     date_bin(INTERVAL '10 minutes', time) AS bucket,
#     host,
#     region,
#     AVG(value) AS avg_temp
#   FROM temperature
#   WHERE time >= now() - INTERVAL '1 hour'
#   GROUP BY bucket, host, region
#   ORDER BY bucket DESC;
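
If you run InfluxDB 3 Core rather than 2.x, SQL can also be sent over HTTP. A rough sketch, assuming a local InfluxDB 3 Core instance with its default port and SQL query endpoint, and a database named mybucket; verify the endpoint and parameters against your version's API reference, as the 3.x API differs from the v2 API used elsewhere on this page:

# Hypothetical InfluxDB 3 Core SQL query over HTTP (endpoint and port assumed)
curl -G "http://localhost:8181/api/v3/query_sql" \
  -H "Authorization: Bearer mytoken123456789" \
  --data-urlencode "db=mybucket" \
  --data-urlencode "q=SELECT host, region, AVG(value) AS avg_temp FROM temperature WHERE time >= now() - INTERVAL '1 hour' GROUP BY host, region"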

Advanced Queries and Data Transformation

# Advanced data analysis using Flux
curl -X POST "http://localhost:8086/api/v2/query?org=myorg" \
  -H "Authorization: Token mytoken123456789" \
  -H "Content-Type: application/vnd.flux" \
  --data '
// Temperature data statistical analysis
temperature_stats = from(bucket: "mybucket")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> filter(fn: (r) => r._field == "value")
  |> group(columns: ["host", "region"])
  |> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
  |> yield(name: "hourly_avg")

// Anomaly detection
temperature_anomalies = from(bucket: "mybucket")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> filter(fn: (r) => r._field == "value")
  |> group(columns: ["host"])
  |> aggregateWindow(every: 10m, fn: mean)
  |> map(fn: (r) => ({
      r with
      anomaly: if r._value > 30.0 or r._value < 10.0 then "high" else "normal"
    }))
  |> filter(fn: (r) => r.anomaly == "high")
  |> yield(name: "anomalies")

// Multi-metrics combined analysis
combined_metrics = from(bucket: "mybucket")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "cpu_usage" or r._measurement == "memory_usage")
  |> filter(fn: (r) => r._field == "value")
  |> group(columns: ["host", "_measurement"])
  |> aggregateWindow(every: 5m, fn: mean)
  |> pivot(rowKey: ["_time", "host"], columnKey: ["_measurement"], valueColumn: "_value")
  |> map(fn: (r) => ({
      r with
      performance_score: (100.0 - r.cpu_usage) * (100.0 - r.memory_usage) / 100.0
    }))
  |> yield(name: "performance")
'

# Time series prediction and trend analysis
curl -X POST "http://localhost:8086/api/v2/query?org=myorg" \
  -H "Authorization: Token mytoken123456789" \
  -H "Content-Type: application/vnd.flux" \
  --data '
// Rolling average and trend calculation
from(bucket: "mybucket")
  |> range(start: -7d)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> filter(fn: (r) => r._field == "value")
  |> group(columns: ["host"])
  |> aggregateWindow(every: 1h, fn: mean)
  |> movingAverage(n: 24)  // 24-hour moving average
  |> derivative(unit: 1h, nonNegative: false)
  |> map(fn: (r) => ({
      r with
      trend: if r._value > 0.1 then "increasing" 
             else if r._value < -0.1 then "decreasing" 
             else "stable"
    }))
'

# Data downsampling and aggregation
curl -X POST "http://localhost:8086/api/v2/query?org=myorg" \
  -H "Authorization: Token mytoken123456789" \
  -H "Content-Type: application/vnd.flux" \
  --data '
// Time-based statistical summary: daily max / min / mean / stddev per host
daily = from(bucket: "mybucket")
  |> range(start: -30d)
  |> filter(fn: (r) => r._measurement == "cpu_usage")
  |> filter(fn: (r) => r._field == "value")
  |> group(columns: ["host"])

daily_max = daily |> aggregateWindow(every: 1d, fn: max) |> set(key: "_field", value: "daily_max")
daily_min = daily |> aggregateWindow(every: 1d, fn: min) |> set(key: "_field", value: "daily_min")
daily_avg = daily |> aggregateWindow(every: 1d, fn: mean) |> set(key: "_field", value: "daily_avg")
daily_stddev = daily |> aggregateWindow(every: 1d, fn: stddev) |> set(key: "_field", value: "daily_stddev")

union(tables: [daily_max, daily_min, daily_avg, daily_stddev])
  |> sort(columns: ["_time", "_field"])
'

Data Collection Configuration with Telegraf

# Install Telegraf
sudo apt install telegraf

# Create Telegraf configuration file
sudo tee /etc/telegraf/telegraf.conf << 'EOF'
# Telegraf Configuration

# Global tags
[global_tags]
  environment = "production"
  datacenter = "tokyo"

# Agent configuration
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "0s"
  precision = ""
  hostname = ""
  omit_hostname = false

# Output plugins
[[outputs.influxdb_v2]]
  urls = ["http://localhost:8086"]
  token = "mytoken123456789"
  organization = "myorg"
  bucket = "mybucket"
  
# Input plugins
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  collect_cpu_time = false
  report_active = false

[[inputs.disk]]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

[[inputs.diskio]]

[[inputs.kernel]]

[[inputs.mem]]

[[inputs.processes]]

[[inputs.swap]]

[[inputs.system]]

[[inputs.net]]

[[inputs.netstat]]

# Docker metrics
[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"
  gather_services = false
  container_names = []
  source_tag = false
  container_name_include = []
  container_name_exclude = []
  timeout = "5s"
  perdevice = true
  total = false
  docker_label_include = []
  docker_label_exclude = []

# HTTP response monitoring
[[inputs.http_response]]
  urls = [
    "http://localhost:8086/health",
    "https://example.com",
    "https://api.github.com"
  ]
  response_timeout = "5s"
  method = "GET"
  follow_redirects = false

# PostgreSQL monitoring
[[inputs.postgresql]]
  address = "host=localhost user=postgres sslmode=disable"
  databases = ["postgres"]

# Nginx monitoring
[[inputs.nginx]]
  urls = ["http://localhost/nginx_status"]
  response_timeout = "5s"
EOF

# Start Telegraf service
sudo systemctl enable telegraf
sudo systemctl start telegraf
sudo systemctl status telegraf

# Test Telegraf configuration
telegraf --config /etc/telegraf/telegraf.conf --test
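
To exercise a single plugin in isolation, the --input-filter flag limits which inputs run during the test:

# Collect one round of CPU metrics only and print them to stdout
telegraf --config /etc/telegraf/telegraf.conf --input-filter cpu --test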

# Custom metrics collection script
cat > custom_metrics.sh << 'EOF'
#!/bin/bash

# Custom application metrics collection
while true; do
  # Application-specific metrics
  app_connections=$(netstat -an | grep :8080 | grep ESTABLISHED | wc -l)
  # '[m]yapp' keeps the grep process itself out of the results; sum+0 yields 0 when no process matches
  app_memory=$(ps aux | grep '[m]yapp' | awk '{sum+=$6} END {print sum+0}')
  app_cpu=$(ps aux | grep '[m]yapp' | awk '{sum+=$3} END {print sum+0}')
  
  # Send data to InfluxDB
  curl -X POST "http://localhost:8086/api/v2/write?org=myorg&bucket=mybucket" \
    -H "Authorization: Token mytoken123456789" \
    -H "Content-Type: text/plain" \
    --data "myapp,host=$HOSTNAME connections=$app_connections,memory=$app_memory,cpu=$app_cpu $(date +%s)000000000"
  
  sleep 30
done
EOF

chmod +x custom_metrics.sh
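
To keep the collector running across reboots, one option is a small systemd unit. A sketch assuming the script has been copied to /opt/custom_metrics/custom_metrics.sh (the path, unit name, and user are illustrative):

# Create a systemd service for the custom metrics script
sudo tee /etc/systemd/system/custom-metrics.service << 'EOF'
[Unit]
Description=Custom application metrics collector for InfluxDB
After=network-online.target influxdb.service

[Service]
ExecStart=/opt/custom_metrics/custom_metrics.sh
Restart=on-failure
User=telegraf

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now custom-metrics.service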

Data Retention Policies and Downsampling

# Bucket and data retention policy management
# Create short-term data bucket (1 day retention)
influx bucket create \
  --name realtime \
  --org myorg \
  --retention 24h

# Create medium-term data bucket (30 days retention)
influx bucket create \
  --name daily \
  --org myorg \
  --retention 720h

# Create long-term data bucket (1 year retention)
influx bucket create \
  --name monthly \
  --org myorg \
  --retention 8760h

# Verify data retention policies
influx bucket list --org myorg

# Automatic downsampling configuration with Tasks
cat > daily_downsampling.flux << 'EOF'
option task = {name: "Daily Downsampling", every: 1h}

from(bucket: "realtime")
  |> range(start: -2h, stop: -1h)
  |> filter(fn: (r) => r._measurement == "temperature" or r._measurement == "humidity")
  |> aggregateWindow(every: 1h, fn: mean)
  |> to(bucket: "daily", org: "myorg")
EOF

influx task create --org myorg --file daily_downsampling.flux

# Create monthly aggregation task
cat > monthly_aggregation.flux << 'EOF'
option task = {name: "Monthly Aggregation", cron: "0 0 1 * *"}  // Run on the 1st of each month

from(bucket: "daily")
  |> range(start: -30d)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> aggregateWindow(every: 1d, fn: mean)
  |> map(fn: (r) => ({r with _measurement: "temperature_daily_avg"}))
  |> to(bucket: "monthly", org: "myorg")
EOF

influx task create --org myorg --file monthly_aggregation.flux

# List tasks
influx task list --org myorg

# Check task logs
influx task log list --task-id <task-id> --org myorg

# Manual data downsampling execution example
influx query '
from(bucket: "mybucket")
  |> range(start: -7d)
  |> filter(fn: (r) => r._measurement == "cpu_usage")
  |> aggregateWindow(every: 1h, fn: mean)
  |> to(bucket: "daily", org: "myorg")
' --org myorg
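
Retention policies only expire data by age for a whole bucket. To remove specific series or time ranges explicitly, the influx delete command (backed by the /api/v2/delete endpoint) can be used; the time window below is illustrative:

# Delete temperature data for one host within a fixed time window
influx delete \
  --bucket mybucket \
  --org myorg \
  --start 2022-01-01T00:00:00Z \
  --stop 2022-01-31T23:59:59Z \
  --predicate '_measurement="temperature" AND host="server01"'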

Performance Optimization and Monitoring

# InfluxDB 2.x server configuration tuning
# (option names follow the InfluxDB 2.x configuration reference; adjust the
#  values to your workload and restart the service afterwards)
sudo tee -a /etc/influxdb/config.toml << 'EOF'
# HTTP API
http-bind-address = ":8086"

# Storage engine
storage-wal-fsync-delay = "0s"
storage-cache-max-memory-size = 1073741824
storage-cache-snapshot-memory-size = 26214400

# Query engine
query-concurrency = 10
query-queue-size = 50

# Logging
log-level = "info"
EOF

sudo systemctl restart influxdb

# System resource monitoring
# InfluxDB 2.x exposes its internal metrics in Prometheus format on /metrics
curl -s http://localhost:8086/metrics | grep -E 'http_api_requests_total|go_memstats_alloc_bytes' | head -20

# Query performance analysis with the Flux profiler
influx query '
import "profiler"
option profiler.enabledProfilers = ["query", "operator"]

from(bucket: "mybucket")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> group(columns: ["host"])
  |> mean()
' --org myorg

# Bucket statistics information
influx bucket list --org myorg --json | jq '.[] | {name: .name, retentionRules: .retentionRules}'

# Series cardinality (the main driver of index size and memory usage)
influx query '
import "influxdata/influxdb"

influxdb.cardinality(bucket: "mybucket", start: -30d)
' --org myorg

# Disk usage monitoring
du -sh /var/lib/influxdb2/
df -h /var/lib/influxdb2/

# Process monitoring
ps aux | grep influxd
netstat -tlnp | grep 8086

# Log monitoring
sudo journalctl -u influxdb -f
tail -f /var/log/influxdb/influxd.log
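
Monitoring pairs naturally with regular backups; the influx CLI can back up all data and metadata and restore it later (the paths below are illustrative):

# Back up all buckets and metadata to a dated directory
influx backup /var/backups/influxdb/$(date +%F) -t mytoken123456789

# Restore from a previous backup directory
influx restore /var/backups/influxdb/2025-01-01 -t mytoken123456789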

Application Integration Examples
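
The example below uses the official influxdb-client package for the InfluxDB 2.x API; install it before running the script:

# Install the InfluxDB 2.x Python client library
pip install influxdb-client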

# Python InfluxDB client
from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import SYNCHRONOUS
from datetime import datetime, timezone
import time
import random

class InfluxDBManager:
    def __init__(self, url, token, org, bucket):
        """Initialize InfluxDB client"""
        self.client = InfluxDBClient(
            url=url,
            token=token,
            org=org,
            timeout=30000,
            enable_gzip=True
        )
        self.org = org
        self.bucket = bucket
        self.write_api = self.client.write_api(write_options=SYNCHRONOUS)
        self.query_api = self.client.query_api()
    
    def write_point(self, measurement, tags, fields, timestamp=None):
        """Write single point"""
        try:
            point = Point(measurement)
            
            # Add tags
            for key, value in tags.items():
                point = point.tag(key, value)
            
            # Add fields
            for key, value in fields.items():
                point = point.field(key, value)
            
            # Set timestamp
            if timestamp:
                point = point.time(timestamp, WritePrecision.NS)
            else:
                point = point.time(datetime.now(timezone.utc), WritePrecision.NS)
            
            self.write_api.write(bucket=self.bucket, org=self.org, record=point)
            return True
            
        except Exception as e:
            print(f"Write error: {e}")
            return False
    
    def write_batch(self, points_data):
        """Batch write"""
        try:
            points = []
            for data in points_data:
                point = Point(data['measurement'])
                
                for key, value in data['tags'].items():
                    point = point.tag(key, value)
                
                for key, value in data['fields'].items():
                    point = point.field(key, value)
                
                timestamp = data.get('timestamp', datetime.now(timezone.utc))
                point = point.time(timestamp, WritePrecision.NS)
                points.append(point)
            
            self.write_api.write(bucket=self.bucket, org=self.org, record=points)
            print(f"Batch write completed: {len(points)} points")
            return True
            
        except Exception as e:
            print(f"Batch write error: {e}")
            return False
    
    def query_flux(self, flux_query):
        """Execute Flux query"""
        try:
            result = self.query_api.query(org=self.org, query=flux_query)
            
            data = []
            for table in result:
                for record in table.records:
                    data.append({
                        'time': record.get_time(),
                        'measurement': record.get_measurement(),
                        'field': record.get_field(),
                        'value': record.get_value(),
                        'tags': {k: v for k, v in record.values.items() 
                                if k not in ['_time', '_measurement', '_field', '_value']}
                    })
            
            return data
            
        except Exception as e:
            print(f"Query error: {e}")
            return None
    
    def get_latest_values(self, measurement, time_range="-1h"):
        """Get latest values"""
        flux_query = f'''
        from(bucket: "{self.bucket}")
        |> range(start: {time_range})
        |> filter(fn: (r) => r._measurement == "{measurement}")
        |> last()
        '''
        return self.query_flux(flux_query)
    
    def get_aggregated_data(self, measurement, window="1h", func="mean", time_range="-24h"):
        """Get aggregated data"""
        flux_query = f'''
        from(bucket: "{self.bucket}")
        |> range(start: {time_range})
        |> filter(fn: (r) => r._measurement == "{measurement}")
        |> aggregateWindow(every: {window}, fn: {func})
        |> yield(name: "{func}")
        '''
        return self.query_flux(flux_query)
    
    def close(self):
        """Close connection"""
        self.client.close()

# Usage example and monitoring application
def generate_sample_metrics(influx_manager):
    """Generate sample metrics"""
    hosts = ['server01', 'server02', 'server03']
    regions = ['tokyo', 'osaka', 'nagoya']
    
    while True:
        try:
            # Generate CPU usage data
            cpu_points = []
            for host in hosts:
                for cpu_id in range(4):  # Assume 4 cores
                    cpu_points.append({
                        'measurement': 'cpu_usage',
                        'tags': {
                            'host': host,
                            'region': random.choice(regions),
                            'cpu': f'cpu{cpu_id}'
                        },
                        'fields': {
                            'usage_percent': random.uniform(10, 90),
                            'user': random.uniform(5, 40),
                            'system': random.uniform(2, 20),
                            'idle': random.uniform(20, 80)
                        }
                    })
            
            # Generate memory usage data
            memory_points = []
            for host in hosts:
                total_memory = 16 * 1024 * 1024 * 1024  # 16GB
                used = random.uniform(0.3, 0.8) * total_memory
                memory_points.append({
                    'measurement': 'memory_usage',
                    'tags': {
                        'host': host,
                        'region': random.choice(regions)
                    },
                    'fields': {
                        'used_bytes': used,
                        'available_bytes': total_memory - used,
                        'used_percent': (used / total_memory) * 100,
                        'cached_bytes': random.uniform(0.1, 0.3) * total_memory
                    }
                })
            
            # Generate temperature data
            temperature_points = []
            for host in hosts:
                celsius = random.uniform(30, 75)
                temperature_points.append({
                    'measurement': 'temperature',
                    'tags': {
                        'host': host,
                        'region': random.choice(regions),
                        'sensor': 'cpu'
                    },
                    'fields': {
                        'celsius': celsius,
                        # Fahrenheit derived from the same reading so both fields stay consistent
                        'fahrenheit': celsius * 9.0 / 5.0 + 32.0
                    }
                })
            
            # Batch write all points
            all_points = cpu_points + memory_points + temperature_points
            influx_manager.write_batch(all_points)
            
            print(f"Metrics sent: {len(all_points)} points - {datetime.now()}")
            time.sleep(10)  # 10 second interval
            
        except KeyboardInterrupt:
            print("\nStopping metrics generation")
            break
        except Exception as e:
            print(f"Metrics generation error: {e}")
            time.sleep(5)

if __name__ == "__main__":
    # InfluxDB connection configuration
    influx_config = {
        'url': 'http://localhost:8086',
        'token': 'mytoken123456789',
        'org': 'myorg',
        'bucket': 'mybucket'
    }
    
    # Initialize InfluxDB manager
    influx_manager = InfluxDBManager(**influx_config)
    
    try:
        # Get latest CPU usage
        latest_cpu = influx_manager.get_latest_values('cpu_usage')
        print(f"Latest CPU usage: {latest_cpu}")
        
        # Get average temperature for past 24 hours
        avg_temperature = influx_manager.get_aggregated_data(
            'temperature', 
            window='1h', 
            func='mean', 
            time_range='-24h'
        )
        print(f"Average temperature data count: {len(avg_temperature) if avg_temperature else 0}")
        
        # Start sample metrics generation
        print("Starting sample metrics generation (Ctrl+C to stop)")
        generate_sample_metrics(influx_manager)
        
    except Exception as e:
        print(f"Error: {e}")
    
    finally:
        influx_manager.close()