coloredlogs
Color output extension for Python's standard logging module. It provides a customizable formatter that uses ANSI escape sequences to color-code messages by log level, while remaining fully compatible with the standard logging API.
GitHub Overview
xolox/python-coloredlogs
Colored terminal output for Python's logging module
Overview
coloredlogs is a Python library that provides colored terminal output for Python's standard logging module. It uses ANSI escape sequences to render log messages in color and employs only standard colors to ensure compatibility with any UNIX terminal. The library features automatic detection of Windows 10 native ANSI support and falls back to the colorama library when necessary. With simple installation and immediate usability, it significantly enhances visibility and debugging efficiency in development environments.
Details
coloredlogs has established itself as a practical solution for improving log readability and debugging efficiency in Python development environments. The ColoredFormatter class inherits from logging.Formatter and implements colored log output through ANSI escape sequences. Intelligent terminal detection automatically disables ANSI sequences in non-interactive environments such as cron jobs, preventing escape-sequence garbage in captured output (for example, cron's email reports). On Windows 10, native ANSI support is detected automatically, reducing the dependency on colorama and keeping the library lightweight.
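Because ColoredFormatter is an ordinary logging.Formatter subclass, it can also be attached to a handler by hand instead of going through coloredlogs.install(). A minimal sketch; the 'demo' logger name and the format string are illustrative choices, not part of the library:
import logging
import coloredlogs

# Build a handler and give it the colored formatter directly
handler = logging.StreamHandler()
handler.setFormatter(coloredlogs.ColoredFormatter('%(asctime)s %(levelname)s %(name)s %(message)s'))

logger = logging.getLogger('demo')
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.info("Rendered through ColoredFormatter")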
Key Features
- ANSI Escape Sequences: Reliable cross-platform support using standard colors
- Intelligent Detection: Automatic color display control based on terminal environment
- Windows 10 Support: Automatic detection of native ANSI support with fallback
- Simple Setup: One-line configuration for immediate use
- Custom Formatter: Support for user-defined log formats
- System Log Integration: Integration capabilities with syslog and other system logs
Advantages and Disadvantages
Advantages
- Dramatic improvement in Python logging visibility with one-line setup
- Reliable cross-platform operation through ANSI escape sequences
- Environment adaptation via automatic interactive terminal detection
- Easy integration with other libraries thanks to full compatibility with the standard logging module
- Lighter footprint on Windows 10 thanks to native ANSI support (no colorama dependency needed)
- Operational safety through automatic color display disabling in cron jobs
Disadvantages
- Unnecessary processing overhead in environments where color display is not needed
- Colorama dependency required on older Windows systems
- ANSI sequences may need to be stripped before logs reach aggregation systems (see the sketch after this list)
- Fewer advanced layout features than specialized logging libraries
- Formatting overhead becomes noticeable under high-volume log output
- Limited flexibility in custom color configuration
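For the ANSI-stripping caveat above, one common workaround is to keep the colored formatter on the console handler only and give any handler that feeds a log aggregation system a plain formatter that removes escape sequences. A minimal sketch; the regex and the StripAnsiFormatter class are generic illustrations, not part of the coloredlogs API:
import logging
import re

ANSI_ESCAPE = re.compile(r'\x1b\[[0-9;]*m')  # matches SGR color sequences

class StripAnsiFormatter(logging.Formatter):
    def format(self, record):
        # Render the record normally, then drop any color codes
        return ANSI_ESCAPE.sub('', super().format(record))

file_handler = logging.FileHandler('aggregated.log')
file_handler.setFormatter(StripAnsiFormatter('%(asctime)s %(levelname)s %(name)s: %(message)s'))
logging.getLogger().addHandler(file_handler)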
Code Examples
Installation and Basic Setup
# Basic installation
pip install coloredlogs
# Installation with cron job support
pip install 'coloredlogs[cron]'
# Installation in Anaconda environment
conda install -c anaconda coloredlogs
# Virtual environment management
python -m venv venv
source venv/bin/activate # Linux/Mac
pip install coloredlogs
# Verification
python -c "import coloredlogs; print('coloredlogs installed successfully')"
One-line Configuration (Fastest Setup)
import coloredlogs
import logging
# One-line setup applied to root logger
coloredlogs.install(level='DEBUG')
# From here on, calls through the standard logging module produce colored output
logging.debug("Debug message - green by default")
logging.info("Info message - default terminal color")
logging.warning("Warning message - yellow")
logging.error("Error message - red")
logging.critical("Critical error - bold red")
# Real application example
def main():
    coloredlogs.install(level='INFO')
    logging.info("Application started")
    try:
        # Business logic
        process_data()
        logging.info("Data processing completed")
    except Exception as e:
        logging.error(f"Data processing error: {e}")
    logging.info("Application terminated")

def process_data():
    logging.debug("Data processing started")
    # Processing simulation
    import time
    time.sleep(1)
    logging.debug("Processing data...")
    time.sleep(1)

if __name__ == "__main__":
    main()
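Once install() has been called, the verbosity of the installed handler can be adjusted at runtime with helpers such as coloredlogs.set_level() and coloredlogs.increase_verbosity(), which is handy for mapping -v/-q command-line flags. A short sketch assuming those helpers behave as documented upstream:
import logging
import coloredlogs

coloredlogs.install(level='WARNING')
logging.info("Hidden: the handler threshold is WARNING")

coloredlogs.set_level('INFO')     # change the level used by the most recent install() call
logging.info("Should be visible once the threshold is INFO")

coloredlogs.increase_verbosity()  # lower the threshold one step (e.g. per -v flag)
coloredlogs.decrease_verbosity()  # raise it back one step (e.g. per -q flag)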
Custom Logger Configuration
import coloredlogs
import logging
# Create dedicated logger
logger = logging.getLogger('MyApp')
# Apply coloredlogs to custom logger
coloredlogs.install(
    level='DEBUG',
    logger=logger,
    fmt='%(asctime)s.%(msecs)03d %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s'
)

# Multiple logger management
class DatabaseManager:
    def __init__(self):
        self.logger = logging.getLogger('DatabaseManager')
        coloredlogs.install(
            level='INFO',
            logger=self.logger,
            fmt='[DB] %(asctime)s %(levelname)s: %(message)s'
        )

    def connect(self):
        self.logger.info("Connecting to database...")
        self.logger.debug("Setting connection parameters")
        self.logger.info("Database connection successful")

    def execute_query(self, query):
        self.logger.debug(f"Executing query: {query}")
        try:
            # Query execution simulation
            import time
            time.sleep(0.5)
            self.logger.info("Query execution completed")
            return "result"
        except Exception as e:
            self.logger.error(f"Query execution error: {e}")
            raise

class APIClient:
    def __init__(self):
        self.logger = logging.getLogger('APIClient')
        coloredlogs.install(
            level='WARNING',  # API client shows only WARNING and above
            logger=self.logger,
            fmt='[API] %(levelname)s: %(message)s'
        )

    def make_request(self, url):
        self.logger.debug(f"Sending request: {url}")  # Not displayed due to WARNING+ level
        self.logger.warning("API response time is slow")
        self.logger.error("API connection error")
# Usage example
db = DatabaseManager()
db.connect()
db.execute_query("SELECT * FROM users")
api = APIClient()
api.make_request("https://api.example.com/data")
Environment-specific Configuration and Format Customization
import coloredlogs
import logging
import os
def setup_logging():
    # Dynamic configuration via environment variables
    log_level = os.getenv('LOG_LEVEL', 'INFO').upper()

    # Environment-specific format settings
    if os.getenv('ENVIRONMENT') == 'production':
        # Production: detailed structured logs
        log_format = '%(asctime)s %(hostname)s %(name)s[%(process)d] %(levelname)s %(funcName)s:%(lineno)d %(message)s'
    elif os.getenv('ENVIRONMENT') == 'development':
        # Development: simple and readable format
        log_format = '%(levelname)s %(name)s: %(message)s'
    else:
        # Default: balanced format
        log_format = '%(asctime)s %(levelname)s %(name)s: %(message)s'

    # Custom color configuration
    coloredlogs.install(
        level=log_level,
        fmt=log_format,
        level_styles={
            'critical': {'bold': True, 'color': 'red'},
            'debug': {'color': 'green'},
            'error': {'color': 'red'},
            'info': {'color': 'blue'},
            'notice': {'color': 'magenta'},
            'spam': {'color': 'green', 'faint': True},
            'success': {'bold': True, 'color': 'green'},
            'verbose': {'color': 'blue'},
            'warning': {'color': 'yellow'}
        },
        field_styles={
            'asctime': {'color': 'green'},
            'hostname': {'color': 'magenta'},
            'levelname': {'bold': True, 'color': 'black'},
            'name': {'color': 'blue'},
            'programname': {'color': 'cyan'},
            'username': {'color': 'yellow'}
        }
    )
# Millisecond precision configuration
def setup_precise_logging():
    coloredlogs.install(
        level='DEBUG',
        milliseconds=True,  # Enable millisecond display
        fmt='%(asctime)s.%(msecs)03d %(levelname)s %(name)s: %(message)s'
    )
# Conditional color configuration
def setup_conditional_logging():
    # Disable color with NO_COLOR environment variable
    if os.getenv('NO_COLOR'):
        # Standard log configuration without colors
        logging.basicConfig(
            level='DEBUG',
            format='%(asctime)s %(levelname)s %(name)s: %(message)s'
        )
    else:
        # Colored log configuration
        coloredlogs.install(level='DEBUG')
# Multiple configuration integration example
class LoggingConfig:
    @staticmethod
    def configure():
        environment = os.getenv('ENVIRONMENT', 'development')
        if environment == 'production':
            LoggingConfig._setup_production()
        elif environment == 'testing':
            LoggingConfig._setup_testing()
        else:
            LoggingConfig._setup_development()

    @staticmethod
    def _setup_production():
        coloredlogs.install(
            level='WARNING',
            fmt='%(asctime)s %(hostname)s %(name)s[%(process)d] %(levelname)s %(message)s',
            milliseconds=True
        )

    @staticmethod
    def _setup_testing():
        coloredlogs.install(
            level='DEBUG',
            fmt='[TEST] %(levelname)s %(name)s: %(message)s'
        )

    @staticmethod
    def _setup_development():
        coloredlogs.install(
            level='DEBUG',
            fmt='%(levelname)s %(name)s.%(funcName)s:%(lineno)d - %(message)s'
        )
# Usage example
LoggingConfig.configure()
logger = logging.getLogger(__name__)
logger.info("Configuration completed")
logger.debug("Debug information")
logger.warning("Warning message")
logger.error("Error message")
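Instead of spelling out every style by hand as above, the module-level defaults can be copied and tweaked; coloredlogs exposes them as DEFAULT_LEVEL_STYLES and DEFAULT_FIELD_STYLES. A small sketch; only the overridden entries are choices made here:
import copy
import coloredlogs

# Start from the library defaults and change only what you need
level_styles = copy.deepcopy(coloredlogs.DEFAULT_LEVEL_STYLES)
level_styles['info'] = {'color': 'cyan'}
level_styles['warning'] = {'color': 'yellow', 'bold': True}

coloredlogs.install(level='DEBUG', level_styles=level_styles)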
File Output and System Log Integration
import coloredlogs
import logging
import logging.handlers
import os
from datetime import datetime
def setup_comprehensive_logging():
    # Main logger configuration
    logger = logging.getLogger('MyApp')
    logger.setLevel(logging.DEBUG)

    # Console output (with colors)
    coloredlogs.install(
        level='DEBUG',
        logger=logger,
        fmt='%(asctime)s %(levelname)s %(name)s: %(message)s'
    )

    # File output handler (without colors)
    log_dir = 'logs'
    if not os.path.exists(log_dir):
        os.makedirs(log_dir)

    # Date-based log files
    today = datetime.now().strftime('%Y-%m-%d')
    file_handler = logging.FileHandler(f'{log_dir}/app-{today}.log')
    file_handler.setLevel(logging.INFO)
    file_formatter = logging.Formatter(
        '%(asctime)s %(levelname)s %(name)s[%(process)d]: %(message)s'
    )
    file_handler.setFormatter(file_formatter)
    logger.addHandler(file_handler)

    # Error-specific log file
    error_handler = logging.FileHandler(f'{log_dir}/error-{today}.log')
    error_handler.setLevel(logging.ERROR)
    error_handler.setFormatter(file_formatter)
    logger.addHandler(error_handler)

    # Rotating file handler
    rotating_handler = logging.handlers.RotatingFileHandler(
        f'{log_dir}/app-rotating.log',
        maxBytes=1024*1024,  # 1MB
        backupCount=5
    )
    rotating_handler.setLevel(logging.INFO)
    rotating_handler.setFormatter(file_formatter)
    logger.addHandler(rotating_handler)

    return logger
# System log integration
def setup_system_logging():
    # Syslog handler configuration
    syslog_handler = logging.handlers.SysLogHandler(address='/dev/log')
    syslog_formatter = logging.Formatter(
        'MyApp[%(process)d]: %(levelname)s %(message)s'
    )
    syslog_handler.setFormatter(syslog_formatter)

    # Main logger configuration
    logger = logging.getLogger('MyApp')
    logger.addHandler(syslog_handler)

    # Color display for console
    coloredlogs.install(
        level='DEBUG',
        logger=logger,
        fmt='%(levelname)s %(name)s: %(message)s'
    )
    return logger
# HTTP server log integration
class WebAppLogging:
    def __init__(self):
        self.access_logger = logging.getLogger('access')
        self.app_logger = logging.getLogger('app')

        # Access log with structured format
        coloredlogs.install(
            level='INFO',
            logger=self.access_logger,
            fmt='[ACCESS] %(asctime)s %(message)s'
        )

        # Application log with detailed format
        coloredlogs.install(
            level='DEBUG',
            logger=self.app_logger,
            fmt='[APP] %(levelname)s %(funcName)s: %(message)s'
        )

    def log_request(self, method, path, status_code, response_time):
        self.access_logger.info(
            f"{method} {path} - {status_code} ({response_time}ms)"
        )

    def log_app_event(self, level, message):
        getattr(self.app_logger, level.lower())(message)
# JSON structured log output
import json
class StructuredLogger:
    def __init__(self, name):
        self.logger = logging.getLogger(name)

        # Console (colored, human-readable)
        coloredlogs.install(
            level='DEBUG',
            logger=self.logger,
            fmt='%(levelname)s %(name)s: %(message)s'
        )

        # File (JSON structured)
        json_handler = logging.FileHandler('logs/structured.jsonl')
        json_handler.setLevel(logging.INFO)
        json_handler.setFormatter(self.JSONFormatter())
        self.logger.addHandler(json_handler)

    class JSONFormatter(logging.Formatter):
        def format(self, record):
            log_data = {
                'timestamp': datetime.fromtimestamp(record.created).isoformat(),
                'level': record.levelname,
                'logger': record.name,
                'module': record.module,
                'function': record.funcName,
                'line': record.lineno,
                'message': record.getMessage()
            }
            if record.exc_info:
                log_data['exception'] = self.formatException(record.exc_info)
            return json.dumps(log_data, ensure_ascii=False)

    def info(self, message, **kwargs):
        if kwargs:
            message = f"{message} - {json.dumps(kwargs, ensure_ascii=False)}"
        self.logger.info(message)

    def error(self, message, **kwargs):
        if kwargs:
            message = f"{message} - {json.dumps(kwargs, ensure_ascii=False)}"
        self.logger.error(message)
# Usage example
if __name__ == "__main__":
    # Comprehensive log configuration
    main_logger = setup_comprehensive_logging()

    # Test log output
    main_logger.debug("Application initialization")
    main_logger.info("Server started")
    main_logger.warning("High memory usage detected")
    main_logger.error("Database connection error")

    # Web app log example
    web_app = WebAppLogging()
    web_app.log_request("GET", "/api/users", 200, 150)
    web_app.log_app_event("info", "User authentication successful")
    web_app.log_app_event("error", "API rate limit reached")

    # Structured log example
    struct_logger = StructuredLogger("StructuredApp")
    struct_logger.info("User login", user_id=12345, ip="192.168.1.100")
    struct_logger.error("Payment processing failed", order_id=67890, amount=1500, error_code="PAYMENT_DECLINED")
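As an alternative to wiring up SysLogHandler manually as above, coloredlogs ships a convenience module for system logging. A minimal sketch, assuming coloredlogs.syslog.enable_system_logging() is available in the installed version:
import logging
import coloredlogs
import coloredlogs.syslog

# Colored console output plus plain messages forwarded to the system log
coloredlogs.install(level='DEBUG')
coloredlogs.syslog.enable_system_logging(programname='MyApp')

logging.info("Written to the console (colored) and to syslog (plain)")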
Performance Optimization and Benchmarking
import coloredlogs
import logging
import time
import sys
from contextlib import contextmanager
def performance_comparison():
"""Performance measurement of coloredlogs"""
# Standard log configuration
standard_logger = logging.getLogger('standard')
standard_handler = logging.StreamHandler(sys.stdout)
standard_formatter = logging.Formatter('%(levelname)s: %(message)s')
standard_handler.setFormatter(standard_formatter)
standard_logger.addHandler(standard_handler)
standard_logger.setLevel(logging.INFO)
# coloredlogs configuration
colored_logger = logging.getLogger('colored')
coloredlogs.install(
level='INFO',
logger=colored_logger,
fmt='%(levelname)s: %(message)s'
)
# Benchmark execution
iterations = 10000
# Standard log benchmark
start_time = time.time()
for i in range(iterations):
if i % 1000 == 0:
standard_logger.info(f"Standard log message {i}")
standard_time = time.time() - start_time
# coloredlogs benchmark
start_time = time.time()
for i in range(iterations):
if i % 1000 == 0:
colored_logger.info(f"Colored log message {i}")
colored_time = time.time() - start_time
print(f"\n=== Performance Comparison Results ===")
print(f"Standard log: {standard_time:.3f}s")
print(f"coloredlogs: {colored_time:.3f}s")
print(f"Overhead: {((colored_time - standard_time) / standard_time * 100):.1f}%")
# Environment optimization configuration
class OptimizedLogging:
    @staticmethod
    def setup_for_production():
        """Production environment optimization"""
        # Minimal log level
        coloredlogs.install(
            level='WARNING',
            fmt='%(asctime)s %(levelname)s: %(message)s'
        )

    @staticmethod
    def setup_for_development():
        """Development environment optimization"""
        # Detailed logs (information over performance)
        coloredlogs.install(
            level='DEBUG',
            fmt='%(levelname)s %(name)s.%(funcName)s:%(lineno)d - %(message)s',
            milliseconds=True
        )

    @staticmethod
    def setup_for_testing():
        """Test environment optimization"""
        # Configuration for easy test result viewing
        coloredlogs.install(
            level='INFO',
            fmt='[TEST] %(levelname)s: %(message)s'
        )
# Conditional logging optimization
class ConditionalLogging:
    def __init__(self, logger_name, min_level='INFO'):
        self.logger = logging.getLogger(logger_name)
        self.min_level = getattr(logging, min_level.upper())
        coloredlogs.install(
            level=min_level,
            logger=self.logger
        )

    def debug_if(self, condition, message):
        """Output debug log only when condition is true"""
        if condition and self.logger.isEnabledFor(logging.DEBUG):
            self.logger.debug(message)

    def info_throttled(self, message, throttle_seconds=60):
        """Info output with throttling"""
        if not hasattr(self, '_last_info_time'):
            self._last_info_time = {}
        now = time.time()
        if message not in self._last_info_time or \
                now - self._last_info_time[message] > throttle_seconds:
            self.logger.info(message)
            self._last_info_time[message] = now

    @contextmanager
    def timing_context(self, operation_name):
        """Processing time measurement context"""
        start_time = time.time()
        self.logger.debug(f"{operation_name} started")
        try:
            yield
        finally:
            duration = time.time() - start_time
            self.logger.info(f"{operation_name} completed ({duration:.3f}s)")
# Memory-efficient log configuration
def setup_memory_efficient_logging():
"""Log configuration with minimal memory usage"""
# Limit buffer size
import io
class LimitedStringIO(io.StringIO):
def __init__(self, max_size=1024*1024): # 1MB limit
super().__init__()
self.max_size = max_size
def write(self, s):
if self.tell() + len(s) > self.max_size:
# Clear old data when buffer is full
self.seek(0)
self.truncate()
return super().write(s)
# Memory-efficient handler
memory_stream = LimitedStringIO()
memory_handler = logging.StreamHandler(memory_stream)
logger = logging.getLogger('memory_efficient')
logger.addHandler(memory_handler)
coloredlogs.install(
level='INFO',
logger=logger,
fmt='%(levelname)s: %(message)s'
)
return logger, memory_stream
# Bulk log processing optimization
class BulkLoggingOptimizer:
    def __init__(self, logger_name):
        self.logger = logging.getLogger(logger_name)
        self.pending_logs = []
        self.batch_size = 100
        coloredlogs.install(
            level='INFO',
            logger=self.logger,
            fmt='%(levelname)s: %(message)s'
        )

    def add_log(self, level, message):
        """Add log to batch"""
        self.pending_logs.append((level, message))
        if len(self.pending_logs) >= self.batch_size:
            self.flush()

    def flush(self):
        """Output accumulated logs in bulk"""
        if not self.pending_logs:
            return

        # Group and output by log level
        level_counts = {}
        for level, message in self.pending_logs:
            level_counts[level] = level_counts.get(level, 0) + 1
            getattr(self.logger, level.lower())(message)

        # Output statistics
        stats = ", ".join([f"{level}: {count}" for level, count in level_counts.items()])
        self.logger.info(f"Batch processing completed ({stats})")
        self.pending_logs.clear()
# Usage example and benchmark execution
if __name__ == "__main__":
print("=== coloredlogs Performance Test ===")
# Performance comparison
performance_comparison()
# Optimization configuration test
print("\n=== Optimization Configuration Test ===")
OptimizedLogging.setup_for_development()
dev_logger = logging.getLogger('dev_test')
dev_logger.debug("Development environment test message")
dev_logger.info("Information message")
dev_logger.warning("Warning message")
# Conditional logging test
print("\n=== Conditional Logging Test ===")
conditional = ConditionalLogging('conditional_test')
conditional.debug_if(True, "Displayed because condition is true")
conditional.debug_if(False, "Not displayed because condition is false")
# Throttling test
for i in range(5):
conditional.info_throttled("Throttling test message", throttle_seconds=2)
time.sleep(0.5)
# Timing measurement test
with conditional.timing_context("Sample processing"):
time.sleep(1)
# Memory efficiency test
print("\n=== Memory Efficiency Test ===")
memory_logger, memory_stream = setup_memory_efficient_logging()
for i in range(100):
memory_logger.info(f"Memory efficiency test message {i}")
print(f"Memory stream size: {memory_stream.tell()} bytes")
# Bulk processing test
print("\n=== Bulk Processing Test ===")
bulk_optimizer = BulkLoggingOptimizer('bulk_test')
for i in range(250):
level = 'info' if i % 2 == 0 else 'debug'
bulk_optimizer.add_log(level, f"Bulk message {i}")
bulk_optimizer.flush() # Output remaining logs
print("\ncoloredlogs performance test completed")