Loguru
The most popular third-party logging library for Python (15,000+ GitHub stars). Its simple, intuitive API and rich pre-configured features enable complex logging setups with minimal code, including colorful console output and detailed stack traces.
Overview
Loguru is a logging library for Python built around the idea of making logging "stupidly simple". It eliminates the complex configuration required by Python's standard library logging module and works out of the box. Automatic JSON serialization, structured logging, colored output, automatic rotation, exception tracing, and asynchronous logging are all built in, so developers can produce high-quality logs immediately.
Details
Loguru 0.7.2 is a widely adopted logging library in the Python ecosystem as of 2025. It addresses the configuration complexity, handler management overhead, and formatter redundancy of the standard library logging module, offering an "import it and use it" development experience. With automatic JSON serialization, structured data support, rich exception tracing, and automatic file rotation, it covers modern observability requirements.
Key Features
- Zero-Config Ready: Start outputting rich logs immediately after import without any configuration
- Automatic Structured Logging: Automatic JSON output and intuitive structured data support
- Sink-based Architecture: Flexible support for files, console, network, and custom output destinations
- Rich Exception Handling: Automatic acquisition of detailed error information through backtrace and diagnose
- Smart Rotation: Automatic file management by size, time, and retention policies
- Performance Optimization: Asynchronous logging with enqueue=True and lazy evaluation support (several of these features are combined in the sketch below)
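A minimal sketch combining several of the features above (the file name and bound fields are illustrative):
from loguru import logger
# JSON-serialized file sink with rotation, retention, and async writes
logger.add("app.jsonl", serialize=True, rotation="50 MB", retention="14 days", enqueue=True)
# Structured context via bind()
logger.bind(user_id="u-42").info("Feature tour complete")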
Pros and Cons
Pros
- Dramatically reduces configuration effort compared to the standard library logging module
- A full-featured logging system is available immediately with a single import
- Greatly improves debugging efficiency through backtrace and diagnose output
- Integrates readily with modern observability tools (ELK Stack, Grafana, etc.) through automatic JSON output
- Intuitive addition of context information via the bind() method
- High-throughput bulk log processing via asynchronous logging and a background worker thread
Cons
- Adds an external dependency, since it is not part of the standard library (pip install loguru)
- Migration costs for existing codebases built on the standard logging module
- Limited direct compatibility with custom handlers from the standard logging ecosystem
- Security risk when diagnose=True exposes variable values in production (see the sketch after this list)
- Potentially overkill for extremely simple logging needs
- May conflict with enterprise policies that favor the standard library
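A minimal sketch of a production-safe configuration for the diagnose caveat above (the file name is illustrative):
from loguru import logger
logger.remove()
# backtrace=True keeps extended tracebacks; diagnose=False avoids leaking
# variable values (credentials, PII) into production logs
logger.add("prod.log", backtrace=True, diagnose=False, level="INFO")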
Code Examples
Installation and Basic Setup
# Install Loguru
pip install loguru
# For development environment (including optional dependencies)
pip install "loguru[dev]"
# Simplest logging start - no configuration needed!
from loguru import logger
logger.debug("That's it, beautiful and simple logging!")
logger.info("Hello, World!")
logger.warning("Watch out!")
logger.error("Something went wrong")
logger.critical("This is critical!")
Basic Logging Operations (Levels, Formatting)
import sys
from loguru import logger
# Customize default handler
logger.remove() # Remove all existing handlers
logger.add(sys.stderr, format="{time} - {level} - {message}")
# Add file output
logger.add("app.log", level="INFO", rotation="500 MB")
# Level-specific log files
logger.add("debug.log", level="DEBUG", format="{time:YYYY-MM-DD at HH:mm:ss} | {level} | {message}")
logger.add("error.log", level="ERROR", retention="10 days", compression="zip")
# Emit logs at each level
logger.debug("Detailed debugging information")
logger.info("General information about application progress")
logger.warning("Something unexpected happened")
logger.error("An error occurred but application continues")
logger.critical("A critical error occurred")
# Support for modern format notation (braces)
logger.info("Processing user {} with ID {}", "john_doe", 12345)
logger.info("Python version: {version}, feature: {feature}", version=3.9, feature="f-strings")
Advanced Configuration and Customization (Sinks, Filters)
from loguru import logger
import sys
# Configure multiple sinks
logger.remove()
logger.add(
    "application.log",
    format="{time:YYYY-MM-DD HH:mm:ss} | {level} | {name}:{function}:{line} | {message}",
    level="INFO",
    rotation="12:00",     # Rotate daily at noon
    retention="30 days",  # Keep for 30 days
    compression="zip",    # Compress rotated files
    enqueue=True          # Asynchronous processing
)
# Error-specific log file
logger.add(
    "errors.log",
    level="ERROR",
    format="{time} | {level} | {extra[user_id]} | {message}",  # requires logger.bind(user_id=...)
    filter=lambda record: record["level"].no >= 40  # ERROR (no=40) and above only
)
# Custom filter function
def important_only(record):
    return "important" in record["extra"]
logger.add("important.log", filter=important_only)
# Conditional logging
logger.add(
    sys.stdout,
    format="<green>{time}</green> | <level>{level}</level> | <blue>{message}</blue>",
    level="DEBUG",
    filter=lambda record: record["extra"].get("environment") == "development"
)
# Custom sink function
def database_sink(message):
    record = message.record
    # Persist the record to a database here (placeholder)
    print(f"Saving to DB: {record['level']} - {record['message']}")
logger.add(database_sink, level="WARNING")
Structured Logging and Modern Observability Support
from loguru import logger
import json
from datetime import datetime, timezone
# Structured logging configuration in JSON format
logger.add("structured.log", serialize=True, level="INFO")
# Add context information using bind()
context_logger = logger.bind(
    user_id="user_123",
    session_id="sess_456",
    service="web-api"
)
context_logger.info("User login successful")
context_logger.bind(action="purchase", order_id=789).info("Order completed")
# Temporary context with contextualize()
with logger.contextualize(request_id="req_001"):
    logger.info("Processing request")
    # Multiple processing steps...
    logger.info("Request completed")
# Add global fields with patch()
logger = logger.patch(lambda record: record["extra"].update(
    timestamp=datetime.now(timezone.utc).isoformat(),
    hostname="web-server-01",
    version="1.2.3"
))
# Custom serializer for objects json.dumps cannot handle by default
def custom_serializer(obj):
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"Object of type {type(obj)} is not JSON serializable")
# Structured logging for the ELK Stack. A callable passed as format= must
# return a format *template*, so raw JSON (full of braces) cannot be returned
# directly; instead, store the serialized record in extra via patch() and
# reference it from the format string.
def elk_stack_serialize(record):
    return json.dumps({
        "@timestamp": record["time"].isoformat(),
        "level": record["level"].name,
        "logger": record["name"],
        "message": record["message"],
        "extra": record["extra"],
        "function": record["function"],
        "line": record["line"]
    }, default=custom_serializer)
logger = logger.patch(lambda record: record["extra"].update(serialized=elk_stack_serialize(record)))
logger.add("elk-logs.jsonl", format="{extra[serialized]}", level="INFO")
Error Handling and Performance Optimization
from loguru import logger
# High-performance asynchronous logging configuration
logger.add(
    "performance.log",
    enqueue=True,  # Asynchronous processing via an internal queue
    level="INFO"
)
# Exception handling and backtrace
logger.add("debug.log", backtrace=True, diagnose=True)
def divide_numbers(a, b):
    try:
        result = a / b
        logger.info("Division successful: {} / {} = {}", a, b, result)
        return result
    except ZeroDivisionError:
        logger.exception("Division by zero error occurred")
        raise
    except Exception as e:
        logger.error("Unexpected error in division: {}", str(e))
        raise
# Performance optimization with lazy evaluation
def expensive_operation():
    # Heavy computation goes here
    return "expensive_result"
# expensive_operation() is executed only when DEBUG level is enabled
logger.opt(lazy=True).debug("Debug info: {}", expensive_operation)
# Utilizing opt() method
logger.opt(colors=True).info("Success: <green>Operation completed</green>")
logger.opt(raw=True).info("Raw message without formatting\n")
logger.opt(depth=1).info("Attributed to the caller's stack frame")
logger.opt(exception=True).info("Include exception traceback")  # meaningful inside an except block
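# Minimal sketch of a wrapper using depth so the entry is attributed to the
# caller's location rather than the helper's (log_event is an illustrative name)
def log_event(message):
    logger.opt(depth=1).info(message)  # {name}:{function}:{line} resolve to the call site
def emit_from_here():
    log_event("Handled")  # reported as coming from emit_from_here()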
# Logging with retry functionality
import asyncio
async def retry_with_logging(operation, max_retries=3):
    for attempt in range(max_retries):
        try:
            logger.bind(attempt=attempt + 1).info("Attempting operation")
            result = await operation()
            logger.success("Operation succeeded on attempt {}", attempt + 1)
            return result
        except Exception as e:
            logger.bind(attempt=attempt + 1).warning(
                "Operation failed: {} (attempt {}/{})",
                str(e), attempt + 1, max_retries
            )
            if attempt == max_retries - 1:
                logger.error("Operation failed after {} attempts", max_retries)
                raise
            await asyncio.sleep(2 ** attempt)  # Exponential backoff without blocking the event loop
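# Brief usage sketch for the retry helper (flaky_call is an illustrative coroutine)
async def flaky_call():
    return "ok"
asyncio.run(retry_with_logging(flaky_call))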
Framework Integration and Practical Examples
from loguru import logger
import sys
# FastAPI integration example
def setup_logging_for_fastapi():
    """Loguru setup for FastAPI"""
    # Run once at FastAPI startup
    logger.remove()
    # Console sink (development-oriented)
    logger.add(
        sys.stdout,
        format="<green>{time:YYYY-MM-DD HH:mm:ss}</green> | "
               "<level>{level: <8}</level> | "
               "<cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> | "
               "{message}",
        level="DEBUG"
    )
    # File sink (production-oriented)
    logger.add(
        "logs/app.log",
        format="{time:YYYY-MM-DD HH:mm:ss} | {level} | {name}:{function}:{line} | {message}",
        level="INFO",
        rotation="100 MB",
        retention="30 days",
        compression="zip",
        enqueue=True
    )
# Django integration example
def setup_logging_for_django():
    """Loguru setup for Django"""
    import logging
    from django.conf import settings
    logger.remove()
    # Route Django's standard-logging records into Loguru
    class DjangoLoguruInterceptHandler(logging.Handler):
        def emit(self, record):
            logger_opt = logger.opt(depth=6, exception=record.exc_info)
            logger_opt.log(record.levelname, record.getMessage())
    # force=True replaces any handlers Django has already configured
    logging.basicConfig(handlers=[DjangoLoguruInterceptHandler()], level=0, force=True)
    # Loguru configuration
    logger.add(
        settings.BASE_DIR / "logs" / "django.log",
        level="INFO",
        format="{time} | {level} | {name}:{function}:{line} | {message}",
        rotation="50 MB",
        serialize=True
    )
# Flask integration example
def setup_logging_for_flask(app):
    """Loguru setup for Flask"""
    import logging
    # Intercept Flask's logger
    class InterceptHandler(logging.Handler):
        def emit(self, record):
            logger_opt = logger.opt(depth=6, exception=record.exc_info)
            logger_opt.log(record.levelname, record.getMessage())
    app.logger.addHandler(InterceptHandler())
    # Also capture Werkzeug logs
    logging.getLogger("werkzeug").addHandler(InterceptHandler())
    logger.add(
        "flask_app.log",
        level="INFO",
        format="{time} | {level} | {extra[ip]} | {message}",
        filter=lambda record: "ip" in record["extra"]  # only records with a bound ip
    )
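# Usage sketch for the ip-filtered sink above: bind the client address per
# request (the route and helper are illustrative)
def register_routes(app):
    from flask import request
    @app.route("/ping")
    def ping():
        logger.bind(ip=request.remote_addr).info("Ping received")
        return "pong"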
# Log parsing and analysis
def parse_log_files():
    """Log file parsing example"""
    from datetime import datetime
    # Pattern with named groups matching the "{time} | {level} | {message}" format
    pattern = r"(?P<time>.*?) \| (?P<level>.*?) \| (?P<message>.*)"
    # Custom caster for the captured time group
    def parse_time(time_str):
        return datetime.fromisoformat(time_str.replace("Z", "+00:00"))
    caster_dict = dict(time=parse_time, level=str)
    # Iterate over matching records in the file
    for groups in logger.parse("application.log", pattern, cast=caster_dict):
        if groups["level"] == "ERROR":
            logger.warning("Found error log: {}", groups["message"])
# Usage example
if __name__ == "__main__":
    # Basic usage
    setup_logging_for_fastapi()
    logger.info("Application started")
    # Context-aware logging
    with logger.contextualize(user_id="user_001"):
        logger.info("User operation started")
        divide_numbers(10, 2)
        logger.info("User operation completed")
    logger.success("Application setup completed")
Custom Levels and Advanced Features
from loguru import logger
# Define custom log levels
logger.level("AUDIT", no=35, color="<yellow>", icon="📋")
logger.level("BUSINESS", no=25, color="<blue>", icon="💼")
# Use custom levels
logger.log("AUDIT", "User action recorded")
logger.log("BUSINESS", "Business metric updated")
# Progress bar-style logging
import time
def log_progress_inline():
    logger.opt(raw=True).debug("Progress: ")  # raw=True suppresses formatting and the newline
    for i in range(5):
        logger.opt(raw=True).debug(".")
        time.sleep(0.5)
    logger.opt(raw=True).debug(" Done!\n")
# Notification integration (Apprise usage example)
try:
    import apprise
    # Discord notification setup
    WEBHOOK_ID = "123456790"
    WEBHOOK_TOKEN = "abc123def456"
    notifier = apprise.Apprise()
    notifier.add(f"discord://{WEBHOOK_ID}/{WEBHOOK_TOKEN}")
    # Automatic notification on errors
    logger.add(
        notifier.notify,
        level="ERROR",
        filter=lambda record: "apprise" not in record["name"]  # avoid notification feedback loops
    )
except ImportError:
    logger.warning("Apprise not available, notifications disabled")
# Log routing
def setup_log_routing():
    """Context-based log routing"""
    # Service-specific log files
    logger.add(
        "service_a.log",
        filter=lambda record: record["extra"].get("service") == "A"
    )
    logger.add(
        "service_b.log",
        filter=lambda record: record["extra"].get("service") == "B"
    )
    # Usage example
    logger_a = logger.bind(service="A")
    logger_b = logger.bind(service="B")
    logger_a.info("Service A operation")
    logger_b.info("Service B operation")
# Memory-efficient log rotation
logger.add(
    "memory_efficient.log",
    rotation="10 MB",
    retention=5,       # Keep at most 5 rotated files
    compression="gz",
    enqueue=True,
    catch=True         # Catch sink errors instead of propagating them to the app
)