log/slog (Standard Library)

Structured logging package added to the Go standard library in Go 1.21. It provides production-ready, high-performance logging without external dependencies, and is the recommended choice for new projects thanks to its balance of features and simplicity.

logging, Go, structured logging, standard library, JSON, performance

Library

log/slog

Overview

log/slog is Go's standard-library package for structured logging, introduced in Go 1.21. Each log record consists of a message, a severity level, and key-value pairs as attributes. Designed with performance and flexibility in mind, slog offers built-in JSON and text handlers, context integration, and dynamic log-level control. As part of the standard library, it is the modern, recommended approach to logging in Go: it covers most use cases without third-party libraries while delivering excellent performance and rich structured output.

Details

As of 2025, log/slog has established itself as the default structured logging solution for Go applications. Built into the standard library since Go 1.21, it addresses the limitations of the traditional log package by providing the structured output that modern observability and log-analysis pipelines require. The package offers two built-in handlers (JSON and text), seamless context integration for distributed tracing, and performance optimizations including minimal allocations for common value types. With dynamic log-level control through LevelVar, attribute grouping, and the Handler interface for custom implementations, slog provides enterprise-grade logging functionality without external dependencies.

Key Features

  • Structured Logging: Key-value pairs as attributes with message and severity level
  • Built-in Handlers: JSONHandler for machine processing, TextHandler for human readability
  • Standard Library: No external dependencies, guaranteed compatibility and stability
  • Context Integration: Native support for context.Context for distributed tracing
  • Dynamic Log Levels: Runtime log level adjustment through LevelVar
  • Performance Optimized: Minimal allocations and efficient attribute handling

Advantages and Disadvantages

Advantages

  • Standard library integration provides long-term stability and zero external dependencies
  • Superior performance with minimal allocations and efficient structured data handling
  • Built-in JSON handler enables seamless integration with modern log aggregation systems
  • Context integration facilitates distributed tracing and request correlation
  • Dynamic log level control allows runtime configuration without application restart
  • Handler interface enables custom logging implementations while maintaining compatibility

Disadvantages

  • Limited to Go 1.21+ requirement excludes older Go versions from adoption
  • Fewer advanced features compared to established third-party libraries like logrus or zap
  • Built-in handlers may be insufficient for complex formatting or output requirements
  • Learning curve for teams accustomed to traditional printf-style logging
  • Structured logging paradigm requires adjustment in existing codebases
  • Custom handler implementation needed for specialized logging requirements

Code Examples

Installation and Basic Setup

// log/slog is part of the Go standard library (Go 1.21+).
// No installation required; just import it.
package main

import (
    "log/slog"
    "os"
)

func main() {
    // Basic logger with text handler (human-readable)
    logger := slog.New(slog.NewTextHandler(os.Stdout, nil))
    
    // Basic log output (Debug is suppressed at the default Info level)
    logger.Debug("Application starting", "version", "1.0.0")
    logger.Info("Server started", "port", 8080)
    logger.Warn("High memory usage detected", "usage", "85%")
    logger.Error("Database connection failed", "error", "timeout")
    
    // JSON handler for structured output
    jsonLogger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    jsonLogger.Info("User action", "user_id", 12345, "action", "login")
    
    // Set as default logger
    slog.SetDefault(logger)
    slog.Info("Using default logger")
}

Handler Configuration and Log Levels

package main

import (
    "context"
    "log/slog"
    "os"
    "time"
)

func setupLogging() {
    // Configure handler options
    opts := &slog.HandlerOptions{
        Level:     slog.LevelDebug,  // Minimum log level
        AddSource: true,             // Include source file information
        ReplaceAttr: func(groups []string, attr slog.Attr) slog.Attr {
            // Customize attribute formatting
            if attr.Key == slog.TimeKey {
                return slog.Attr{
                    Key:   "timestamp",
                    Value: slog.StringValue(attr.Value.Time().Format(time.RFC3339)),
                }
            }
            return attr
        },
    }
    
    // Create logger with custom options
    logger := slog.New(slog.NewJSONHandler(os.Stdout, opts))
    logger.Info("Logger with custom options configured")
    
    // Dynamic log level control
    var programLevel = new(slog.LevelVar) // Default: LevelInfo
    logger = slog.New(slog.NewJSONHandler(os.Stderr, &slog.HandlerOptions{
        Level: programLevel,
    }))
    
    // Change log level at runtime
    programLevel.Set(slog.LevelDebug)
    logger.Debug("Debug logging enabled")
    
    programLevel.Set(slog.LevelWarn)
    logger.Debug("This won't be printed")
    logger.Warn("Warning: only WARN and ERROR will be printed")
    
    // Test all log levels
    testLogLevels(logger)
}

func testLogLevels(logger *slog.Logger) {
    logger.Log(context.Background(), slog.LevelDebug, "Debug message with custom level")
    logger.Debug("Debug: Detailed information for debugging")
    logger.Info("Info: General information about application state")
    logger.Warn("Warn: Warning about potential issues")
    logger.Error("Error: Error occurred but application continues")
    
    // Custom log levels
    const LevelTrace = slog.Level(-8)
    const LevelFatal = slog.Level(12)
    
    logger.Log(context.Background(), LevelTrace, "Trace: Most detailed level")
    logger.Log(context.Background(), LevelFatal, "Fatal: Critical error")
}

func main() {
    setupLogging()
}

Context Integration and Distributed Tracing

package main

import (
    "context"
    "fmt"
    "log/slog"
    "os"
    "time"
)

type RequestIDKey struct{}

func withRequestID(ctx context.Context, requestID string) context.Context {
    return context.WithValue(ctx, RequestIDKey{}, requestID)
}

func getRequestID(ctx context.Context) string {
    if id, ok := ctx.Value(RequestIDKey{}).(string); ok {
        return id
    }
    return "unknown"
}

// Custom handler that extracts context information
type ContextHandler struct {
    handler slog.Handler
}

func NewContextHandler(handler slog.Handler) *ContextHandler {
    return &ContextHandler{handler: handler}
}

func (h *ContextHandler) Enabled(ctx context.Context, level slog.Level) bool {
    return h.handler.Enabled(ctx, level)
}

func (h *ContextHandler) Handle(ctx context.Context, record slog.Record) error {
    // Extract request ID from context and add as attribute
    if requestID := getRequestID(ctx); requestID != "unknown" {
        record.AddAttrs(slog.String("request_id", requestID))
    }
    
    // Add timestamp if not present
    if record.Time.IsZero() {
        record.Time = time.Now()
    }
    
    return h.handler.Handle(ctx, record)
}

func (h *ContextHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
    return &ContextHandler{handler: h.handler.WithAttrs(attrs)}
}

func (h *ContextHandler) WithGroup(name string) slog.Handler {
    return &ContextHandler{handler: h.handler.WithGroup(name)}
}

func processRequest(ctx context.Context, logger *slog.Logger) {
    // Context-aware logging
    logger.InfoContext(ctx, "Processing request started")
    
    // Simulate processing steps
    logger.DebugContext(ctx, "Validating input parameters")
    time.Sleep(50 * time.Millisecond)
    
    logger.InfoContext(ctx, "Accessing database", "table", "users", "query_time", "23ms")
    time.Sleep(100 * time.Millisecond)
    
    logger.WarnContext(ctx, "Slow query detected", "duration", "156ms", "threshold", "100ms")
    
    logger.InfoContext(ctx, "Request processing completed", "total_time", "179ms")
}

func main() {
    // Create base handler
    baseHandler := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
        Level: slog.LevelDebug,
    })
    
    // Wrap with context handler
    contextHandler := NewContextHandler(baseHandler)
    logger := slog.New(contextHandler)
    
    // Process a single request with a fixed request ID
    processRequest(withRequestID(context.Background(), "req_12345"), logger)
    
    // Process multiple requests
    for i := 1; i <= 3; i++ {
        requestCtx := withRequestID(context.Background(), fmt.Sprintf("req_%05d", i))
        processRequest(requestCtx, logger)
    }
}

Attributes and Grouping

package main

import (
    "context"
    "log/slog"
    "os"
    "time"
)

type User struct {
    ID    int
    Name  string
    Email string
}

func (u User) LogValue() slog.Value {
    // Custom LogValuer implementation for structured representation
    return slog.GroupValue(
        slog.Int("id", u.ID),
        slog.String("name", u.Name),
        slog.String("email", u.Email),
    )
}

func demonstrateAttributes() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    
    // Basic attributes
    logger.Info("User login",
        "user_id", 12345,
        "ip_address", "192.168.1.100",
        "user_agent", "Mozilla/5.0",
        "timestamp", time.Now(),
    )
    
    // Using slog.Attr for better performance
    logger.LogAttrs(context.Background(), slog.LevelInfo, "User registration",
        slog.Int("user_id", 12346),
        slog.String("email", "[email protected]"),
        slog.Bool("email_verified", false),
        slog.Duration("registration_time", 250*time.Millisecond),
    )
    
    // Grouping related attributes
    logger.Info("API request",
        slog.Group("request",
            slog.String("method", "POST"),
            slog.String("path", "/api/users"),
            slog.Int("status_code", 201),
        ),
        slog.Group("timing",
            slog.Duration("total", 150*time.Millisecond),
            slog.Duration("db_query", 45*time.Millisecond),
            slog.Duration("processing", 105*time.Millisecond),
        ),
    )
    
    // Using custom LogValuer
    user := User{ID: 12347, Name: "John Doe", Email: "[email protected]"}
    logger.Info("User profile updated", "user", user)
}

func demonstrateLoggerWith() {
    baseLogger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    
    // Create logger with common attributes
    requestLogger := baseLogger.With(
        "service", "user-service",
        "version", "1.2.3",
        "request_id", "req_789",
    )
    
    // All logs from this logger will include the common attributes
    requestLogger.Info("Processing user request")
    requestLogger.Debug("Validating request parameters")
    requestLogger.Error("Validation failed", "field", "email", "error", "invalid format")
    
    // Create grouped logger
    dbLogger := requestLogger.WithGroup("database")
    dbLogger.Info("Connecting to database", "host", "localhost", "port", 5432)
    dbLogger.Warn("Connection pool utilization high", "active", 95, "max", 100)
    
    // Nested groups
    queryLogger := dbLogger.WithGroup("query")
    queryLogger.Debug("Executing query",
        "sql", "SELECT * FROM users WHERE id = ?",
        "params", []int{12345},
        "duration", 23*time.Millisecond,
    )
}

func main() {
    demonstrateAttributes()
    demonstrateLoggerWith()
}

Custom Handlers and Advanced Usage

package main

import (
    "context"
    "fmt"
    "io"
    "log/slog"
    "os"
    "sync"
    "time"
)

// Custom handler that writes to multiple outputs
type MultiHandler struct {
    handlers []slog.Handler
    mu       sync.Mutex
}

func NewMultiHandler(handlers ...slog.Handler) *MultiHandler {
    return &MultiHandler{handlers: handlers}
}

func (h *MultiHandler) Enabled(ctx context.Context, level slog.Level) bool {
    for _, handler := range h.handlers {
        if handler.Enabled(ctx, level) {
            return true
        }
    }
    return false
}

func (h *MultiHandler) Handle(ctx context.Context, record slog.Record) error {
    h.mu.Lock()
    defer h.mu.Unlock()
    
    for _, handler := range h.handlers {
        if handler.Enabled(ctx, record.Level) {
            if err := handler.Handle(ctx, record); err != nil {
                return err
            }
        }
    }
    return nil
}

func (h *MultiHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
    newHandlers := make([]slog.Handler, len(h.handlers))
    for i, handler := range h.handlers {
        newHandlers[i] = handler.WithAttrs(attrs)
    }
    return &MultiHandler{handlers: newHandlers}
}

func (h *MultiHandler) WithGroup(name string) slog.Handler {
    newHandlers := make([]slog.Handler, len(h.handlers))
    for i, handler := range h.handlers {
        newHandlers[i] = handler.WithGroup(name)
    }
    return &MultiHandler{handlers: newHandlers}
}

// Custom colored text handler
type ColoredTextHandler struct {
    *slog.TextHandler
    writer io.Writer
}

func NewColoredTextHandler(w io.Writer, opts *slog.HandlerOptions) *ColoredTextHandler {
    return &ColoredTextHandler{
        TextHandler: slog.NewTextHandler(w, opts),
        writer:      w,
    }
}

func (h *ColoredTextHandler) Handle(ctx context.Context, record slog.Record) error {
    // Color codes for different log levels
    var color string
    switch record.Level {
    case slog.LevelDebug:
        color = "\033[36m" // Cyan
    case slog.LevelInfo:
        color = "\033[32m" // Green
    case slog.LevelWarn:
        color = "\033[33m" // Yellow
    case slog.LevelError:
        color = "\033[31m" // Red
    default:
        color = "\033[0m" // Reset
    }
    
    // Add color prefix
    colored := fmt.Sprintf("%s[%s]\033[0m", color, record.Level.String())
    
    // Format and write
    fmt.Fprintf(h.writer, "%s %s %s",
        record.Time.Format(time.RFC3339),
        colored,
        record.Message,
    )
    
    // Add attributes
    record.Attrs(func(attr slog.Attr) bool {
        fmt.Fprintf(h.writer, " %s=%v", attr.Key, attr.Value)
        return true
    })
    
    fmt.Fprint(h.writer, "\n")
    return nil
}

// Metrics handler that counts log messages
type MetricsHandler struct {
    handler slog.Handler
    counts  map[slog.Level]int64
    mu      sync.Mutex
}

func NewMetricsHandler(handler slog.Handler) *MetricsHandler {
    return &MetricsHandler{
        handler: handler,
        counts:  make(map[slog.Level]int64),
    }
}

func (h *MetricsHandler) Handle(ctx context.Context, record slog.Record) error {
    h.mu.Lock()
    h.counts[record.Level]++
    h.mu.Unlock()
    
    return h.handler.Handle(ctx, record)
}

func (h *MetricsHandler) GetCounts() map[slog.Level]int64 {
    h.mu.Lock()
    defer h.mu.Unlock()
    
    counts := make(map[slog.Level]int64)
    for level, count := range h.counts {
        counts[level] = count
    }
    return counts
}

func (h *MetricsHandler) Enabled(ctx context.Context, level slog.Level) bool {
    return h.handler.Enabled(ctx, level)
}

func (h *MetricsHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
    return &MetricsHandler{
        handler: h.handler.WithAttrs(attrs),
        counts:  h.counts,
    }
}

func (h *MetricsHandler) WithGroup(name string) slog.Handler {
    return &MetricsHandler{
        handler: h.handler.WithGroup(name),
        counts:  h.counts,
    }
}

func demonstrateCustomHandlers() {
    // Create multiple output destinations (error handling elided for brevity)
    jsonFile, _ := os.Create("app.json")
    defer jsonFile.Close()
    
    textFile, _ := os.Create("app.log")
    defer textFile.Close()
    
    // Create multiple handlers
    jsonHandler := slog.NewJSONHandler(jsonFile, &slog.HandlerOptions{Level: slog.LevelInfo})
    textHandler := slog.NewTextHandler(textFile, &slog.HandlerOptions{Level: slog.LevelDebug})
    coloredHandler := NewColoredTextHandler(os.Stdout, &slog.HandlerOptions{Level: slog.LevelInfo})
    
    // Combine handlers
    multiHandler := NewMultiHandler(jsonHandler, textHandler, coloredHandler)
    metricsHandler := NewMetricsHandler(multiHandler)
    
    logger := slog.New(metricsHandler)
    
    // Generate various log messages
    logger.Debug("Debug message - only in text file")
    logger.Info("Application started", "version", "1.0.0")
    logger.Warn("Memory usage high", "usage", "85%")
    logger.Error("Database connection failed", "error", "connection timeout")
    logger.Info("Request processed", "duration", 150*time.Millisecond)
    
    // Display metrics
    counts := metricsHandler.GetCounts()
    fmt.Println("\nLog message counts:")
    for level, count := range counts {
        fmt.Printf("%s: %d\n", level, count)
    }
}

func main() {
    demonstrateCustomHandlers()
}

Performance Optimization and Best Practices

package main

import (
    "context"
    "fmt"
    "log/slog"
    "os"
    "runtime"
    "sync"
    "time"
)

// Expensive computation that should be deferred
func expensiveOperation() string {
    time.Sleep(10 * time.Millisecond) // Simulate expensive operation
    return "expensive result"
}

// LogValuer implementation to defer expensive computation
type ExpensiveData struct {
    computed bool
    result   string
}

func (e *ExpensiveData) LogValue() slog.Value {
    if !e.computed {
        e.result = expensiveOperation()
        e.computed = true
    }
    return slog.StringValue(e.result)
}

func performanceBenchmark() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
        Level: slog.LevelInfo, // DEBUG messages won't be processed
    }))
    
    iterations := 100000
    
    // Benchmark 1: String formatting (inefficient). Arguments are evaluated
    // eagerly, so expensiveOperation runs even though the Debug record is
    // discarded; keep the iteration count small here.
    start := time.Now()
    for i := 0; i < 100; i++ {
        logger.Debug(fmt.Sprintf("Debug message %d with expensive: %s", i, expensiveOperation()))
    }
    fmt.Printf("String formatting (100 iterations): %v (expensive calls still executed)\n", time.Since(start))
    
    // Benchmark 2: Using LogValuer (efficient)
    start = time.Now()
    for i := 0; i < iterations; i++ {
        expensiveData := &ExpensiveData{}
        logger.Debug("Debug message with deferred expensive operation", "data", expensiveData)
    }
    fmt.Printf("LogValuer (deferred): %v (operations skipped due to log level)\n", time.Since(start))
    
    // Benchmark 3: Using LogAttrs (most efficient for structured data)
    start = time.Now()
    for i := 0; i < iterations; i++ {
        logger.LogAttrs(context.Background(), slog.LevelDebug, "Debug message",
            slog.Int("iteration", i),
            slog.String("type", "benchmark"),
        )
    }
    fmt.Printf("LogAttrs: %v (operations skipped due to log level)\n", time.Since(start))
    
    // Benchmark 4: Enabled logging for comparison
    start = time.Now()
    for i := 0; i < 1000; i++ { // Fewer iterations for actual logging
        logger.LogAttrs(context.Background(), slog.LevelInfo, "Info message",
            slog.Int("iteration", i),
            slog.String("type", "benchmark"),
        )
    }
    fmt.Printf("Actual logging (1000 iterations): %v\n", time.Since(start))
}

func demonstrateMemoryOptimization() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    
    // Reuse the common slog.Attr values; request-specific attributes are appended per iteration
    attrs := []slog.Attr{
        slog.String("service", "user-service"),
        slog.String("version", "1.0.0"),
    }
    
    for i := 0; i < 1000; i++ {
        // Append request-specific attributes
        requestAttrs := append(attrs,
            slog.Int("request_id", i),
            slog.Duration("processing_time", time.Duration(i)*time.Millisecond),
        )
        
        logger.LogAttrs(context.Background(), slog.LevelInfo, "Request processed", requestAttrs...)
    }
}

func concurrentLogging() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    
    var wg sync.WaitGroup
    numGoroutines := 10
    messagesPerGoroutine := 100
    
    start := time.Now()
    
    for i := 0; i < numGoroutines; i++ {
        wg.Add(1)
        go func(goroutineID int) {
            defer wg.Done()
            
            for j := 0; j < messagesPerGoroutine; j++ {
                logger.Info("Concurrent log message",
                    "goroutine", goroutineID,
                    "message", j,
                    "timestamp", time.Now(),
                )
            }
        }(i)
    }
    
    wg.Wait()
    duration := time.Since(start)
    
    totalMessages := numGoroutines * messagesPerGoroutine
    fmt.Printf("Concurrent logging: %d messages in %v (%.0f msg/sec)\n",
        totalMessages, duration, float64(totalMessages)/duration.Seconds())
}

func memoryUsageExample() {
    var m1, m2 runtime.MemStats
    
    runtime.GC()
    runtime.ReadMemStats(&m1)
    
    logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
    
    // Generate many log messages
    for i := 0; i < 10000; i++ {
        logger.Info("Memory usage test",
            "iteration", i,
            "data", fmt.Sprintf("test-data-%d", i),
            "timestamp", time.Now(),
        )
    }
    
    runtime.GC()
    runtime.ReadMemStats(&m2)
    
    fmt.Printf("Memory used: %d KB\n", (m2.Alloc-m1.Alloc)/1024)
    fmt.Printf("Total allocations: %d\n", m2.TotalAlloc-m1.TotalAlloc)
}

func bestPracticesExample() {
    logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
        Level: slog.LevelInfo,
    }))
    
    // Best Practice 1: Use structured logging consistently
    logger.Info("User logged in",
        "user_id", 12345,
        "session_id", "sess_abc123",
        "ip_address", "192.168.1.100",
    )
    
    // Best Practice 2: Use appropriate log levels
    logger.Debug("Detailed debugging information") // Won't be logged due to level
    logger.Info("General application flow")
    logger.Warn("Something unexpected but not critical")
    logger.Error("Error that needs attention")
    
    // Best Practice 3: Use context for request correlation (typed keys avoid collisions)
    type ctxKey string
    ctx := context.WithValue(context.Background(), ctxKey("request_id"), "req_12345")
    logger.InfoContext(ctx, "Processing request")
    
    // Best Practice 4: Group related attributes
    logger.Info("API response",
        slog.Group("request",
            slog.String("method", "GET"),
            slog.String("path", "/api/users"),
        ),
        slog.Group("response",
            slog.Int("status", 200),
            slog.Duration("duration", 150*time.Millisecond),
        ),
    )
    
    // Best Practice 5: Use LogAttrs for performance-critical code
    logger.LogAttrs(context.Background(), slog.LevelInfo, "High-frequency event",
        slog.Int64("counter", 123456789),
        slog.Float64("value", 98.6),
        slog.Bool("active", true),
    )
}

func main() {
    fmt.Println("=== Performance Benchmark ===")
    performanceBenchmark()
    
    fmt.Println("\n=== Memory Optimization ===")
    demonstrateMemoryOptimization()
    
    fmt.Println("\n=== Concurrent Logging ===")
    concurrentLogging()
    
    fmt.Println("\n=== Memory Usage ===")
    memoryUsageExample()
    
    fmt.Println("\n=== Best Practices ===")
    bestPracticesExample()
}