BigCache

Go · Library · Cache · High Throughput · Low GC Pressure · Memory Efficient

GitHub Overview

allegro/bigcache

Efficient cache for gigabytes of data written in Go.

Stars: 7,982
Watchers: 109
Forks: 604
Created: March 23, 2016
Language: Go
License: Apache License 2.0

Topics

cache, caching, golang-library, hacktoberfest, performance

Star History

Star history chart for allegro/bigcache (data as of October 22, 2025).

Library

Overview

BigCache is a high-throughput Go cache library supporting millions of entries. It's designed to minimize garbage collection (GC) pressure and achieve efficient memory usage for large datasets.

Details

BigCache is an in-memory cache that can hold gigabytes of data with high performance while keeping the Go garbage collector out of the hot path. It relies on an optimization introduced in Go 1.5 (issue-9477): a map whose keys and values contain no pointers, such as map[uint64]uint32, is not scanned by the GC. BigCache therefore serializes entries into large byte slices and keeps only integer offsets in the map, so the GC sees little more than a single pointer per shard, and millions of entries can be stored without hurting GC performance. Concurrency comes from sharding: each shard has an independent mutex, so operations on different shards run in parallel. On top of this core design, BigCache supports dynamic sizing, configurable eviction, removal callbacks, and statistics, which makes managing large-capacity datasets practical.
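
The layout can be illustrated with a simplified shard. The following is only an illustrative sketch of the idea, not BigCache's actual shard implementation; it omits eviction, collision handling, and the FIFO byte queue the real library uses. The point is that the index map carries only integers and all entry bytes live in one slice, so the GC has almost nothing to scan.

package main

import (
    "encoding/binary"
    "fmt"
    "hash/fnv"
    "sync"
)

// shard is a toy version of the idea: a pointer-free index plus one big byte slice.
type shard struct {
    mu    sync.RWMutex
    index map[uint64]uint32 // hashed key -> offset into data; no pointers, so GC skips it
    data  []byte            // all entries packed together; GC sees a single pointer
}

func hashKey(key string) uint64 {
    h := fnv.New64a()
    h.Write([]byte(key))
    return h.Sum64()
}

func (s *shard) set(key string, value []byte) {
    s.mu.Lock()
    defer s.mu.Unlock()
    offset := uint32(len(s.data))
    // Store a 4-byte length header followed by the value bytes.
    var header [4]byte
    binary.LittleEndian.PutUint32(header[:], uint32(len(value)))
    s.data = append(s.data, header[:]...)
    s.data = append(s.data, value...)
    s.index[hashKey(key)] = offset
}

func (s *shard) get(key string) ([]byte, bool) {
    s.mu.RLock()
    defer s.mu.RUnlock()
    offset, ok := s.index[hashKey(key)]
    if !ok {
        return nil, false
    }
    length := binary.LittleEndian.Uint32(s.data[offset : offset+4])
    start := offset + 4
    return s.data[start : start+length], true
}

func main() {
    s := &shard{index: make(map[uint64]uint32)}
    s.set("greeting", []byte("hello"))
    if v, ok := s.get("greeting"); ok {
        fmt.Println(string(v)) // "hello"
    }
}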

Advantages and Disadvantages

Advantages

  • Low GC Pressure: GC pauses stay under 1.5 ms even with 20 million entries
  • High Throughput: access rates of roughly 3 million items per second
  • Memory Efficient: only 5-7% memory overhead
  • High Concurrency: Parallel access optimization through sharding
  • Dynamic Sizing: No need to determine cache size in advance
  • Configurable: Eviction policies and callback functionality
  • Statistics: Hit/miss rates and collision monitoring

Disadvantages

  • No Collision Handling: New items overwrite existing values during hash collisions
  • Go Only: Cannot be used from languages other than Go
  • Complexity: Excessive features for simple use cases
  • Memory Constraints: Requires all data to be held in memory
  • No Persistence: Data loss on application restart

Key Links

  • GitHub Repository: https://github.com/allegro/bigcache

Code Examples

Basic Usage

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/allegro/bigcache"
)

func main() {
    // Default configuration with 10-minute cache
    cache, _ := bigcache.New(context.Background(), bigcache.DefaultConfig(10*time.Minute))
    defer cache.Close()

    // Set data
    cache.Set("my-unique-key", []byte("Hello, BigCache!"))

    // Get data
    entry, _ := cache.Get("my-unique-key")
    fmt.Println(string(entry)) // "Hello, BigCache!"
}

Custom Configuration

package main

import (
    "context"
    "log"
    "time"

    "github.com/allegro/bigcache"
)

func main() {
    config := bigcache.Config{
        Shards:             1024,        // Number of shards (power of 2)
        LifeWindow:         10 * time.Minute, // Item lifetime
        CleanWindow:        5 * time.Minute,  // Cleanup interval
        MaxEntriesInWindow: 1000 * 10 * 60,   // Max entries in window
        MaxEntrySize:       500,         // Max entry size (bytes)
        HardMaxCacheSize:   8192,        // Max cache size (MB)
        StatsEnabled:       true,        // Enable statistics
        Verbose:            true,        // Enable verbose logging
    }

    cache, err := bigcache.New(context.Background(), config)
    if err != nil {
        log.Fatal(err)
    }
    defer cache.Close()

    // Usage example
    cache.Set("user:1", []byte(`{"name":"Alice","age":30}`))
    cache.Set("user:2", []byte(`{"name":"Bob","age":25}`))
}
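
Storing Structured Values

BigCache stores values only as byte slices, so structured data such as the user records above must be serialized before Set and deserialized after Get. The following is a minimal sketch using encoding/json; the User type here is illustrative and not part of BigCache.

package main

import (
    "context"
    "encoding/json"
    "fmt"
    "log"
    "time"

    "github.com/allegro/bigcache/v3"
)

// User is an illustrative type; BigCache itself only handles []byte values.
type User struct {
    Name string `json:"name"`
    Age  int    `json:"age"`
}

func main() {
    cache, err := bigcache.New(context.Background(), bigcache.DefaultConfig(10*time.Minute))
    if err != nil {
        log.Fatal(err)
    }
    defer cache.Close()

    // Serialize the struct before storing it.
    data, err := json.Marshal(User{Name: "Alice", Age: 30})
    if err != nil {
        log.Fatal(err)
    }
    cache.Set("user:1", data)

    // Deserialize after retrieval.
    raw, err := cache.Get("user:1")
    if err != nil {
        log.Fatal(err)
    }
    var u User
    if err := json.Unmarshal(raw, &u); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%+v\n", u) // {Name:Alice Age:30}
}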

Cache with Callbacks

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/allegro/bigcache"
)

func main() {
    config := bigcache.Config{
        Shards:      1024,
        LifeWindow:  1 * time.Minute,
        CleanWindow: 10 * time.Second, // background cleanup so expired entries are actually evicted
        OnRemove: func(key string, entry []byte) {
            fmt.Printf("Key %s removed: %s\n", key, string(entry))
        },
        // When OnRemoveWithReason is set, it is used instead of OnRemove.
        OnRemoveWithReason: func(key string, entry []byte, reason bigcache.RemoveReason) {
            fmt.Printf("Key %s removed (reason: %v): %s\n", key, reason, string(entry))
        },
    }

    cache, _ := bigcache.New(context.Background(), config)
    defer cache.Close()

    cache.Set("temp-key", []byte("temporary data"))
    
    // The removal callback fires once the entry expires and the background cleanup runs
    time.Sleep(70 * time.Second)
}

Statistics Retrieval

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/allegro/bigcache"
)

func main() {
    config := bigcache.DefaultConfig(10 * time.Minute)
    config.StatsEnabled = true

    cache, _ := bigcache.New(context.Background(), config)
    defer cache.Close()

    // Perform some operations
    cache.Set("key1", []byte("value1"))
    cache.Set("key2", []byte("value2"))
    cache.Get("key1") // Hit
    cache.Get("key3") // Miss

    // Get statistics
    stats := cache.Stats()
    fmt.Printf("Hits: %d\n", stats.Hits)
    fmt.Printf("Misses: %d\n", stats.Misses)
    fmt.Printf("Collisions: %d\n", stats.Collisions)
    fmt.Printf("DelHits: %d\n", stats.DelHits)
    fmt.Printf("DelMisses: %d\n", stats.DelMisses)
}
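
Cache Size

Besides the Stats counters, the cache also reports its overall size: Len() returns the number of stored entries and Capacity() the number of bytes allocated for entry data. A short sketch:

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/allegro/bigcache/v3"
)

func main() {
    cache, _ := bigcache.New(context.Background(), bigcache.DefaultConfig(10*time.Minute))
    defer cache.Close()

    cache.Set("key1", []byte("value1"))
    cache.Set("key2", []byte("value2"))

    // Number of entries currently stored.
    fmt.Printf("Len: %d\n", cache.Len())
    // Bytes allocated for entry storage across all shards.
    fmt.Printf("Capacity: %d\n", cache.Capacity())
}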

Entry Processing with Iterator

package main

import (
    "context"
    "fmt"
    "time"

    "github.com/allegro/bigcache"
)

func main() {
    cache, _ := bigcache.New(context.Background(), bigcache.DefaultConfig(10*time.Minute))
    defer cache.Close()

    // Add sample data
    for i := 0; i < 10; i++ {
        key := fmt.Sprintf("key:%d", i)
        value := fmt.Sprintf("value:%d", i)
        cache.Set(key, []byte(value))
    }

    // Process entries with iterator
    iterator := cache.Iterator()
    for iterator.SetNext() {
        current, err := iterator.Value()
        if err != nil {
            continue
        }
        
        fmt.Printf("Key: %s, Value: %s\n", 
            current.Key(), string(current.Value()))
    }
}

Error Handling and Reset

package main

import (
    "context"
    "fmt"

    "github.com/allegro/bigcache"
)

func main() {
    cache, _ := bigcache.New(context.Background(), bigcache.DefaultConfig(10*time.Minute))
    defer cache.Close()

    // Set data
    cache.Set("test-key", []byte("test-value"))

    // Get data with error handling
    value, err := cache.Get("test-key")
    if err != nil {
        if err == bigcache.ErrEntryNotFound {
            fmt.Println("Entry not found")
        } else {
            fmt.Printf("Error: %v\n", err)
        }
    } else {
        fmt.Printf("Value: %s\n", string(value))
    }

    // Delete specific key
    err = cache.Delete("test-key")
    if err != nil {
        fmt.Printf("Delete error: %v\n", err)
    }

    // Reset entire cache
    err = cache.Reset()
    if err != nil {
        fmt.Printf("Reset error: %v\n", err)
    }
}