FreeCache

Tags: cache library, Go, Golang, in-memory cache, high performance, zero GC

GitHub Overview

coocood/freecache

A cache library for Go with zero GC overhead.

Stars: 5,309
Watchers: 112
Forks: 401
Created: April 29, 2015
Language: Go
License: MIT License

Topics

None

Star History

[Star history chart for coocood/freecache; data as of 10/22/2025]


Overview

FreeCache is an in-memory cache library for Go designed for zero GC overhead. It addresses the expensive GC pauses caused by large numbers of long-lived objects, allowing millions of entries to be cached in memory without increased latency or degraded throughput.

Details

FreeCache avoids GC overhead by minimizing the number of pointers: no matter how many entries are stored, there are only 512 pointers. Data is sharded into 256 segments by key hash, and each segment holds just two pointers: one for the ring buffer that stores keys and values, and one for the index slice used for lookups.
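The two-pointers-per-segment idea can be sketched with a flat byte buffer plus an offset index. This is a simplified illustration of the concept, not FreeCache's actual segment layout (which is considerably more involved); the single-byte length prefixes are an assumption made to keep the sketch short.

```go
package main

import "fmt"

// ringSegment sketches why a segment needs only two pointers:
// all keys and values live inside one flat byte slice, and a
// slice of offsets serves as the lookup index. Neither structure
// adds GC-visible pointers as entries are appended.
type ringSegment struct {
	buf   []byte // pointer #1: flat storage for all keys and values
	index []int  // pointer #2: byte offset of each stored entry
}

// add appends a length-prefixed key/value pair and records its offset.
func (s *ringSegment) add(key, value []byte) {
	s.index = append(s.index, len(s.buf))
	s.buf = append(s.buf, byte(len(key)))
	s.buf = append(s.buf, key...)
	s.buf = append(s.buf, byte(len(value)))
	s.buf = append(s.buf, value...)
}

// get decodes the i-th entry back out of the flat buffer.
func (s *ringSegment) get(i int) (key, value []byte) {
	off := s.index[i]
	klen := int(s.buf[off])
	key = s.buf[off+1 : off+1+klen]
	voff := off + 1 + klen
	vlen := int(s.buf[voff])
	value = s.buf[voff+1 : voff+1+vlen]
	return key, value
}

func main() {
	seg := &ringSegment{}
	seg.add([]byte("user:123"), []byte("alice"))
	seg.add([]byte("user:456"), []byte("bob"))

	k, v := seg.get(1)
	fmt.Printf("%s = %s\n", k, v) // prints "user:456 = bob"
}
```

However many entries are appended, the garbage collector still sees only the two slice pointers, which is the property FreeCache scales up to 256 segments.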

Each segment has its own lock, so concurrent access scales well. In multi-threaded workloads, FreeCache should be many times faster than a built-in map protected by a single lock. In absolute terms, Set operations are about 2x faster than the built-in map, while Get operations run at roughly half the speed of the built-in map.
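The per-segment locking scheme can be sketched with a plain Go map behind each of 256 mutexes. This is an illustrative stdlib-only sketch of the locking idea, not FreeCache's implementation; the type and method names are invented for the example.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// segment pairs a mutex with a plain map; locking is per segment,
// so goroutines working on different segments never contend.
type segment struct {
	mu sync.Mutex
	m  map[string][]byte
}

// shardedMap holds 256 independently locked segments, mirroring
// FreeCache's segment count.
type shardedMap struct {
	segs [256]segment
}

func newShardedMap() *shardedMap {
	s := &shardedMap{}
	for i := range s.segs {
		s.segs[i].m = make(map[string][]byte)
	}
	return s
}

// seg picks the segment for a key by hashing it.
func (s *shardedMap) seg(key string) *segment {
	h := fnv.New32a()
	h.Write([]byte(key))
	return &s.segs[h.Sum32()%256]
}

func (s *shardedMap) Set(key string, val []byte) {
	sg := s.seg(key)
	sg.mu.Lock()
	defer sg.mu.Unlock()
	sg.m[key] = val
}

func (s *shardedMap) Get(key string) ([]byte, bool) {
	sg := s.seg(key)
	sg.mu.Lock()
	defer sg.mu.Unlock()
	v, ok := sg.m[key]
	return v, ok
}

func main() {
	sm := newShardedMap()
	var wg sync.WaitGroup
	for g := 0; g < 8; g++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for i := 0; i < 1000; i++ {
				sm.Set(fmt.Sprintf("g%d-k%d", id, i), []byte("v"))
			}
		}(g)
	}
	wg.Wait()

	// Count entries across all segments
	total := 0
	for i := range sm.segs {
		total += len(sm.segs[i].m)
	}
	fmt.Println("entries stored:", total) // prints "entries stored: 8000"
}
```

Because writers holding different segment locks proceed in parallel, contention drops by roughly the number of segments compared with a single global mutex.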

Pros and Cons

Pros

  • Zero GC overhead design for high performance
  • High concurrency support (segment-based locking mechanism)
  • Set operations about 2x faster than built-in map
  • Suitable for large-scale caches up to 5 million entries
  • Built-in expiration support (TTL functionality)
  • Runtime cache size resizing support
  • File dump/load functionality
  • Ideal for latency-sensitive Go applications

Cons

  • Get operations run at roughly half the speed of a built-in map
  • Memory is preallocated, requires debug.SetGCPercent() adjustment for large allocations
  • Minimum cache size is 512KB
  • Uses byte slices as keys, less convenient than string keys
  • Entries may be evicted before their TTL expires once the ring buffer fills (pseudo-LRU)


Code Examples

Basic Setup

package main

import (
    "fmt"
    "runtime/debug"
    "github.com/coocood/freecache"
)

func main() {
    // Create a 100MB cache
    cacheSize := 100 * 1024 * 1024
    cache := freecache.NewCache(cacheSize)

    // Lower the GC target to limit memory consumption and pause time
    // when a large amount of memory is preallocated
    debug.SetGCPercent(20)

    fmt.Printf("cache ready, entries: %d\n", cache.EntryCount())
}

Basic Set/Get Operations

// Set key and value
key := []byte("user:123")
value := []byte(`{"name":"John","age":30}`)
expire := 60 // Expire in 60 seconds

err := cache.Set(key, value, expire)
if err != nil {
    fmt.Printf("Set error: %v\n", err)
}

// Get value
result, err := cache.Get(key)
if err != nil {
    fmt.Printf("Get error: %v\n", err)
} else {
    fmt.Printf("Retrieved: %s\n", string(result))
}

TTL (Time To Live) Operations

// Set entries with different expiration times
cache.Set([]byte("short-lived"), []byte("data1"), 10)  // 10 seconds
cache.Set([]byte("long-lived"), []byte("data2"), 3600) // 1 hour
cache.Set([]byte("permanent"), []byte("data3"), 0)     // 0 = never expires (may still be evicted)

// Get TTL
ttl, err := cache.TTL([]byte("short-lived"))
if err == nil {
    fmt.Printf("Remaining TTL: %d seconds\n", ttl)
}

// Delete entry (Del reports whether the key was present)
affected := cache.Del([]byte("short-lived"))
fmt.Printf("Deleted: %t\n", affected)

Cache Statistics Checking

// Check entry count
entryCount := cache.EntryCount()
fmt.Printf("Current entries: %d\n", entryCount)

// Eviction and expiration counters
fmt.Printf("Evacuated: %d, Expired: %d\n", cache.EvacuateCount(), cache.ExpiredCount())

// Calculate hit rate
hitCount := cache.HitCount()
missCount := cache.MissCount()
totalRequests := hitCount + missCount
if totalRequests > 0 {
    hitRate := float64(hitCount) / float64(totalRequests) * 100
    fmt.Printf("Hit rate: %.2f%%\n", hitRate)
}

Advanced Operations

// Check if a key exists (Get returns an error when the key is missing)
if _, err := cache.Get([]byte("key")); err == nil {
    fmt.Println("key exists")
}

// Complete cache clear
cache.Clear()

// Update an entry only if it already exists
key := []byte("update-key")
if _, err := cache.Get(key); err == nil {
    cache.Set(key, []byte("new-value"), 300)
}

// Batch operations example
keys := []string{"batch1", "batch2", "batch3"}
for i, k := range keys {
    key := []byte(k)
    value := []byte(fmt.Sprintf("value-%d", i))
    cache.Set(key, value, 600)
}

Error Handling

key := []byte("test-key")
value := []byte("test-value")

// Set operation error handling
err := cache.Set(key, value, 60)
switch err {
case nil:
    fmt.Println("Set successful")
case freecache.ErrLargeKey:
    fmt.Println("Key too large")
case freecache.ErrLargeEntry:
    fmt.Println("Entry too large")
default:
    fmt.Printf("Unexpected error: %v\n", err)
}

// Get operation error handling
result, err := cache.Get(key)
switch err {
case nil:
    fmt.Printf("Found: %s\n", string(result))
case freecache.ErrNotFound:
    fmt.Println("Key not found")
default:
    fmt.Printf("Get error: %v\n", err)
}

Configuration Optimization

// Create caches with different sizes
smallCache := freecache.NewCache(1024 * 1024)      // 1MB
mediumCache := freecache.NewCache(50 * 1024 * 1024) // 50MB
largeCache := freecache.NewCache(500 * 1024 * 1024) // 500MB

// Adjust the GC target based on cache size: lower values trigger GC
// after less heap growth, limiting memory overhead and pause times
// for large preallocated caches
if cacheSize >= 100*1024*1024 { // 100MB or more
    debug.SetGCPercent(10) // tighter GC target for large caches
} else {
    debug.SetGCPercent(20) // moderate GC target
}