Moka

Cache Library · Rust · High Performance · Sync/Async · Memory Cache · Concurrency

GitHub Overview

moka-rs/moka

A high performance concurrent caching library for Rust

Stars: 2,160
Watchers: 10
Forks: 97
Created: April 5, 2020
Language: Rust
License: Apache License 2.0

Topics

cache, concurrent-data-structure

Star History

moka-rs/moka star history chart (data as of 10/22/2025, 08:07 AM)


Overview

Moka is a high-performance concurrent caching library for Rust that provides both synchronous (sync) and asynchronous (future) APIs for safe and efficient memory caching in multi-threaded environments. Inspired by Java's Caffeine cache design, it features approximate LFU (Least Frequently Used) eviction policies, size-aware eviction with custom weighers, and TTL (Time to Live) functionality.

Details

Moka's design follows modern Rust concurrency best practices. The sync API is optimized for traditional thread-based concurrency, while the future API integrates seamlessly with async runtimes such as Tokio and async-std.

Key technical features:

  • Approximate LFU eviction: Efficient key removal based on access frequency
  • Size-aware eviction: Capacity control with custom weigher functions
  • TTL/TTI capabilities: Time-based automatic expiration
  • Eviction listeners: Custom callbacks for key removal events
  • Thread-safe: Lightweight cloning via Arc and Clone traits

Technical advantages:

  • Lock-free concurrent hash table: avoids lock contention on the hot path for high concurrent throughput
  • Zero-cost abstractions: Maximizes Rust's performance benefits
  • Type safety: Compile-time safety guarantees

Pros and Cons

Pros

  • High performance: designed to match the performance characteristics of Java's Caffeine
  • Type safety: Safety through Rust's type system
  • Dual APIs: Complete support for both sync and async environments
  • Lightweight: Efficient memory usage with Arc
  • Rich features: TTL, eviction listeners, custom weighers
  • Proven design: Based on Java Caffeine's battle-tested patterns
  • Ecosystem integration: Seamless integration with Tokio/async-std

Cons

  • Rust-only: Not available for other languages
  • Learning curve: Requires understanding of Rust's ownership system
  • Configuration complexity: Advanced features require detailed setup
  • No distribution: Single-process caching only
  • Memory-only: No persistence capabilities

Usage Examples

Basic Synchronous Cache

use moka::sync::Cache;

fn main() {
    // Create cache that can store up to 10,000 entries
    let cache = Cache::new(10_000);
    
    // Insert a value
    cache.insert("key1", "value1");
    
    // Get a value
    if let Some(value) = cache.get(&"key1") {
        println!("Found: {}", value);
    }
    
    // Invalidate a key
    cache.invalidate(&"key1");
}

Asynchronous Cache

use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache = Cache::new(10_000);
    
    // Insert value asynchronously
    cache.insert("key1", "value1").await;
    
    // Get value asynchronously
    if let Some(value) = cache.get(&"key1").await {
        println!("Found: {}", value);
    }
    
    // Invalidate key asynchronously
    cache.invalidate(&"key1").await;
}

Multi-threaded Synchronous Cache

use moka::sync::Cache;
use std::thread;

fn main() {
    let cache = Cache::new(10_000);
    
    // Share cache across multiple threads
    let threads: Vec<_> = (0..16)
        .map(|i| {
            let my_cache = cache.clone(); // Lightweight clone
            thread::spawn(move || {
                let key = format!("key{}", i);
                let value = format!("value{}", i);
                
                my_cache.insert(key.clone(), value.clone());
                assert_eq!(my_cache.get(&key), Some(value));
            })
        })
        .collect();
    
    for handle in threads {
        handle.join().unwrap();
    }
}

Size-aware Eviction

use moka::sync::Cache;

fn main() {
    let cache = Cache::builder()
        // Calculate size based on string length
        .weigher(|_key, value: &String| -> u32 {
            value.len().try_into().unwrap_or(u32::MAX)
        })
        // Hold up to 32MiB of values
        .max_capacity(32 * 1024 * 1024)
        .build();
    
    cache.insert("small", "a".to_string());
    cache.insert("large", "a".repeat(1024 * 1024)); // 1MiB
}

TTL (Time to Live) Configuration

use moka::sync::Cache;
use std::time::Duration;

fn main() {
    let cache = Cache::builder()
        .max_capacity(10_000)
        .time_to_live(Duration::from_secs(30)) // Expire after 30 seconds
        .time_to_idle(Duration::from_secs(5))  // Expire after 5 seconds of inactivity
        .build();
    
    cache.insert("temporary", "data");
    
    // Removed 30 seconds after insertion, or 5 seconds after the last access, whichever comes first
}

Eviction Listener

use moka::sync::Cache;

fn main() {
    let cache = Cache::builder()
        .max_capacity(100)
        .eviction_listener(|key, _value, cause| {
            println!("Evicted key '{}', cause: {:?}", key, cause);
        })
        .build();
    
    // Eviction listener will be called when cache becomes full
    for i in 0..150 {
        cache.insert(format!("key{}", i), format!("value{}", i));
    }
}

Clone Optimization with Arc

use moka::sync::Cache;
use std::sync::Arc;

fn main() {
    let cache: Cache<String, Arc<Vec<u8>>> = Cache::new(1000);
    
    // Wrap large data with Arc
    let large_data = Arc::new(vec![0u8; 2 * 1024 * 1024]); // 2MB
    
    cache.insert("large_key".to_string(), Arc::clone(&large_data));
    
    // get() only performs cheap Arc::clone()
    if let Some(data) = cache.get(&"large_key".to_string()) {
        println!("Data size: {} bytes", data.len());
    }
}