Mini Moka

Tags: Rust, Cache Library, LRU, Concurrency, In-Memory

GitHub Overview

moka-rs/mini-moka

A simple concurrent caching library that might fit to many use cases

Stars: 137
Watchers: 2
Forks: 9
Created: July 9, 2022
Language: Rust
License: Apache License 2.0

Topics

None

Star History

Star history chart for moka-rs/mini-moka (data as of October 22, 2025, 08:07 AM)

Overview

Mini Moka is a fast, concurrent in-memory cache library for Rust applications.

Details

Mini Moka is an in-memory cache library for Rust, designed as a lightweight edition of the popular Moka cache library. Inspired by the architecture of Java's renowned Caffeine library, it provides cache implementations built on top of hash maps: a thread-safe concurrent cache and a non-thread-safe cache for single-threaded applications.

The concurrent cache is implemented on top of DashMap and provides full concurrency for retrieval operations and high expected concurrency for updates. By combining cache admission control via an LFU (Least Frequently Used) filter with eviction control via an LRU (Least Recently Used) policy, it maintains near-optimal hit ratios.

Cache policies are flexible: capacity limits can be based on entry count or weighted size, and expiration can be managed through TTL (Time To Live) and TTI (Time To Idle), all configured through CacheBuilder. The implementation leverages Rust's safety guarantees while delivering high performance for the concurrent access patterns common in modern applications.

Pros and Cons

Pros

  • High Performance: Fast cache implementation optimized for concurrent access
  • Thread-Safe: Safe sharing across multiple threads
  • Lightweight: A trimmed-down edition of Moka that keeps the core feature set and performance
  • Flexible Configuration: Detailed customization via CacheBuilder
  • Rust-Optimized: Safe and efficient implementation leveraging Rust language features
  • Rich Eviction Policies: Optimized combination of LRU and LFU algorithms
  • Expiration Management: Support for both TTL and TTI policies

Cons

  • Memory Limitations: In-memory only, so the cached working set must fit in RAM
  • No Persistence: Data is lost when process terminates
  • Rust-Only: Cannot be used in other programming languages
  • Learning Curve: Requires understanding both Rust ownership system and cache concepts
  • Configuration Complexity: Advanced features may lead to complex configurations

Code Examples
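
To run the examples below, add mini-moka to your Cargo.toml. The version shown is illustrative; check crates.io for the current release:

```toml
[dependencies]
mini-moka = "0.10"
```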

Basic Cache Usage

use mini_moka::sync::Cache;

fn main() {
    // Create a cache that can store up to 10,000 entries
    let cache = Cache::new(10_000);

    // Insert values
    cache.insert("key1", "value1");
    cache.insert("key2", "value2");

    // Retrieve values
    if let Some(value) = cache.get("key1") {
        println!("Found: {}", value);
    }

    // Check existence
    if cache.contains_key("key2") {
        println!("Key exists");
    }

    // Remove a key (mini-moka calls this `invalidate`)
    cache.invalidate("key1");

    // Check cache size
    println!("Cache size: {}", cache.entry_count());
}

Concurrent Access Example

use mini_moka::sync::Cache;
use std::thread;

fn main() {
    const NUM_THREADS: usize = 16;
    const NUM_KEYS_PER_THREAD: usize = 64;

    // Create shared cache
    let cache = Cache::new(10_000);

    // Concurrent access from multiple threads
    let handles: Vec<_> = (0..NUM_THREADS)
        .map(|thread_id| {
            // Clone cache (lightweight operation)
            let cache = cache.clone();
            
            thread::spawn(move || {
                // Read and write keys in each thread
                for key_id in 0..NUM_KEYS_PER_THREAD {
                    let key = format!("key-{}-{}", thread_id, key_id);
                    let value = format!("value-{}-{}", thread_id, key_id);
                    
                    // Write
                    cache.insert(key.clone(), value.clone());
                    
                    // Read
                    if let Some(cached_value) = cache.get(&key) {
                        assert_eq!(cached_value, value);
                    }
                }
            })
        })
        .collect();

    // Wait for all threads to complete
    for handle in handles {
        handle.join().unwrap();
    }

    println!("Final cache size: {}", cache.entry_count());
}

Detailed Configuration with CacheBuilder

use mini_moka::sync::{Cache, CacheBuilder};
use std::time::Duration;

fn main() {
    // Detailed configuration with CacheBuilder
    let cache: Cache<String, String> = CacheBuilder::new(1_000)
        .time_to_live(Duration::from_secs(300))    // 5 minutes TTL
        .time_to_idle(Duration::from_secs(60))     // 1 minute TTI
        .initial_capacity(100)                      // Initial capacity
        .build();

    // Insert data
    cache.insert("user:123".to_string(), "Alice".to_string());
    cache.insert("user:456".to_string(), "Bob".to_string());

    // Retrieve before TTL/TTI expiration
    if let Some(user) = cache.get("user:123") {
        println!("User: {}", user);
    }

    // Check resident entry count
    // (mini-moka does not expose hit/miss statistics)
    println!("Entry count: {}", cache.entry_count());
}

Weighted Cache

use mini_moka::sync::{Cache, CacheBuilder};

#[derive(Clone)]
struct DataItem {
    data: Vec<u8>,
    metadata: String,
}

impl DataItem {
    fn size(&self) -> u32 {
        (self.data.len() + self.metadata.len()) as u32
    }
}

fn main() {
    // Weighted cache: with a weigher set, the builder capacity (1000)
    // caps the total weight rather than the entry count
    let cache: Cache<String, DataItem> = CacheBuilder::new(1000)
        .weigher(|_key, value| value.size()) // Custom weight calculation
        .build();

    // Store large data items
    let large_item = DataItem {
        data: vec![0u8; 1024], // 1KB
        metadata: "Large data item".to_string(),
    };

    let small_item = DataItem {
        data: vec![0u8; 100], // 100B
        metadata: "Small data item".to_string(),
    };

    cache.insert("large".to_string(), large_item);
    cache.insert("small".to_string(), small_item);

    // Check weighted size
    println!("Weighted size: {}", cache.weighted_size());
}

Async Version (full Moka crate, future feature)

// Note: mini-moka does not include an async cache. The async API shown
// here is provided by the full `moka` crate with its `future` feature:
//   moka = { version = "0.12", features = ["future"] }
use moka::future::Cache;
use std::time::Duration;

#[tokio::main]
async fn main() {
    // Create an async cache with a short TTL so the example finishes quickly
    let cache = Cache::builder()
        .time_to_live(Duration::from_secs(2))
        .max_capacity(1000)
        .build();

    // Async data insertion and retrieval
    cache.insert("async_key", "async_value").await;

    if let Some(value) = cache.get("async_key").await {
        println!("Async value: {}", value);
    }

    // Wait until the TTL has passed
    tokio::time::sleep(Duration::from_secs(3)).await;

    // Run pending maintenance tasks (evicts expired entries)
    cache.run_pending_tasks().await;

    // Value should not be retrievable after expiration
    assert!(cache.get("async_key").await.is_none());
}

Error Handling and Fallback

use mini_moka::sync::Cache;
use std::collections::HashMap;

struct DataService {
    cache: Cache<String, String>,
    fallback_storage: HashMap<String, String>,
}

impl DataService {
    fn new() -> Self {
        Self {
            cache: Cache::new(1000),
            fallback_storage: HashMap::new(),
        }
    }

    fn get_data(&mut self, key: &str) -> Option<String> {
        // Try cache first
        if let Some(cached_value) = self.cache.get(key) {
            println!("Cache hit for key: {}", key);
            return Some(cached_value);
        }

        // On cache miss, try fallback storage
        if let Some(fallback_value) = self.fallback_storage.get(key) {
            println!("Fallback hit for key: {}", key);
            // Store fallback value in cache
            self.cache.insert(key.to_string(), fallback_value.clone());
            return Some(fallback_value.clone());
        }

        println!("Data not found for key: {}", key);
        None
    }

    fn set_data(&mut self, key: String, value: String) {
        // Store in both cache and fallback storage
        self.cache.insert(key.clone(), value.clone());
        self.fallback_storage.insert(key, value);
    }

    fn invalidate(&self, key: &str) {
        self.cache.invalidate(key);
    }

    fn cache_stats(&self) {
        // mini-moka does not expose hit/miss statistics; report the entry count
        println!("Cache entries: {}", self.cache.entry_count());
    }
}

fn main() {
    let mut service = DataService::new();

    // Set data
    service.set_data("user:1".to_string(), "Alice".to_string());
    service.set_data("user:2".to_string(), "Bob".to_string());

    // Get data (cache hit)
    service.get_data("user:1");

    // Invalidate cache
    service.invalidate("user:1");

    // Get after invalidation (fallback hit)
    service.get_data("user:1");

    // Display statistics
    service.cache_stats();
}

Custom Expiration Policy

use mini_moka::sync::{Cache, CacheBuilder};
use std::time::{Duration, SystemTime};

#[derive(Clone)]
struct TimestampedValue {
    value: String,
    created_at: SystemTime,
}

impl TimestampedValue {
    fn new(value: String) -> Self {
        Self {
            value,
            created_at: SystemTime::now(),
        }
    }

    fn is_expired(&self, max_age: Duration) -> bool {
        self.created_at.elapsed().unwrap_or(Duration::MAX) > max_age
    }
}

fn main() {
    let cache: Cache<String, TimestampedValue> = CacheBuilder::new(1000)
        .time_to_live(Duration::from_secs(60)) // 1 minute TTL
        .build();

    // Insert timestamped data
    cache.insert("timestamped_key".to_string(), 
                 TimestampedValue::new("timestamped_value".to_string()));

    // Custom expiration check
    let max_age = Duration::from_secs(30);
    
    if let Some(timestamped_value) = cache.get("timestamped_key") {
        if timestamped_value.is_expired(max_age) {
            println!("Value is expired according to custom policy");
            cache.invalidate("timestamped_key");
        } else {
            println!("Value is still valid: {}", timestamped_value.value);
        }
    }
}

Performance Benchmarking

use mini_moka::sync::Cache;
use std::time::Instant;
use std::thread;

fn benchmark_cache_performance() {
    let cache = Cache::new(100_000);
    
    // Benchmark write operations
    let start = Instant::now();
    for i in 0..10_000 {
        cache.insert(format!("key_{}", i), format!("value_{}", i));
    }
    let write_duration = start.elapsed();
    
    // Benchmark read operations
    let start = Instant::now();
    for i in 0..10_000 {
        cache.get(&format!("key_{}", i));
    }
    let read_duration = start.elapsed();
    
    println!("Write 10,000 entries: {:?}", write_duration);
    println!("Read 10,000 entries: {:?}", read_duration);
    
    // Concurrent benchmark
    let start = Instant::now();
    let handles: Vec<_> = (0..4)
        .map(|thread_id| {
            let cache = cache.clone();
            thread::spawn(move || {
                for i in 0..2_500 {
                    let key = format!("concurrent_key_{}_{}", thread_id, i);
                    cache.insert(key.clone(), format!("value_{}", i));
                    cache.get(&key);
                }
            })
        })
        .collect();
    
    for handle in handles {
        handle.join().unwrap();
    }
    let concurrent_duration = start.elapsed();
    
    println!("Concurrent operations (4 threads): {:?}", concurrent_duration);
}

fn main() {
    benchmark_cache_performance();
}