Cached

Overview

Cached refers to the memoization functionality included in Python's standard library functools module, which caches function results to avoid redundant calculations.

Details

Cached refers to the lru_cache (and, in Python 3.9+, cache) decorators provided by Python's standard library functools module, which implement memoization by storing function call results in memory. lru_cache uses the LRU (Least Recently Used) algorithm to bound cache size, automatically evicting the least recently used entries. It is applied with ordinary decorator syntax and is particularly effective for recursive computations such as the Fibonacci sequence and for other expensive calculations.

Because argument values serve as cache keys, only hashable arguments are supported; mutable objects such as dictionaries and lists cannot be passed to a cached function directly. The cache_info() method reports hit/miss statistics, and cache_clear() empties the cache.

As part of the standard library it requires no additional installation, is lightweight and reliable, and is widely used in many Python applications. Python 3.9+ also provides the cache decorator, equivalent to lru_cache(maxsize=None), for unbounded caching.
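
Because arguments are used as cache keys, calling a cached function with an unhashable argument raises a TypeError. A minimal sketch of the usual workaround, converting the argument to a hashable equivalent before the cached call (find_total and _total_impl are hypothetical names used for illustration):

from functools import lru_cache

@lru_cache(maxsize=None)
def _total_impl(numbers: tuple) -> int:
    # The cached implementation only ever receives hashable tuples
    return sum(numbers)

def find_total(numbers: list) -> int:
    # Convert the unhashable list to a tuple before hitting the cache
    return _total_impl(tuple(numbers))

print(find_total([1, 2, 3]))  # 6; passing the list to a cached function directly would raise TypeError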

Advantages and Disadvantages

Advantages

  • Standard Library: Ready to use without additional installation
  • Simple API: Easy to apply with decorator syntax
  • High Performance: Fast operation with C implementation
  • Statistics Feature: Easy to check hit/miss rates
  • Memory Management: Automatic memory management with LRU algorithm
  • Reliability: Well-tested as part of Python standard library
  • Lightweight: Minimal overhead

Disadvantages

  • Hashable Arguments Only: Calls whose arguments include unhashable objects such as dictionaries and lists cannot be cached
  • Memory Cache Only: No persistence or inter-process sharing
  • Method Caching Issues: Applying lru_cache to an instance method keeps a reference to self, which can prevent garbage collection of instances
  • Single Process: No distributed cache or multiprocess support
  • No TTL: Time-based expiration is not supported (a common workaround is sketched after this list)
  • Limited Customization: Difficult to configure advanced cache policies
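
lru_cache has no built-in expiry, but a common workaround is to pass the current "time bucket" as an extra cached argument, so a fresh entry is created once per TTL window. A minimal sketch, assuming a 60-second window (_get_rates_impl and get_rates are hypothetical names):

import time
from functools import lru_cache

@lru_cache(maxsize=32)
def _get_rates_impl(time_bucket: int):
    # time_bucket is unused in the body; it exists only to key the cache
    print("Fetching fresh rates...")
    return {"USD": 1.0, "EUR": 0.9}

def get_rates(ttl_seconds: int = 60):
    # A new bucket value every ttl_seconds forces a recomputation;
    # stale buckets linger in memory until evicted by the LRU policy
    return _get_rates_impl(int(time.time() // ttl_seconds))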

Code Examples

Basic LRU Cache

from functools import lru_cache

@lru_cache(maxsize=128)
def fibonacci(n):
    """Cache Fibonacci sequence calculation"""
    if n <= 1:
        return n
    return fibonacci(n-1) + fibonacci(n-2)

# Usage example
print(fibonacci(100))  # Calculated first time, cached afterwards
print(fibonacci.cache_info())  # CacheInfo(hits=98, misses=101, maxsize=128, currsize=101)
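
The LRU eviction behavior is easy to observe with a tiny maxsize; once the cache is full, the least recently used entry is discarded. A minimal sketch, continuing with the lru_cache import above:

@lru_cache(maxsize=2)
def square(n):
    print(f"Computing {n}^2")
    return n * n

square(1); square(2)  # Two misses fill the cache
square(3)             # Evicts 1, the least recently used entry
square(1)             # Computing 1^2 again - it was evicted, so another miss
print(square.cache_info())  # CacheInfo(hits=0, misses=4, maxsize=2, currsize=2)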

Unlimited Cache (Python 3.9+)

import time
from functools import cache

@cache
def expensive_calculation(x, y):
    """Heavy computation process"""
    print(f"Calculating: {x} + {y}")
    time.sleep(1)  # Simulate heavy processing
    return x + y

# Usage example
print(expensive_calculation(1, 2))  # Calculating: 1 + 2
print(expensive_calculation(1, 2))  # Returns immediately from cache

Utilizing Cache Statistics

from functools import lru_cache

@lru_cache(maxsize=64)
def get_user_data(user_id):
    """Get user data (simulate database access)"""
    print(f"Fetching user {user_id} from database")
    return {"id": user_id, "name": f"User {user_id}"}

# Multiple accesses
for user_id in [1, 2, 1, 3, 2, 1]:
    user = get_user_data(user_id)
    print(f"Retrieved: {user}")

# Check cache statistics
info = get_user_data.cache_info()
print(f"Hit rate: {info.hits / (info.hits + info.misses):.2%}")
print(f"Cache size: {info.currsize}/{info.maxsize}")

Type-Hinted Cache

from functools import lru_cache
from typing import Dict, Any

@lru_cache(maxsize=256)
def fetch_config(environment: str) -> Dict[str, Any]:
    """Fetch environment configuration"""
    config_map = {
        "development": {"debug": True, "db_host": "localhost"},
        "production": {"debug": False, "db_host": "prod.example.com"},
        "testing": {"debug": True, "db_host": "test.example.com"}
    }
    return config_map.get(environment, {})

# Usage example
dev_config = fetch_config("development")
prod_config = fetch_config("production")
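
One caveat with this pattern: on every hit the cache returns the same dict object, so mutating a returned configuration silently changes what all later callers receive. A short demonstration:

config_a = fetch_config("development")
config_a["debug"] = False  # Mutates the cached object itself

config_b = fetch_config("development")
print(config_b["debug"])  # False - the change is visible to every caller

Returning a copy from the cached function (for example, dict(config)) sidesteps this at the cost of one copy per call.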

Custom Cache Implementation

from functools import wraps

def simple_cache(func):
    """Simple cache decorator implementation example"""
    cache = {}

    @wraps(func)  # Preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        # Build a key from the string form of the arguments
        # (simplistic; assumes arguments have a stable repr)
        key = str(args) + str(sorted(kwargs.items()))
        
        if key in cache:
            return cache[key]
        
        # Calculate result and store in cache
        result = func(*args, **kwargs)
        cache[key] = result
        return result
    
    # Add cache clear functionality
    wrapper.cache_clear = lambda: cache.clear()
    wrapper.cache_size = lambda: len(cache)
    
    return wrapper

@simple_cache
def multiply(a, b):
    print(f"Calculating: {a} * {b}")
    return a * b

# Usage example
print(multiply(3, 4))  # Calculating: 3 * 4
print(multiply(3, 4))  # Returns from cache
print(f"Cache size: {multiply.cache_size()}")

Conditional Cache

from functools import lru_cache
import time

def conditional_cache(func):
    """Switch cache usage based on conditions"""
    cached_func = lru_cache(maxsize=128)(func)
    
    def wrapper(*args, **kwargs):
        # Bypass the cache when a refresh is explicitly requested
        if kwargs.get('force_refresh', False):
            # Bypass cache and execute directly
            kwargs.pop('force_refresh', None)
            return func(*args, **kwargs)
        return cached_func(*args, **kwargs)
    
    wrapper.cache_info = cached_func.cache_info
    wrapper.cache_clear = cached_func.cache_clear
    return wrapper

@conditional_cache
def get_current_time():
    return time.time()

# Normally cached
print(get_current_time())
print(get_current_time())  # Same value

# Force refresh
print(get_current_time(force_refresh=True))  # New value (the cached entry itself is not updated)