Local LRU
Cache Library
Overview
Local LRU is an in-memory LRU (Least Recently Used) cache library for Node.js applications.
Details
Local LRU is an LRU (Least Recently Used) cache library designed as an in-memory caching solution for Node.js environments. The LRU algorithm manages cache storage by automatically evicting the least recently used items when the cache reaches capacity, making room for new data. The library is optimized for fast get operations and minimal eviction time, on the assumption that you are caching the results of costly operations that should be performed as infrequently as possible.
It ships complete TypeScript definitions for type safety, and offers TTL (Time To Live) support for automatic item expiration, size-based limits, and custom disposal handlers. Its memory-efficient implementation suits a range of Node.js environments, including web applications, API servers, and microservices. The library supports both ES6 modules and CommonJS, and can also be used in browser environments. Advanced features include stale-while-revalidate patterns, automatic fetching, and cache statistics tracking.
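To make the eviction behavior described above concrete, here is a minimal sketch of the LRU algorithm built on a plain `Map` (which preserves insertion order). This is an illustration of the general technique only, not the library's actual implementation, which is far more optimized.

```javascript
// Minimal LRU sketch using Map's insertion order.
// Illustration only - not lru-cache's internal implementation.
class MiniLRU {
  constructor(max) {
    this.max = max
    this.map = new Map()
  }
  get(key) {
    if (!this.map.has(key)) return undefined
    // Re-insert to mark the entry as most recently used
    const value = this.map.get(key)
    this.map.delete(key)
    this.map.set(key, value)
    return value
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key)
    this.map.set(key, value)
    if (this.map.size > this.max) {
      // Evict the least recently used entry (first in insertion order)
      const oldest = this.map.keys().next().value
      this.map.delete(oldest)
    }
  }
}

const lru = new MiniLRU(2)
lru.set('a', 1)
lru.set('b', 2)
lru.get('a')    // touch 'a', so 'b' is now least recently used
lru.set('c', 3) // capacity exceeded: evicts 'b'
console.log([...lru.map.keys()]) // [ 'a', 'c' ]
```

Note how reading `'a'` saved it from eviction: recency of access, not insertion time, decides what is removed.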
Pros and Cons
Pros
- High Performance: Fast access optimized for get operations
- Automatic Memory Management: Efficient memory usage via LRU algorithm
- TypeScript Support: Complete type definitions for enhanced development experience
- TTL Functionality: Automatic item expiration support
- Flexible Configuration: Configurable size, TTL, and custom disposal handlers
- Lightweight: Minimal dependencies and compact size
- Multi-Platform: Supports both Node.js and browser environments
Cons
- Memory Limitations: In-memory nature makes it unsuitable for large datasets
- No Persistence: Data is lost on process restart
- Single Process: Cannot share cache between multiple processes
- Set Operation Cost: Set operations are slightly heavier than get operations
- Feature Overhead: Performance impact when using TTL or size tracking features
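The TTL behavior and its overhead mentioned above can be sketched with a self-contained toy cache. This is a hypothetical illustration, not the library's internals; a clock function is injected so expiry can be demonstrated deterministically.

```javascript
// Sketch of per-entry TTL bookkeeping (illustration, not lru-cache's internals).
// The injectable clock makes expiry deterministic for demonstration.
class TTLCache {
  constructor(ttl, clock = Date.now) {
    this.ttl = ttl
    this.clock = clock
    this.map = new Map() // key -> { value, expiresAt }
  }
  set(key, value) {
    // Storing a timestamp per entry is part of the TTL overhead
    this.map.set(key, { value, expiresAt: this.clock() + this.ttl })
  }
  get(key) {
    const entry = this.map.get(key)
    if (!entry) return undefined
    if (this.clock() > entry.expiresAt) {
      // Expired: the extra timestamp check on every get is the
      // kind of cost the "Feature Overhead" con refers to
      this.map.delete(key)
      return undefined
    }
    return entry.value
  }
}

let now = 0
const cache = new TTLCache(1000, () => now)
cache.set('session', 'abc123')
console.log(cache.get('session')) // 'abc123'
now = 1500 // advance the fake clock past the 1000ms TTL
console.log(cache.get('session')) // undefined
```

Every `set` stores an expiry timestamp and every `get` compares against the clock, which is why enabling TTL costs slightly more than a plain LRU lookup.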
Key Links
- lru-cache npm package
- GitHub Repository
- API Documentation
- TypeScript Type Definitions
- npm Package Comparison
Code Examples
Basic LRU Cache
import { LRUCache } from 'lru-cache'

// Create basic cache
const cache = new LRUCache({
  max: 500, // Maximum 500 items
  ttl: 1000 * 60 * 5 // 5 minutes TTL
})

// Set values
cache.set('user:123', { name: 'Alice', age: 30 })
cache.set('user:456', { name: 'Bob', age: 25 })

// Get values
const user = cache.get('user:123')
console.log(user) // { name: 'Alice', age: 30 }

// Check existence
if (cache.has('user:123')) {
  console.log('User cache exists')
}

// Delete cache entry
cache.delete('user:456')

// Clear all
cache.clear()
Type-Safe Usage with TypeScript
import { LRUCache } from 'lru-cache'

interface User {
  id: number
  name: string
  email: string
  lastLogin: Date
}

interface CacheStats {
  hits: number
  misses: number
  sets: number
}

const stats: CacheStats = { hits: 0, misses: 0, sets: 0 }

// Type-safe cache
const userCache = new LRUCache<string, User>({
  max: 1000,
  ttl: 1000 * 60 * 30, // 30 minutes
  updateAgeOnGet: true, // Update age on access
  updateAgeOnHas: false
})

// Cache user data and record the write
function cacheUser(user: User): void {
  userCache.set(`user:${user.id}`, user)
  stats.sets++
}

// Retrieve cached user data, tracking hits and misses
function getCachedUser(userId: number): User | undefined {
  const cached = userCache.get(`user:${userId}`)
  cached ? stats.hits++ : stats.misses++
  return cached
}

// Usage example
const user: User = {
  id: 123,
  name: 'Alice Smith',
  email: '[email protected]',
  lastLogin: new Date()
}
cacheUser(user)
const cachedUser = getCachedUser(123)
Size Limits and Custom Disposal
const cache = new LRUCache({
  max: 100,
  maxSize: 5000, // Maximum total size, in the units sizeCalculation returns (characters here)
  sizeCalculation: (value, key) => {
    // Custom size calculation
    return JSON.stringify(value).length + key.length
  },
  dispose: (value, key, reason) => {
    // Callback on item disposal
    console.log(`Disposed: ${key}, reason: ${reason}`)
    if (value.cleanup) {
      value.cleanup()
    }
  }
})

// Cache with size consideration
cache.set('large-data', {
  data: new Array(1000).fill('sample'),
  cleanup: () => console.log('Resource cleanup')
})
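Conceptually, `maxSize` and `sizeCalculation` work together by keeping a running total of entry sizes and evicting in recency order until the total fits. The following is a hypothetical self-contained sketch of that interaction, not the library's code:

```javascript
// Sketch of size-bounded eviction (illustration, not lru-cache's internals):
// maintain a running total and evict least recently inserted entries
// until the total fits under maxSize.
class SizedLRU {
  constructor(maxSize, sizeOf) {
    this.maxSize = maxSize
    this.sizeOf = sizeOf // plays the role of sizeCalculation
    this.map = new Map() // key -> { value, size }
    this.total = 0
  }
  set(key, value) {
    const size = this.sizeOf(value, key)
    if (this.map.has(key)) {
      this.total -= this.map.get(key).size
      this.map.delete(key)
    }
    this.map.set(key, { value, size })
    this.total += size
    // Evict oldest entries until the total size fits
    while (this.total > this.maxSize && this.map.size > 1) {
      const [oldestKey, oldest] = this.map.entries().next().value
      this.map.delete(oldestKey)
      this.total -= oldest.size
    }
  }
}

const sized = new SizedLRU(10, (v) => v.length)
sized.set('a', 'xxxx') // total 4
sized.set('b', 'xxxx') // total 8
sized.set('c', 'xxxx') // total 12 > 10, so 'a' is evicted; total 8
console.log([...sized.map.keys()], sized.total) // [ 'b', 'c' ] 8
```

A single oversized value can therefore push out several small ones, which is why the `dispose` callback above may fire for entries you never explicitly deleted.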
Async Data Fetching with Cache
class APICache {
  constructor() {
    this.cache = new LRUCache({
      max: 200,
      ttl: 1000 * 60 * 10, // 10 minutes
      allowStale: false,
      updateAgeOnGet: true,
      updateAgeOnHas: true
    })
  }

  async getUserData(userId) {
    const cacheKey = `user:${userId}`

    // Try to get from cache
    let userData = this.cache.get(cacheKey)
    if (userData) {
      console.log('Cache hit')
      return userData
    }

    // Cache miss - fetch from API
    console.log('Cache miss - fetching from API')
    try {
      const response = await fetch(`/api/users/${userId}`)
      userData = await response.json()
      // Store result in cache
      this.cache.set(cacheKey, userData)
      return userData
    } catch (error) {
      console.error('API error:', error)
      throw error
    }
  }

  // Get cache statistics (pass a key to inspect its remaining TTL)
  getStats(key) {
    return {
      size: this.cache.size,
      calculatedSize: this.cache.calculatedSize,
      remainingTTL: key ? this.cache.getRemainingTTL(key) : undefined
    }
  }
}
// Usage example
const apiCache = new APICache()
async function example() {
try {
const user1 = await apiCache.getUserData(123) // API call
const user2 = await apiCache.getUserData(123) // From cache
console.log('Cache stats:', apiCache.getStats())
} catch (error) {
console.error('Error:', error)
}
}
Express.js Middleware Usage
import express from 'express'
import { LRUCache } from 'lru-cache'
const app = express()
// Response cache middleware
function createCacheMiddleware(options = {}) {
  const cache = new LRUCache({
    max: 500,
    ttl: 1000 * 60 * 5, // 5 minutes
    ...options
  })

  return (req, res, next) => {
    const key = req.originalUrl || req.url
    const cachedResponse = cache.get(key)

    if (cachedResponse) {
      console.log(`Cache hit: ${key}`)
      return res.json(cachedResponse)
    }

    // Intercept response
    const originalJson = res.json
    res.json = function(data) {
      // Only cache successful responses, not error payloads
      if (res.statusCode === 200) {
        cache.set(key, data)
        console.log(`Cached response: ${key}`)
      }
      return originalJson.call(this, data)
    }
    next()
  }
}

// Apply middleware
app.use('/api/users', createCacheMiddleware({ ttl: 1000 * 60 * 10 }))

app.get('/api/users/:id', (req, res) => {
  // Simulate heavy processing
  setTimeout(() => {
    res.json({
      id: req.params.id,
      name: `User ${req.params.id}`,
      timestamp: new Date().toISOString()
    })
  }, 1000)
})
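One gap in the middleware above: cached responses are never invalidated when the underlying data changes. A hedged sketch of one approach, assuming your cache exposes a `keys()` iterator and `delete()` (both a plain `Map` and `LRUCache` provide these two methods): after a successful write, drop every cached response whose key shares the affected URL prefix. The helper name `invalidateByPrefix` is hypothetical.

```javascript
// Remove cached responses whose key starts with a URL prefix.
// Works with any cache exposing keys() and delete() - demonstrated
// with a Map here; LRUCache offers the same two methods.
function invalidateByPrefix(cache, prefix) {
  const doomed = []
  for (const key of cache.keys()) {
    if (key.startsWith(prefix)) doomed.push(key)
  }
  // Delete after iterating to avoid mutating during iteration
  for (const key of doomed) cache.delete(key)
  return doomed.length // number of entries removed
}

const responses = new Map([
  ['/api/users/1', { name: 'Alice' }],
  ['/api/users/2', { name: 'Bob' }],
  ['/api/posts/9', { title: 'Hello' }]
])

// e.g. call after a successful POST/PUT to /api/users
console.log(invalidateByPrefix(responses, '/api/users')) // 2
console.log([...responses.keys()]) // [ '/api/posts/9' ]
```

Calling this from your write handlers keeps readers from seeing responses cached before the mutation, at the cost of a linear scan over the cache keys.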
Memory-Efficient Configuration
// Recommended production settings
const productionCache = new LRUCache({
  max: 10000, // Adjust based on application needs
  ttl: 1000 * 60 * 60, // 1 hour
  allowStale: true, // Allow stale data beyond TTL
  updateAgeOnGet: false, // Performance-focused
  updateAgeOnHas: false,
  fetchMethod: async (key, staleValue, { options, signal }) => {
    // Auto-fetch functionality
    if (staleValue && !options.forceRefresh) {
      return staleValue
    }
    // Fetch fresh data
    const freshData = await fetchDataFromAPI(key, { signal })
    return freshData
  }
})

// Get data with auto-fetch
async function getDataWithAutoFetch(key) {
  try {
    return await productionCache.fetch(key)
  } catch (error) {
    console.error('Data fetch error:', error)
    return null
  }
}
Advanced Stale-While-Revalidate Pattern
const cache = new LRUCache({
  max: 1000,
  ttl: 1000 * 60 * 5, // 5 minutes
  allowStale: true,
  // With allowStale: true, fetch() returns the stale value immediately
  // and runs this fetchMethod in the background; the cache is updated
  // when the returned promise resolves, so no manual background
  // re-dispatch is needed
  fetchMethod: async (key, staleValue, { signal }) => {
    return fetchDataFromAPI(key, { signal })
  }
})

// Usage with stale-while-revalidate
async function getDataSWR(key) {
  try {
    return await cache.fetch(key, {
      allowStale: true,
      updateAgeOnGet: true
    })
  } catch (error) {
    console.error('SWR fetch error:', error)
    return null
  }
}