Database

Memcached

Overview

Memcached is a high-performance, memory-based key-value cache system designed to reduce database load and improve web application response times. It features a simple and lightweight implementation that makes it easy to deploy and maintain.

In dynamic web applications, Memcached caches computationally expensive data such as database query results and API responses in memory, achieving significant performance improvements and reducing backend load.

Details

  • Development: Created by Brad Fitzpatrick in 2003
  • Architecture: Multi-threaded, event-driven server
  • Data Storage: Memory-only (no persistence)
  • Protocol: Simple text-based protocol
  • Distribution: No server-side distribution (client-side hash distribution)
  • Replication: None (focused on simple caching use cases)
  • Proxy Feature: Built-in proxy support since version 1.6
  • SASL Authentication: Optional client authentication supported
  • LRU: Automatic data eviction when memory is full
  • TTL: Automatic expiration functionality
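
Since distribution is handled entirely client-side, each client library hashes the key and picks a server from its own list; the server itself knows nothing about its peers. A minimal sketch of that idea (the hash function and server list here are illustrative, not the exact scheme any particular client uses):

```python
import zlib

def pick_server(key: str, servers: list) -> str:
    # Hash the key on the client and map it onto the server list.
    # Every client must use the same hash and the same server order,
    # or the same key will land on different servers.
    return servers[zlib.crc32(key.encode("utf-8")) % len(servers)]

servers = ["10.0.0.1:11211", "10.0.0.2:11211", "10.0.0.3:11211"]
target = pick_server("user:1001", servers)
```

Production clients typically use consistent hashing (e.g. ketama) instead of the naive modulo shown here, so that adding or removing a server remaps only a fraction of the keys rather than nearly all of them.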

Advantages and Disadvantages

Advantages

  • High Performance: Very fast read/write operations due to memory-based storage
  • Simplicity: Easy-to-understand mechanism and lightweight implementation
  • Stability: Long-term track record and stable operation
  • Scalability: Easy horizontal scaling
  • Multi-language Support: Rich ecosystem of client libraries
  • Low Latency: Sub-millisecond response times

Disadvantages

  • No Persistence: Data is lost on server restart
  • Simple Values Only: Values are opaque blobs; no server-side data structures
  • No Replication: High availability must be implemented externally
  • Memory Limitation: Cache capacity is bounded by available RAM
  • Limited Atomic Operations: Only incr/decr and CAS; no multi-key transactions
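
The limited atomic operations point deserves illustration. Memcached offers check-and-set: gets returns a value plus a CAS token, and cas writes only if that token is still current. A toy in-memory model of those semantics (not a real client; with python-memcached the corresponding calls are mc.gets()/mc.cas(), and the client must be created with cache_cas=True):

```python
import itertools

class TinyCache:
    """Toy in-memory model of memcached's gets/cas semantics."""
    def __init__(self):
        self._data = {}                    # key -> (value, cas_token)
        self._tokens = itertools.count(1)  # monotonically increasing tokens

    def set(self, key, value):
        self._data[key] = (value, next(self._tokens))

    def gets(self, key):
        # Returns (value, token) so the caller can attempt a CAS later
        return self._data.get(key)

    def cas(self, key, value, token):
        current = self._data.get(key)
        if current is None or current[1] != token:
            return False                   # another writer got there first
        self._data[key] = (value, next(self._tokens))
        return True

cache = TinyCache()
cache.set("counter", 10)
value, token = cache.gets("counter")
cache.cas("counter", value + 1, token)   # succeeds: token still current
cache.cas("counter", 0, token)           # fails: token is now stale
```

Anything beyond this single-key optimistic pattern (multi-key transactions, rollbacks) has to be built in the application.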

Code Examples

Setup and Configuration

# Installation on Ubuntu/Debian
sudo apt-get install memcached

# Installation on CentOS/RHEL
sudo yum install memcached

# Start server (-m memory limit in MB, -p port, -u run-as user, -d daemonize)
memcached -m 64 -p 11211 -u memcache -d

# Start with more memory, bound to localhost only, verbose logging
memcached -m 512 -p 11211 -l 127.0.0.1 -d -v

Basic Operations (CRUD)

# Basic operations using Python (python-memcached)
import memcache

# Connection setup
mc = memcache.Client(['127.0.0.1:11211'], debug=0)

# Store data (SET)
mc.set("user:1001", {"name": "John", "age": 30}, time=3600)
mc.set("counter", 100, time=300)

# Retrieve data (GET)
user_data = mc.get("user:1001")
counter = mc.get("counter")

# Update data
mc.set("user:1001", {"name": "John", "age": 31}, time=3600)

# Delete data
mc.delete("user:1001")
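
Memcached imposes hard limits that the application must respect: keys are at most 250 bytes and may not contain whitespace or control characters, and values default to a 1 MB maximum. A small validation helper (hypothetical, not part of any client library) catches bad keys before they reach the server:

```python
MAX_KEY_BYTES = 250  # memcached's hard key-length limit

def validate_key(key: str) -> str:
    raw = key.encode("utf-8")
    if len(raw) > MAX_KEY_BYTES:
        raise ValueError(f"key exceeds {MAX_KEY_BYTES} bytes")
    if any(b <= 32 or b == 127 for b in raw):
        raise ValueError("key contains whitespace or control characters")
    return key

validate_key("user:1001")  # passes; "user 1001" or a 300-byte key would raise
```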

Advanced Operations

# Bulk operations with multiple keys
keys = ["user:1001", "user:1002", "user:1003"]
users = mc.get_multi(keys)

# Atomic increment/decrement (the key must already hold a numeric value)
mc.incr("page_views", delta=1)
mc.decr("remaining_count", delta=5)

# Conditional store (only if key doesn't exist)
success = mc.add("lock:resource:123", "locked", time=30)

# Replace operation (only if key exists)
mc.replace("config:cache_timeout", 600, time=7200)
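
The add-based lock shown above generalizes into a small helper: because add() succeeds only when the key is absent, it acts as a best-effort mutex, with the TTL as a safety timeout if the holder crashes. A sketch assuming a python-memcached-style client (with_lock and its arguments are illustrative names, not library API):

```python
def with_lock(mc, resource, ttl, critical_section):
    # add() is atomic on the server: only one client can create the key
    lock_key = f"lock:{resource}"
    if not mc.add(lock_key, "locked", time=ttl):
        return None  # lock held elsewhere; caller should retry or back off
    try:
        return critical_section()
    finally:
        mc.delete(lock_key)  # the TTL still frees the lock if we crash here
```

This is best-effort only: if the lock expires mid-section and another client acquires it, the delete() here removes the other client's lock. Schemes that must survive that race store a unique token as the lock value and verify it before releasing.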

Proxy Feature (Memcached 1.6+)

-- Proxy configuration file (config.lua)
pools{
    main = {
        backends = {
            "127.0.0.1:11214",
            "127.0.0.1:11215",
        }
    }
}

routes{
    default = route_direct{ child = "main" }
}

# Start in proxy mode
memcached -o proxy_config=routelib.lua,proxy_arg=config.lua -p 11212 &

Practical Example

// Node.js implementation example (using the "memcached" npm package)
const Memcached = require('memcached');
const mc = new Memcached('127.0.0.1:11211');

// Cache strategy implementation
async function getUserProfile(userId) {
    const cacheKey = `user_profile:${userId}`;
    
    // Try to get from cache
    return new Promise((resolve, reject) => {
        mc.get(cacheKey, async (err, data) => {
            if (err) return reject(err);
            
            if (data) {
                // Cache hit
                console.log('Cache hit');
                resolve(data);
            } else {
                // Cache miss: fetch from database
                console.log('Cache miss');
                const profileData = await fetchUserFromDatabase(userId);
                
                // Store in cache for 1 hour
                mc.set(cacheKey, profileData, 3600, (setErr) => {
                    if (setErr) console.error('Cache set error:', setErr);
                });
                
                resolve(profileData);
            }
        });
    });
}

Best Practices

# Connection pooling and error handling
import memcache
import json
import logging

class MemcachedManager:
    def __init__(self, servers=None):
        # Avoid a mutable default argument; fall back to localhost
        self.mc = memcache.Client(servers or ['127.0.0.1:11211'], debug=0)
        self.logger = logging.getLogger(__name__)
    
    def safe_get(self, key, default=None):
        try:
            value = self.mc.get(key)
            # Compare against None: `value or default` would also
            # mask legitimately cached falsy values (0, "", False)
            return default if value is None else value
        except Exception as e:
            self.logger.error(f"Memcached get error: {e}")
            return default
    
    def safe_set(self, key, value, ttl=3600):
        try:
            return self.mc.set(key, value, time=ttl)
        except Exception as e:
            self.logger.error(f"Memcached set error: {e}")
            return False
    
    def cache_with_fallback(self, key, fetch_func, ttl=3600):
        # Try to get from cache
        data = self.safe_get(key)
        if data is not None:
            return data
        
        # Fallback: fetch original data
        try:
            fresh_data = fetch_func()
            self.safe_set(key, fresh_data, ttl)
            return fresh_data
        except Exception as e:
            self.logger.error(f"Fallback fetch error: {e}")
            raise
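
One refinement worth adding to a manager like this: when many keys are populated together (after a deploy or a batch job), identical TTLs make them all expire at the same moment, and the database absorbs a thundering herd of refetches. Jittering each TTL spreads those expirations out (a sketch; the 10% default jitter fraction is an arbitrary choice):

```python
import random

def jittered_ttl(base_ttl: int, jitter_frac: float = 0.1) -> int:
    # Randomize each key's TTL within +/- jitter_frac of the base,
    # so co-populated keys do not all expire in the same instant
    jitter = int(base_ttl * jitter_frac)
    return base_ttl + random.randint(-jitter, jitter)

# e.g. safe_set(key, value, ttl=jittered_ttl(3600))  # 3240..3960 seconds
```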

CLI Operations

# Check statistics ("quit" closes the connection so nc does not hang)
printf "stats\r\nquit\r\n" | nc 127.0.0.1 11211

# Check item counts
printf "stats items\r\nquit\r\n" | nc 127.0.0.1 11211

# Check server settings
printf "stats settings\r\nquit\r\n" | nc 127.0.0.1 11211

# Delete a specific key
printf "delete user:1001\r\nquit\r\n" | nc 127.0.0.1 11211

# Flush cache (invalidates all items; memory is not freed immediately)
printf "flush_all\r\nquit\r\n" | nc 127.0.0.1 11211