Memcached

High-performance, distributed memory object caching system: a simple key-value store with a lightweight, multi-threaded design.

Overview

Memcached is a high-performance, distributed in-memory caching system. Designed to reduce database load and accelerate dynamic web applications, it operates as a simple key-value store. With its lightweight architecture and proven scalability, Memcached can handle millions of requests per second and has served as a caching backbone for high-traffic sites such as Facebook, Twitter, and YouTube. Its multi-threaded design makes efficient use of modern hardware, while the simple protocol keeps overhead minimal.

Details

As of 2024, Memcached remains a veteran, proven solution for web application acceleration. Its simple design philosophy and high performance let it handle millions of requests per second with exceptional reliability. The multi-threaded architecture, efficient slab-allocation memory management, and client-side distributed hashing provide excellent scalability. Operating as a pure in-memory cache without persistence, it focuses on delivering maximum speed with minimal complexity. TCP-based communication, plus optional UDP support (often disabled for security), allows flexible deployment across varied network configurations.

Key Features

  • Ultra-High Performance: Multi-threaded architecture for millions of requests/second
  • Distributed Caching: Client-side hash distribution across multiple servers
  • Simple Key-Value Model: String keys mapped to opaque binary values
  • Efficient Memory Management: Slab allocation for optimal memory usage
  • Atomic Operations: Built-in increment/decrement operations
  • Auto-Expiration: TTL (Time To Live) and LRU eviction policies
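One TTL subtlety is worth calling out: memcached interprets expiration values greater than 30 days (2,592,000 seconds) as absolute Unix timestamps rather than relative offsets. A small helper can make long TTLs explicit (a sketch; the name `normalize_ttl` is ours, not part of any client library):

```python
import time

THIRTY_DAYS = 30 * 24 * 60 * 60  # 2,592,000 seconds

def normalize_ttl(seconds: int) -> int:
    """Return an expiration value memcached will treat as 'seconds from now'.

    Memcached interprets expiration values above 30 days as absolute Unix
    timestamps, so longer TTLs must be converted before being sent.
    """
    if seconds <= THIRTY_DAYS:
        return seconds
    return int(time.time()) + seconds

print(normalize_ttl(3600))  # 3600 -- short TTLs pass through unchanged
```

Without this conversion, a TTL like 31 days would be read as a timestamp in 1970 and the item would expire immediately.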

Pros and Cons

Pros

  • Extremely high performance with simple, proven architecture
  • Easy horizontal scaling through client-side distribution
  • Minimal resource overhead and memory efficiency
  • Wide language support and mature ecosystem
  • No single point of failure in distributed setup
  • Battle-tested reliability in high-traffic production environments

Cons

  • No built-in persistence or data durability
  • Limited data model (opaque values only; no lists, hashes, or other structures)
  • No built-in clustering or replication features
  • Client-side hashing complexity for consistent distribution
  • Memory-only storage means data loss on restart
  • Basic security features requiring external protection

Code Examples

Installation and Basic Setup

# Ubuntu/Debian installation
sudo apt update
sudo apt install memcached libmemcached-tools

# Start service
sudo systemctl start memcached
sudo systemctl enable memcached

# CentOS/RHEL installation
sudo dnf install epel-release
sudo dnf install memcached libmemcached

# Start service
sudo systemctl start memcached
sudo systemctl enable memcached

# Docker deployment
docker run --name memcached-cache \
  -p 11211:11211 \
  -d memcached:1.6-alpine

# Docker with memory limit
docker run --name memcached-cache \
  -p 11211:11211 \
  -d memcached:1.6-alpine memcached -m 64

# Source build
sudo apt-get install build-essential libevent-dev
wget https://memcached.org/latest -O memcached-latest.tar.gz
tar -zxf memcached-latest.tar.gz
cd memcached-*/
./configure --prefix=/usr/local/memcached
make && make test && sudo make install

Basic Configuration and Startup Options

# Basic startup
memcached -d -m 64 -p 11211 -u memcached

# Detailed options
memcached -d -m 128 -p 11211 -U 11211 -l 127.0.0.1 -u memcached \
  -c 1024 -t 4 -f 1.25 -n 48 -v
# -d  run as a daemon
# -m  memory limit (MB)
# -p  TCP port
# -U  UDP port (0 to disable)
# -l  listening address
# -u  run as this user
# -c  maximum concurrent connections
# -t  worker threads
# -f  chunk size growth factor
# -n  minimum chunk size
# -v  verbose logging

# Production-recommended configuration (UDP disabled for security)
memcached \
  -d \
  -m 512 \
  -p 11211 \
  -U 0 \
  -l 127.0.0.1 \
  -u memcached \
  -c 2048 \
  -t 8 \
  -o slab_reassign,slab_automove,lru_crawler,lru_maintainer \
  -o maxconns_fast,hash_algorithm=murmur3

systemd Service Configuration

# /etc/systemd/system/memcached.service
[Unit]
Description=Memcached
After=network.target

[Service]
Type=simple
User=memcached
Group=memcached
# Run in the foreground (no -d) so systemd can supervise the process
ExecStart=/usr/bin/memcached -m 128 -p 11211 -u memcached -l 127.0.0.1
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Basic Operations via Telnet/Netcat

# Connect with telnet
telnet localhost 11211

# Connect with netcat
nc localhost 11211

Basic command examples:

# Set data
set mykey 0 3600 5
hello
STORED

# Get data
get mykey
VALUE mykey 0 5
hello
END

# Get multiple keys
get key1 key2 key3

# Delete data
delete mykey
DELETED

# Clear all data
flush_all
OK

# Conditional operations
# add: Set only if key doesn't exist
add newkey 0 3600 5
world
STORED

# replace: Replace only if key exists
replace mykey 0 3600 7
updated
STORED

# append: Append to existing value
append mykey 0 3600 6
_value
STORED

# prepend: Prepend to existing value
prepend mykey 0 3600 7
prefix_
STORED

# Numeric operations
set counter 0 0 1
0
STORED

# Increment
incr counter 1
1

# Decrement
decr counter 1
0

# Large increment
incr counter 100
100
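The commands above follow a fixed wire format: a header line (`<command> <key> <flags> <exptime> <bytes>`), then the data block, each terminated by CRLF. A short Python sketch that builds these frames makes the framing explicit (function names are ours, for illustration):

```python
def build_set(key: str, value: bytes, flags: int = 0, exptime: int = 0) -> bytes:
    """Build a raw ASCII-protocol 'set' frame: header line, data block, CRLF."""
    header = f"set {key} {flags} {exptime} {len(value)}\r\n".encode()
    return header + value + b"\r\n"

def build_get(*keys: str) -> bytes:
    """Build a 'get' frame; multiple keys are space-separated on one line."""
    return ("get " + " ".join(keys) + "\r\n").encode()

frame = build_set("mykey", b"hello", exptime=3600)
print(frame)  # b'set mykey 0 3600 5\r\nhello\r\n'
```

Sending such a frame over a TCP socket to port 11211 is exactly what the telnet session above does by hand.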

Statistics and Monitoring

# General statistics
stats
STAT pid 12345
STAT uptime 3600
STAT curr_connections 5
STAT total_connections 100
STAT bytes 1024
STAT curr_items 10
STAT total_items 50
STAT evictions 2
STAT get_hits 85
STAT get_misses 15
END

# Item statistics
stats items
STAT items:1:number 5
STAT items:1:age 1234
STAT items:1:evicted 0
STAT items:1:evicted_nonzero 0
STAT items:1:evicted_time 0
END

# Slab statistics
stats slabs
STAT 1:chunk_size 96
STAT 1:chunks_per_page 10922
STAT 1:total_pages 1
STAT 1:total_chunks 10922
STAT 1:used_chunks 5
STAT 1:free_chunks 10917
STAT 1:get_hits 25
STAT 1:cmd_set 10
END

# Settings verification
stats settings
STAT maxbytes 67108864
STAT maxconns 1024
STAT tcpport 11211
STAT udpport 11211
STAT verbosity 0
STAT num_threads 4
STAT growth_factor 1.25
STAT chunk_size 48
END

# Size distribution (development only)
stats sizes
STAT 96 5
STAT 120 2
STAT 152 1
END
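The hit rate is typically derived client-side from the `get_hits` and `get_misses` counters. A small parser for the `STAT <name> <value>` format (a sketch; function names are ours):

```python
def parse_stats(raw: str) -> dict:
    """Parse 'STAT <name> <value>' lines into a dict, coercing integer values."""
    stats = {}
    for line in raw.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            name, value = parts[1], parts[2]
            stats[name] = int(value) if value.isdigit() else value
    return stats

def hit_rate(stats: dict) -> float:
    """Fraction of get commands served from cache; 0.0 when no gets yet."""
    hits = stats.get("get_hits", 0)
    misses = stats.get("get_misses", 0)
    total = hits + misses
    return hits / total if total else 0.0

sample = "STAT get_hits 85\r\nSTAT get_misses 15\r\nEND"
print(hit_rate(parse_stats(sample)))  # 0.85
```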

Python Client Implementation (pymemcache)

from pymemcache.client.base import Client
import time

# Single server connection
client = Client(('localhost', 11211))

# Data operations
client.set('key', 'value', expire=3600)
result = client.get('key')
client.delete('key')

# Numeric operations (store counters as strings; incr/decr expect ASCII digits)
client.set('counter', '0')
client.incr('counter', 1)
client.decr('counter', 1)

# Batch operations
client.set_many({'key1': 'value1', 'key2': 'value2'}, expire=3600)
values = client.get_many(['key1', 'key2'])

# Advanced operations
try:
    # Conditional set
    success = client.add('unique_key', 'value', expire=3600)
    if success:
        print("Key was added")
    
    # Replace existing
    success = client.replace('existing_key', 'new_value', expire=3600)
    
    # Append to existing value
    client.append('key', '_suffix')
    
    # Get with CAS (Compare and Swap)
    result, cas = client.gets('key')
    if cas:
        success = client.cas('key', 'new_value', cas, expire=3600)
        if success:
            print("CAS operation successful")
    
except Exception as e:
    print(f"Memcached operation failed: {e}")

# Connection pooling
from pymemcache.client.base import PooledClient

pooled_client = PooledClient(
    ('localhost', 11211),
    max_pool_size=100,
    connect_timeout=5,
    timeout=2
)

Distributed Configuration (Python)

from pymemcache.client.hash import HashClient

# Multiple server configuration
servers = [
    ('server1', 11211),
    ('server2', 11211),
    ('server3', 11211)
]

client = HashClient(servers)

# Automatically distributed to appropriate server
client.set('user:1000', user_data)
client.set('session:abc123', session_data)

# Custom distribution implementation
import hashlib
from pymemcache.client.base import Client

class DistributedMemcache:
    def __init__(self, servers):
        self.servers = [Client(server) for server in servers]
        self.server_count = len(self.servers)
    
    def _get_server(self, key):
        hash_value = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return self.servers[hash_value % self.server_count]
    
    def get(self, key):
        server = self._get_server(key)
        return server.get(key)
    
    def set(self, key, value, expire=0):
        server = self._get_server(key)
        return server.set(key, value, expire)

# Usage
servers = [('server1', 11211), ('server2', 11211), ('server3', 11211)]
cache = DistributedMemcache(servers)
cache.set('user:1000', user_data)

Consistent Hashing Implementation

import bisect
import hashlib

class ConsistentHash:
    def __init__(self, servers, replicas=150):
        self.replicas = replicas
        self.ring = {}
        self.sorted_keys = []
        
        for server in servers:
            self.add_server(server)
    
    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)
    
    def add_server(self, server):
        for i in range(self.replicas):
            key = self._hash(f"{server}:{i}")
            self.ring[key] = server
            bisect.insort(self.sorted_keys, key)
    
    def remove_server(self, server):
        for i in range(self.replicas):
            key = self._hash(f"{server}:{i}")
            del self.ring[key]
            self.sorted_keys.remove(key)
    
    def get_server(self, key):
        if not self.ring:
            return None
        
        hash_key = self._hash(key)
        idx = bisect.bisect_right(self.sorted_keys, hash_key)
        if idx == len(self.sorted_keys):
            idx = 0
        return self.ring[self.sorted_keys[idx]]

# Usage example
servers = ['server1:11211', 'server2:11211', 'server3:11211']
ch = ConsistentHash(servers)

# Get server for key
server = ch.get_server('user:12345')
print(f"Key 'user:12345' maps to {server}")

# Handle server addition/removal
ch.add_server('server4:11211')
ch.remove_server('server1:11211')
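To quantify the benefit, the sketch below (simplified, with hypothetical server names and a smaller replica count) counts how many of 1,000 keys get remapped when a fourth server is added, comparing naive modulo hashing with a hash ring like the one above:

```python
import bisect
import hashlib

def h(s: str) -> int:
    # Stable hash so results are reproducible across processes
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def build_ring(servers, replicas=100):
    # Each server contributes `replicas` virtual points on the ring
    return sorted((h(f"{s}:{i}"), s) for s in servers for i in range(replicas))

def lookup(ring, key):
    # First ring point clockwise from the key's hash (wrapping around)
    idx = bisect.bisect_right(ring, (h(key), "")) % len(ring)
    return ring[idx][1]

keys = [f"user:{i}" for i in range(1000)]
old_servers, new_servers = ["s1", "s2", "s3"], ["s1", "s2", "s3", "s4"]

old_ring, new_ring = build_ring(old_servers), build_ring(new_servers)
moved_ring = sum(lookup(old_ring, k) != lookup(new_ring, k) for k in keys)

# Modulo hashing: almost every key's server index changes when N changes
moved_mod = sum(old_servers[h(k) % 3] != new_servers[h(k) % 4] for k in keys)

print(f"consistent hashing remapped {moved_ring}, modulo remapped {moved_mod}")
```

With the ring, only keys falling into the new server's arcs move (roughly 1/N of them); modulo hashing remaps about three quarters of the keyspace, invalidating most of the cache at once.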

PHP Implementation

<?php
$memcached = new Memcached();

// Add servers
$memcached->addServer('localhost', 11211);
$memcached->addServer('server2', 11211);

// Set options
$memcached->setOption(Memcached::OPT_DISTRIBUTION, Memcached::DISTRIBUTION_CONSISTENT);
$memcached->setOption(Memcached::OPT_HASH, Memcached::HASH_MD5);
$memcached->setOption(Memcached::OPT_LIBKETAMA_COMPATIBLE, true);

// Data operations
$memcached->set('key', 'value', 3600);
$value = $memcached->get('key');
$memcached->delete('key');

// Batch operations
$items = [
    'key1' => 'value1',
    'key2' => 'value2'
];
$memcached->setMulti($items, 3600);
$values = $memcached->getMulti(['key1', 'key2']);

// Error handling
if ($memcached->getResultCode() !== Memcached::RES_SUCCESS) {
    echo 'Memcached error: ' . $memcached->getResultMessage();
}
?>

Node.js Implementation (memjs)

const memjs = require('memjs');

// Create client
const client = memjs.Client.create('localhost:11211');

// Data operations
await client.set('key', 'value', {expires: 3600});
const value = await client.get('key');
await client.delete('key');

// Conditional operations
await client.add('newkey', 'value');
await client.replace('existingkey', 'newvalue');

// Multiple servers (memjs expects a comma-separated server string)
const clientMulti = memjs.Client.create(
    'server1:11211,server2:11211,server3:11211'
);

// Error handling
try {
    const result = await client.get('nonexistent');
    console.log('Value:', result.value?.toString());
} catch (error) {
    console.error('Memcached error:', error.message);
}

// Batch operations
const keys = ['key1', 'key2', 'key3'];
const results = await Promise.all(
    keys.map(key => client.get(key))
);

Performance Optimization

# Memory tuning - check slab classes
memcached -vv
# slab class   1: chunk size        80 perslab   13107
# slab class   2: chunk size       104 perslab   10082

# Growth factor adjustment (default: 1.25)
memcached -f 1.1  # More granular slab classes
memcached -f 2.0  # Coarser slab classes

# Minimum chunk size adjustment
memcached -n 64   # Start from 64 bytes

# Thread optimization (match CPU cores)
memcached -t $(nproc)

# Connection limit adjustment
memcached -c 4096

# Fast connection processing
memcached -o maxconns_fast

# Advanced optimization
# LRU functionality
memcached -o lru_crawler,lru_maintainer

# Automatic slab management
memcached -o slab_reassign,slab_automove

# High-performance hash algorithm
memcached -o hash_algorithm=murmur3

# Combined optimized settings
memcached -o slab_reassign,slab_automove,lru_crawler,lru_maintainer,maxconns_fast,hash_algorithm=murmur3
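The `-f` (growth factor) and `-n` (minimum chunk size) flags above define the ladder of slab chunk sizes. This sketch approximates that ladder (simplified: real memcached also rounds sizes to 8-byte alignment and caps the number of classes):

```python
def slab_chunk_sizes(min_chunk=48, growth_factor=1.25, max_size=1024 * 1024):
    """Approximate the slab-class chunk sizes derived from -n and -f.

    Simplified sketch: real memcached additionally aligns each size to an
    8-byte boundary and limits the total number of classes.
    """
    sizes = []
    size = min_chunk
    while size < max_size // 2:
        sizes.append(size)
        size = max(size + 1, int(size * growth_factor))
    return sizes

classes = slab_chunk_sizes()
print(classes[:5])  # [48, 60, 75, 93, 116]
```

A smaller growth factor yields more classes and less per-item overhead (items waste less space inside their chunk), at the cost of more slab classes competing for memory pages.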

Security Configuration

# Network security - localhost only
memcached -l 127.0.0.1

# Specific IP only
memcached -l 192.168.1.100

# Disable UDP (DDoS prevention)
memcached -U 0

# Firewall configuration
sudo ufw allow from 192.168.1.0/24 to any port 11211
sudo ufw deny 11211

# iptables configuration
# Allow specific subnet only
sudo iptables -A INPUT -p tcp --dport 11211 -s 192.168.1.0/24 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 11211 -j DROP

# Rate limiting
sudo iptables -A INPUT -p tcp --dport 11211 -m limit --limit 25/minute --limit-burst 100 -j ACCEPT

Monitoring and Maintenance

#!/bin/bash
# memcached-monitor.sh

MEMCACHED_HOST="localhost"
MEMCACHED_PORT="11211"

# Connection test
if ! echo "version" | nc -w1 $MEMCACHED_HOST $MEMCACHED_PORT > /dev/null 2>&1; then
    echo "ERROR: Cannot connect to Memcached"
    exit 1
fi

# Get statistics
STATS=$(echo "stats" | nc -w1 $MEMCACHED_HOST $MEMCACHED_PORT)

# Calculate hit rate
GET_HITS=$(echo "$STATS" | grep "STAT get_hits" | awk '{print $3}')
GET_MISSES=$(echo "$STATS" | grep "STAT get_misses" | awk '{print $3}')
TOTAL_GETS=$((GET_HITS + GET_MISSES))

if [ $TOTAL_GETS -gt 0 ]; then
    HIT_RATE=$(echo "scale=2; $GET_HITS * 100 / $TOTAL_GETS" | bc)
    echo "Hit Rate: ${HIT_RATE}%"
fi

# Memory usage
BYTES=$(echo "$STATS" | grep "STAT bytes " | awk '{print $3}')
LIMIT_MAXBYTES=$(echo "$STATS" | grep "STAT limit_maxbytes" | awk '{print $3}')
MEMORY_USAGE=$(echo "scale=2; $BYTES * 100 / $LIMIT_MAXBYTES" | bc)
echo "Memory Usage: ${MEMORY_USAGE}%"

# Check evictions
EVICTIONS=$(echo "$STATS" | grep "STAT evictions" | awk '{print $3}')
echo "Evictions: $EVICTIONS"

Best Practices and Error Handling

import logging
from pymemcache.client.base import Client
from pymemcache.exceptions import MemcacheError

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class MemcachedWrapper:
    def __init__(self, servers, **kwargs):
        if isinstance(servers, str):
            servers = [servers]
        
        self.clients = [Client(server, **kwargs) for server in servers]
        self.server_count = len(self.clients)
    
    def _get_client(self, key):
        # NOTE: Python's built-in hash() is randomized per process; use a stable
        # hash (e.g. hashlib.md5) if the mapping must survive restarts
        hash_value = hash(key) % self.server_count
        return self.clients[hash_value]
    
    def safe_get(self, key, default=None):
        try:
            client = self._get_client(key)
            result = client.get(key)
            return result if result is not None else default
        except MemcacheError as e:
            logger.warning(f"Cache get failed for key {key}: {e}")
            return default
    
    def safe_set(self, key, value, expire=3600):
        try:
            client = self._get_client(key)
            return client.set(key, value, expire)
        except MemcacheError as e:
            logger.warning(f"Cache set failed for key {key}: {e}")
            return False
    
    def safe_delete(self, key):
        try:
            client = self._get_client(key)
            return client.delete(key)
        except MemcacheError as e:
            logger.warning(f"Cache delete failed for key {key}: {e}")
            return False

# Key design best practices
class CacheKeys:
    @staticmethod
    def user_profile(user_id):
        return f"user:profile:{user_id}"
    
    @staticmethod
    def session(session_id):
        return f"session:{session_id}"
    
    @staticmethod
    def article_cache(article_id, version=1):
        return f"v{version}:article:{article_id}"
    
    @staticmethod
    def search_results(query, page=1):
        # URL-safe key encoding
        import urllib.parse
        safe_query = urllib.parse.quote(query.encode('utf-8'))
        return f"search:{safe_query}:page:{page}"

# Usage example
cache = MemcachedWrapper([
    ('server1', 11211),
    ('server2', 11211)
])

# TTL constants
SHORT_TTL = 300    # 5 minutes
MEDIUM_TTL = 3600  # 1 hour
LONG_TTL = 86400   # 24 hours

# Cache usage patterns
user_key = CacheKeys.user_profile(12345)
user_data = cache.safe_get(user_key)

if user_data is None:
    # Cache miss - fetch from database
    user_data = fetch_user_from_db(12345)
    cache.safe_set(user_key, user_data, MEDIUM_TTL)

print(f"User data: {user_data}")
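The cache-aside pattern above can be packaged as a decorator. This is our own sketch (not part of pymemcache); it works against any object exposing `safe_get`/`safe_set` like the wrapper above, demonstrated here with an in-memory stand-in so it runs without a server:

```python
import functools
import json

def cached(cache, key_fn, ttl=3600):
    """Cache-aside decorator: check the cache first, else call through and store."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args):
            key = key_fn(*args)
            hit = cache.safe_get(key)
            if hit is not None:
                return json.loads(hit)
            value = fn(*args)
            cache.safe_set(key, json.dumps(value), ttl)
            return value
        return wrapper
    return decorator

class DictCache:
    """In-memory stand-in for MemcachedWrapper (ignores TTL; demo only)."""
    def __init__(self):
        self.store = {}
    def safe_get(self, key, default=None):
        return self.store.get(key, default)
    def safe_set(self, key, value, expire=3600):
        self.store[key] = value
        return True

cache = DictCache()
calls = []

@cached(cache, lambda user_id: f"user:profile:{user_id}")
def load_user(user_id):
    calls.append(user_id)  # stands in for a database query
    return {"id": user_id, "name": f"user{user_id}"}

load_user(42)
load_user(42)  # second call is served from the cache
print(len(calls))  # 1
```

Serializing through JSON keeps values as plain strings, which fits memcached's opaque-value model; swap in another serializer for non-JSON data.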

Memcached remains one of the most reliable and battle-tested caching solutions available. Its simplicity, performance, and wide adoption make it an excellent choice for accelerating web applications and reducing database load in distributed environments.