Memcached

cache library, distributed cache, high-speed, lightweight, simple

GitHub Overview

linsomniac/python-memcached

A python memcached client library.

Stars: 467
Watchers: 21
Forks: 201
Created: March 27, 2013
Language: Python
License: -

Topics

None


Overview

Memcached is a high-performance, distributed in-memory object caching system. It implements a lightweight cache over simple key-value storage, reaches high throughput through its multi-threaded design, and is used above all to reduce database load in dynamic web applications. As of 2025, Memcached sits alongside Redis as a leading option for high-throughput caching of simple values and continues to be chosen where simplicity and raw performance matter most, especially in large-scale web services.
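
The pattern behind "reducing database load" is cache-aside: read from the cache first and query the database only on a miss. A minimal sketch using python-memcached (the client shown later on this page); get_user_from_db is a hypothetical stand-in for a real database query.

import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def get_user_from_db(user_id):
    # Hypothetical placeholder for the real database lookup
    return {"id": user_id, "name": "John Doe"}

def get_user(user_id):
    key = f"user:{user_id}"
    user = mc.get(key)                      # 1. Try the cache first
    if user is None:
        user = get_user_from_db(user_id)    # 2. Cache miss: go to the database
        mc.set(key, user, time=3600)        # 3. Cache the result for an hour
    return user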

Details

The Memcached 1.6 series remains under active development as of 2025, adding TLS support, a built-in proxy, hot-key detection, and automatic slab rebalancing. Its multi-threaded design makes efficient use of multiple CPU cores, which can give it an edge over Redis when serving large volumes of simple key-value data. As a pure caching layer with no external dependencies, it offers maximum simplicity and predictability. It is deliberately limited to temporary data storage with no persistence, which keeps operation lightweight and memory-efficient.
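
A rough sketch of how some of these options are turned on at startup; the flag names below come from recent 1.6 builds and should be checked against memcached -h for your version, and the certificate paths are placeholders.

# Worker threads (-t), memory limit in MB (-m), daemon mode (-d)
memcached -d -m 1024 -t 8 -p 11211

# TLS termination (requires a build compiled with TLS support)
memcached -d -p 11211 -Z \
    -o ssl_chain_cert=/etc/memcached/cert.pem,ssl_key=/etc/memcached/key.pem

# Slab reassignment and automatic rebalancing (on by default in recent releases)
memcached -d -o slab_reassign,slab_automove=1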

Key Features

  • Multi-threaded Design: High throughput through efficient utilization of multiple CPU cores
  • Simple Protocol: Concise text-based command system
  • Horizontal Scaling: Distributed placement through client-side hashing
  • Memory Efficiency: Efficient memory management with slab allocator
  • High Stability: More than two decades of production track record
  • Lightweight Implementation: Minimal feature set and low resource consumption

Pros and Cons

Pros

  • High performance with sub-millisecond response times comparable to Redis
  • Multi-threading gives it an advantage with large, high-traffic datasets
  • Extremely simple configuration and maintenance
  • Predictable memory usage (key-value only)
  • Lightweight and stable operations
  • Rich client libraries and framework integrations

Cons

  • No data persistence functionality (data loss on server restart)
  • No complex data structure support (strings only)
  • Lack of clustering and replication features
  • Limited high availability features (no failover)
  • No publish/subscribe functionality
  • Limited atomic operations (essentially increment/decrement and CAS)

Code Examples

Installation and Setup

# Installation on Ubuntu/Debian
sudo apt update
sudo apt install memcached

# Installation on CentOS/RHEL
sudo yum install memcached

# Launch with Docker
docker run --name memcached-server -p 11211:11211 -d memcached:1.6

# Start service and check status
sudo systemctl start memcached
sudo systemctl enable memcached
sudo systemctl status memcached

# Check configuration file
cat /etc/memcached.conf

Basic Cache Operations

# Connection test with telnet
telnet localhost 11211

# Basic SET/GET operations
# Syntax: set <key> <flags> <exptime in seconds> <bytes>, value on the following line
set user_name 0 3600 4
john
STORED

get user_name
VALUE user_name 0 4
john
END

# Numeric increment/decrement
set counter 0 0 1
0
STORED

incr counter 5
5

decr counter 2
3

# Multiple key retrieval
set key1 0 3600 6
value1
STORED

set key2 0 3600 6  
value2
STORED

get key1 key2
VALUE key1 0 6
value1
VALUE key2 0 6
value2
END
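
A few more text-protocol commands round out the basics; the responses shown are what memcached returns on success.

# Delete a key
delete user_name
DELETED

# Refresh the TTL of an existing item (600 seconds)
touch key1 600
TOUCHED

# Invalidate every item in the cache
flush_all
OK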

Using Client Libraries (Python)

import memcache

# Server connection
mc = memcache.Client(['127.0.0.1:11211'], debug=0)

# Basic cache operations
mc.set("user:1", "John Doe", time=3600)
user = mc.get("user:1")
print(user)  # "John Doe"

# Multiple key operations
mc.set_multi({
    "user:1": "John Doe",
    "user:2": "Jane Smith", 
    "user:3": "Bob Johnson"
}, time=3600)

users = mc.get_multi(["user:1", "user:2", "user:3"])
print(users)  # {'user:1': 'John Doe', 'user:2': 'Jane Smith', 'user:3': 'Bob Johnson'}

# Conditional update (CAS) - python-memcached only tracks CAS ids when
# the client is created with cache_cas=True
mc_cas = memcache.Client(['127.0.0.1:11211'], cache_cas=True)
mc_cas.set("inventory_count", 100)
gets_result = mc_cas.gets("inventory_count")
if gets_result is not None:
    # CAS (Compare And Swap): the write succeeds only if the value is unchanged
    success = mc_cas.cas("inventory_count", 95)
    print(f"CAS Success: {success}")

# TTL check and deletion
mc.set("temp_data", "temporary", time=60)
mc.delete("temp_data")

# Get statistics
stats = mc.get_stats()
print(stats)

Cache Implementation with Node.js

const memjs = require('memjs');

// Server connection
const client = memjs.Client.create('localhost:11211');

// Promise-based operations
async function cacheOperations() {
    try {
        // Set data
        await client.set('session:abc123', 'user_data', {expires: 3600});
        
        // Get data
        const result = await client.get('session:abc123');
        if (result.value) {
            console.log('Cached data:', result.value.toString());
        }
        
        // Multiple operations
        await Promise.all([
            client.set('key1', 'value1', {expires: 1800}),
            client.set('key2', 'value2', {expires: 1800}),
            client.set('key3', 'value3', {expires: 1800})
        ]);
        
        // Increment
        await client.set('view_count', '100');
        const newCount = await client.increment('view_count', 1);
        console.log('New view count:', newCount.value?.toString());
        
        // Delete
        await client.delete('session:abc123');
        
    } catch (error) {
        console.error('Cache error:', error);
    } finally {
        client.close();
    }
}

cacheOperations();

Distributed Memcached Cluster

import memcache
import hashlib

# Distributed configuration with multiple servers
servers = [
    '192.168.1.10:11211',
    '192.168.1.11:11211', 
    '192.168.1.12:11211'
]

# python-memcached shards keys across all listed servers automatically
mc = memcache.Client(servers, debug=0)

# Manual key-to-server mapping: simple hash + modulo (not true consistent hashing)
def get_server_for_key(key, servers):
    hash_value = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return servers[hash_value % len(servers)]

# Sharding wrapper that routes each key to one backend client
class DistributedMemcache:
    def __init__(self, servers):
        self.servers = servers
        self.clients = {
            server: memcache.Client([server]) 
            for server in servers
        }
    
    def _get_client(self, key):
        server = get_server_for_key(key, self.servers)
        return self.clients[server]
    
    def set(self, key, value, time=0):
        client = self._get_client(key)
        return client.set(key, value, time)
    
    def get(self, key):
        client = self._get_client(key)
        return client.get(key)
    
    def delete(self, key):
        client = self._get_client(key)
        return client.delete(key)

# Usage example
distributed_cache = DistributedMemcache(servers)
distributed_cache.set("user:1000", "John Doe", time=3600)
user_data = distributed_cache.get("user:1000")
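
With modulo hashing as above, adding or removing a server remaps most keys. A minimal consistent-hashing (hash ring) sketch, shown here as an illustration rather than as part of python-memcached, keeps all but roughly 1/N of the keys in place when membership changes.

import bisect
import hashlib

class HashRing:
    """Consistent hash ring with virtual nodes per server."""

    def __init__(self, servers, replicas=100):
        self.replicas = replicas
        self.ring = {}          # hash position -> server
        self.sorted_keys = []   # sorted hash positions
        for server in servers:
            self.add_server(server)

    def _hash(self, value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def add_server(self, server):
        for i in range(self.replicas):
            pos = self._hash(f"{server}#{i}")
            self.ring[pos] = server
            bisect.insort(self.sorted_keys, pos)

    def remove_server(self, server):
        for i in range(self.replicas):
            pos = self._hash(f"{server}#{i}")
            del self.ring[pos]
            self.sorted_keys.remove(pos)

    def get_server(self, key):
        # Walk clockwise to the first virtual node at or after the key's hash
        pos = self._hash(key)
        idx = bisect.bisect(self.sorted_keys, pos) % len(self.sorted_keys)
        return self.ring[self.sorted_keys[idx]]

# Usage: route keys through the ring instead of the modulo function above
ring = HashRing(servers)
print(ring.get_server("user:1000"))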

Session Management Implementation

<?php
// Session management in PHP
class MemcachedSessionHandler implements SessionHandlerInterface {
    private $memcached;
    private $ttl = 3600; // 1 hour
    
    public function __construct($servers) {
        $this->memcached = new Memcached();
        $this->memcached->addServers($servers);
    }
    
    public function open($save_path, $session_name) {
        return true;
    }
    
    public function close() {
        return true;
    }
    
    public function read($session_id) {
        $data = $this->memcached->get("session:$session_id");
        return $data === false ? '' : $data;
    }
    
    public function write($session_id, $data) {
        return $this->memcached->set("session:$session_id", $data, $this->ttl);
    }
    
    public function destroy($session_id) {
        return $this->memcached->delete("session:$session_id");
    }
    
    public function gc($maxlifetime) {
        // Memcached automatically removes expired data with TTL
        return true;
    }
}

// Register session handler
$servers = [
    ['localhost', 11211],
    ['192.168.1.20', 11211]
];

$handler = new MemcachedSessionHandler($servers);
session_set_save_handler($handler, true);

// Start session
session_start();
$_SESSION['user_id'] = 1234;
$_SESSION['username'] = 'john_doe';
?>

Monitoring and Performance Optimization

# Check statistics
echo "stats" | nc localhost 11211

# Important statistics
# curr_connections: Current connections
# get_hits: Hit count
# get_misses: Miss count  
# hit ratio (derived, not reported directly): get_hits / (get_hits + get_misses)

# Check slab information (memory usage)
echo "stats slabs" | nc localhost 11211

# Check item information
echo "stats items" | nc localhost 11211

# Performance monitoring script
#!/bin/bash
while true; do
    echo "=== Memcached Stats $(date) ==="
    echo "stats" | nc localhost 11211 | grep -E "(get_hits|get_misses|curr_connections|bytes)"
    echo ""
    sleep 10
done
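
Since memcached reports raw counters rather than a ready-made hit ratio, it has to be derived; a small sketch that does this in Python using python-memcached's get_stats():

import memcache

mc = memcache.Client(['127.0.0.1:11211'])

# get_stats() returns a list of (server, stats_dict) pairs
for server, stats in mc.get_stats():
    hits = int(stats.get('get_hits', 0))
    misses = int(stats.get('get_misses', 0))
    total = hits + misses
    ratio = hits / total if total else 0.0
    print(f"{server}: hit ratio {ratio:.2%} "
          f"({hits} hits, {misses} misses, {stats.get('bytes', '?')} bytes in use)")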