KeyDB
A Redis-compatible, multi-threaded NoSQL database. Forked from Redis in 2019, it achieves high performance through multi-threaded processing.
Overview
KeyDB is a "Redis-compatible multi-threaded NoSQL database" developed as a high-performance cache server. Born in 2019 as a fork of Redis, it addresses Redis's best-known limitation, single-threaded command execution, by handling requests on multiple threads. Because it keeps full protocol and API compatibility with Redis, existing applications can migrate seamlessly, making it a notable next-generation cache solution.
Details
As of 2025, KeyDB maintains its established position as a pioneer of multi-threaded Redis alternatives. Originally developed by EQ Alpha Technology and maintained by Snap Inc. (Snapchat) since its acquisition of the project in 2022, it has demonstrated stability and reliability over more than six years in production use. Its feature set tracks the Redis 6.0 series, and its multi-threaded event loop delivers near-linear throughput scaling with CPU core count until shared-state synchronization becomes the bottleneck. It significantly outperforms single-threaded Redis in high-concurrency workloads, and protocol compatibility allows existing client libraries and tools to be used as-is.
Key Features
- Multi-threaded Processing: Near-linear throughput scaling with CPU core count
- Full Redis Compatibility: Seamless migration from existing Redis applications (verified in the sketch after this list)
- Active Replication: High availability and data consistency assurance
- Flash Storage (KeyDB on FLASH): Keeps hot data in RAM and stores the rest on SSD when the dataset outgrows memory
- Cluster Support: Horizontal scaling and distributed processing
- SSL/TLS Support: Secure communication and data protection
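The compatibility claim is easy to check in practice: a stock redis-py client connects to KeyDB without any KeyDB-specific code. A minimal sketch, assuming a local instance on the default port 6379:
import redis

# A stock Redis client talks to KeyDB unchanged: same protocol, same commands.
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
assert r.ping()                                # KeyDB answers the standard PING
r.set('compat:check', 'ok', ex=30)             # ordinary Redis SET with a 30-second TTL
print(r.get('compat:check'))                   # -> ok
print(r.info('server').get('redis_version'))   # KeyDB reports a Redis-compatible version string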
Pros and Cons
Pros
- Easy migration from existing systems due to full Redis compatibility
- Near-linear performance scaling with CPU core count through multi-threaded processing
- Excellent throughput performance in high-concurrency workloads
- Existing Redis client libraries and tools can be used as-is
- High availability through active replication
- Large-volume data support through flash storage
Cons
- Less documentation and operational know-how available than for mainline Redis
- Throughput scaling plateaus at high thread counts, since core data structures are still lock-protected
- Potential time lag in following Redis latest features
- Smaller community size than Redis with limited third-party support
- Risk of unexpected behavior in complex workloads
- Uncertainty about enterprise support and long-term maintenance
Code Examples
Basic Installation and Setup
# Launch KeyDB using Docker
docker run -d --name keydb-server -p 6379:6379 eqalpha/keydb
# Launch with configuration file
docker run -d --name keydb-server \
-p 6379:6379 \
-v /path/to/keydb.conf:/etc/keydb/keydb.conf \
eqalpha/keydb keydb-server /etc/keydb/keydb.conf
# Connect using KeyDB client to verify
docker exec -it keydb-server keydb-cli
127.0.0.1:6379> PING
PONG
# Build from source
git clone https://github.com/Snapchat/KeyDB.git
cd KeyDB
make -j4
make install
# Start KeyDB server
keydb-server /etc/keydb/keydb.conf
# Connect KeyDB client
keydb-cli -h localhost -p 6379
Multi-threaded Configuration and Performance Optimization
# Basic multi-threaded configuration in keydb.conf
# Number of server threads (recommended: 1/2-3/4 of CPU cores)
server-threads 4
# Enable thread affinity
server-thread-affinity true
# Active client balancing
active-client-balancing yes
# Memory usage limit
maxmemory 2gb
maxmemory-policy allkeys-lru
# Persistence configuration
save 900 1
save 300 10
save 60 10000
# Enable RDB compression
rdbcompression yes
# AOF configuration
appendonly yes
appendfsync everysec
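Whether this tuning actually took effect can be verified at runtime over the standard Redis CONFIG interface. A minimal redis-py sketch, assuming a local instance; note that server-threads is a start-up parameter, so it can be read but not changed while the server is running:
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Read tuning parameters back via CONFIG GET (returned as a dict).
print(r.config_get('maxmemory'))          # e.g. {'maxmemory': '2147483648'}
print(r.config_get('maxmemory-policy'))   # e.g. {'maxmemory-policy': 'allkeys-lru'}
print(r.config_get('server-threads'))     # readable, but only settable in keydb.conf

# Memory limit and eviction policy can be adjusted without a restart.
r.config_set('maxmemory', '4gb')
r.config_set('maxmemory-policy', 'allkeys-lru')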
Performance Testing and Benchmarking
# Basic benchmark (redis-benchmark works as-is; the bundled keydb-benchmark is equivalent)
redis-benchmark -h localhost -p 6379 -t get,set -n 100000 -c 50
# Multi-threaded performance verification
redis-benchmark -h localhost -p 6379 -t set -n 1000000 -c 100 -d 1024
# Various operation benchmarks
redis-benchmark -h localhost -p 6379 -t ping,set,get,incr,lpush,rpush,lpop,rpop,sadd,hset,spop,lrange,mset -n 100000
# Pipeline performance test
redis-benchmark -h localhost -p 6379 -t set,get -n 100000 -P 10
# Performance measurement with varying concurrent clients
for clients in 10 50 100 200; do
echo "Testing with $clients clients:"
redis-benchmark -h localhost -p 6379 -t get,set -n 50000 -c $clients
done
# KeyDB-specific statistics verification
keydb-cli --stat
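redis-benchmark measures the server from the outside; the same comparison can be scripted from application code to see what pipelining buys a specific workload. A rough sketch with redis-py (operation count, batch size, and key names are arbitrary):
import time
import redis

r = redis.Redis(host='localhost', port=6379)
N = 10000

# Individual round-trips: one network hop per SET.
start = time.time()
for i in range(N):
    r.set(f'bench:plain:{i}', i)
plain_ops = N / (time.time() - start)

# Pipelined: one network hop per batch of 100 commands.
start = time.time()
pipe = r.pipeline()
for i in range(N):
    pipe.set(f'bench:pipe:{i}', i)
    if i % 100 == 99:
        pipe.execute()
pipe.execute()
piped_ops = N / (time.time() - start)

print(f'plain: {plain_ops:,.0f} ops/s, pipelined: {piped_ops:,.0f} ops/s')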
Active Replication and Cluster Configuration
# Master server configuration (keydb-master.conf)
# Basic settings
bind 0.0.0.0
port 6379
server-threads 4
# Enable active replication
active-replica yes
replica-read-only no
# Replica server configuration (keydb-replica.conf)
bind 0.0.0.0
port 6380
server-threads 4
# Specify master server
replicaof 127.0.0.1 6379
active-replica yes
replica-read-only no
# KeyDB cluster node configuration
cluster-enabled yes
cluster-config-file nodes-6379.conf
cluster-node-timeout 15000
cluster-announce-ip 192.168.1.100
cluster-announce-port 6379
# Start master and replica
keydb-server /path/to/keydb-master.conf
keydb-server /path/to/keydb-replica.conf
# Create cluster (6-node configuration: 3 masters + 3 replicas)
keydb-cli --cluster create \
192.168.1.100:6379 192.168.1.101:6379 192.168.1.102:6379 \
192.168.1.100:6380 192.168.1.101:6380 192.168.1.102:6380 \
--cluster-replicas 1
# Check cluster status
keydb-cli -c -h 192.168.1.100 -p 6379 cluster nodes
keydb-cli -c -h 192.168.1.100 -p 6379 cluster info
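Replication health can also be checked programmatically rather than through keydb-cli. A small sketch using INFO replication via redis-py, assuming the master/replica addresses from the example configs above:
import redis

# Addresses taken from the example configs above.
master = redis.Redis(host='127.0.0.1', port=6379, decode_responses=True)
replica = redis.Redis(host='127.0.0.1', port=6380, decode_responses=True)

info = master.info('replication')
print(info['role'], 'with', info['connected_slaves'], 'replica(s)')

# With active-replica yes and replica-read-only no, the replica accepts writes.
replica.set('written:on:replica', '1')
print(replica.get('written:on:replica'))  # -> 1
# Note: propagation back to the master requires a bidirectional setup
# (each node configured as replicaof the other).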
Python Usage Examples
# Python (using redis-py library)
import redis
import time
from concurrent.futures import ThreadPoolExecutor
# KeyDB connection configuration
r = redis.Redis(
host='localhost',
port=6379,
decode_responses=True,
socket_keepalive=True,
socket_keepalive_options={},
health_check_interval=30
)
# Connection verification
print(r.ping()) # True
# Basic operations
r.set('key1', 'value1')
r.set('key2', 'value2', ex=60) # expire in 60 seconds
print(r.get('key1')) # value1
# Hash operations
r.hset('user:1001', mapping={
'name': 'John Doe',
'email': '[email protected]',
'age': 30
})
user_data = r.hgetall('user:1001')
print(user_data)
# List operations
r.lpush('tasks', 'task1', 'task2', 'task3')
task = r.rpop('tasks')
print(f"Processing: {task}")
# Set operations
r.sadd('online_users', 'user1', 'user2', 'user3')
online_users = r.smembers('online_users')
print(f"Online users: {online_users}")
# Sorted Set operations
r.zadd('leaderboard', {'player1': 1000, 'player2': 1500, 'player3': 800})
top_players = r.zrevrange('leaderboard', 0, 2, withscores=True)
print(f"Top players: {top_players}")
# Performance test (leveraging multi-threading)
def stress_test_worker(worker_id, operations=1000):
local_r = redis.Redis(host='localhost', port=6379, decode_responses=True)
start_time = time.time()
for i in range(operations):
key = f"test:{worker_id}:{i}"
value = f"value_{worker_id}_{i}"
local_r.set(key, value)
retrieved = local_r.get(key)
assert retrieved == value
end_time = time.time()
print(f"Worker {worker_id}: {operations} ops in {end_time - start_time:.2f}s")
# Concurrent processing test with 10 threads
with ThreadPoolExecutor(max_workers=10) as executor:
futures = []
for worker_id in range(10):
future = executor.submit(stress_test_worker, worker_id, 1000)
futures.append(future)
# Wait for all threads to complete
for future in futures:
future.result()
# Pipeline usage
pipe = r.pipeline()
for i in range(1000):
pipe.set(f"pipeline:key{i}", f"value{i}")
pipe.execute()
# Pub/Sub (messaging)
import threading
def message_handler():
pubsub = r.pubsub()
pubsub.subscribe('notifications')
for message in pubsub.listen():
if message['type'] == 'message':
print(f"Received: {message['data']}")
# Start message receiver thread
handler_thread = threading.Thread(target=message_handler)
handler_thread.daemon = True
handler_thread.start()
# Send message
r.publish('notifications', 'Hello from KeyDB!')
time.sleep(1)
Node.js Usage Examples
// Node.js (using ioredis library)
const Redis = require('ioredis');
// KeyDB connection configuration
const keydb = new Redis({
  host: 'localhost',
  port: 6379,
  maxRetriesPerRequest: 3,
  enableOfflineQueue: false,
  lazyConnect: true
});
// Connection verification
async function testConnection() {
try {
const result = await keydb.ping();
console.log('Connection successful:', result);
} catch (error) {
console.error('Connection failed:', error);
}
}
// Basic CRUD operations
async function basicOperations() {
// String operations
await keydb.set('user:session:12345', JSON.stringify({
userId: 12345,
loginTime: new Date().toISOString(),
permissions: ['read', 'write']
}), 'EX', 3600); // expire in 1 hour
const sessionData = await keydb.get('user:session:12345');
console.log('Session:', JSON.parse(sessionData));
// Hash operations (HSET with an object; HMSET is deprecated)
await keydb.hset('product:1001', {
  name: 'KeyDB Server',
  price: 0,
  category: 'database',
  inStock: true
});
const product = await keydb.hgetall('product:1001');
console.log('Product:', product);
// List operations (queue)
await keydb.lpush('job_queue',
JSON.stringify({ id: 1, type: 'email', data: '[email protected]' }),
JSON.stringify({ id: 2, type: 'sms', data: '+1234567890' })
);
const job = await keydb.brpop('job_queue', 10); // 10 second timeout
if (job) {
console.log('Processing job:', JSON.parse(job[1]));
}
}
// Performance measurement
async function performanceTest() {
const operations = 10000;
const batchSize = 100;
console.log(`Starting performance test: ${operations} operations`);
const startTime = Date.now();
// Batch execution with pipeline processing
for (let i = 0; i < operations; i += batchSize) {
const pipeline = keydb.pipeline();
for (let j = 0; j < batchSize && (i + j) < operations; j++) {
const key = `perf:test:${i + j}`;
const value = `value_${i + j}_${Date.now()}`;
pipeline.set(key, value);
}
await pipeline.exec();
}
const endTime = Date.now();
const duration = endTime - startTime;
const opsPerSec = Math.round(operations / (duration / 1000));
console.log(`Completed ${operations} operations in ${duration}ms`);
console.log(`Performance: ${opsPerSec} ops/sec`);
}
// Pub/Sub messaging
async function setupMessaging() {
const subscriber = new Redis({
host: 'localhost',
port: 6379
});
const publisher = new Redis({
host: 'localhost',
port: 6379
});
// Message reception setup
subscriber.subscribe('user_events', 'system_alerts');
subscriber.on('message', (channel, message) => {
console.log(`[${channel}] ${message}`);
// Message type-specific processing
if (channel === 'user_events') {
const event = JSON.parse(message);
console.log(`User ${event.userId} performed ${event.action}`);
} else if (channel === 'system_alerts') {
console.log(`System Alert: ${message}`);
}
});
// Message sending example
setInterval(async () => {
await publisher.publish('user_events', JSON.stringify({
userId: Math.floor(Math.random() * 1000),
action: 'login',
timestamp: new Date().toISOString()
}));
}, 5000);
// Send system alert
await publisher.publish('system_alerts', 'KeyDB performance test completed');
}
// Execution
async function main() {
await testConnection();
await basicOperations();
await performanceTest();
await setupMessaging();
}
main().catch(console.error);
Monitoring and Maintenance
# Check KeyDB statistics
keydb-cli info all
# Memory usage status
keydb-cli info memory
# Replication status
keydb-cli info replication
# Cluster information
keydb-cli cluster info
keydb-cli cluster nodes
# Real-time monitoring
keydb-cli --stat
# Check slow log
keydb-cli slowlog get 10
# Check connected clients
keydb-cli client list
# Dynamic configuration changes
keydb-cli config set maxmemory 4gb
keydb-cli config set save "900 1 300 10 60 10000"
# Create backup
keydb-cli bgsave
# Check database size
keydb-cli dbsize
# Flush all databases (caution)
keydb-cli flushall
# Flush specific database
keydb-cli -n 1 flushdb
# Performance diagnostics
keydb-cli --latency-history -i 1
keydb-cli --latency-dist
# Memory usage analysis
keydb-cli --memkeys
keydb-cli --bigkeys
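The same statistics are available to monitoring scripts through the INFO command. A minimal polling sketch with redis-py, roughly equivalent to keydb-cli --stat (interval and fields chosen for illustration):
import time
import redis

r = redis.Redis(host='localhost', port=6379, decode_responses=True)

# Print a few headline metrics every 5 seconds; stop with Ctrl-C.
while True:
    info = r.info()
    print(
        f"clients={info['connected_clients']} "
        f"mem={info['used_memory_human']} "
        f"ops/s={info['instantaneous_ops_per_sec']} "
        f"hits={info['keyspace_hits']} misses={info['keyspace_misses']}"
    )
    time.sleep(5)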