Redis

High-performance in-memory key-value store. Functions as data structure server, cache, and message broker. Supports rich data types and atomic operations.

Cache Server | In-Memory Database | NoSQL | High Performance | Real-time | Data Structures | Pub/Sub

Overview

Redis is a high-performance in-memory data structure store that functions as a cache, database, and message broker. Supporting rich data types and atomic operations, Redis enables sub-millisecond response times and can handle millions of requests per second. With its single-threaded event-driven architecture and optional I/O threading, Redis ensures data consistency while delivering exceptional performance. Redis 8.0 unifies Redis Stack and Community Edition into a single distribution, incorporating vector search capabilities and over 30 performance optimizations for the largest performance leap in Redis history.

Details

Redis maintains its position as the de facto standard for in-memory databases, chosen by developers worldwide for real-time data processing. The unified Redis 8.0 distribution folds the previously separate Redis Stack modules, including JSON, time series, search, and vector similarity search, into a single package. An enhanced I/O threading implementation improves multi-core utilization, while RDB channel replication reduces CPU load on the master and makes full synchronization more robust. The new Vector Set data structure stores high-dimensional embeddings for semantic search and recommendation systems, making Redis a natural fit for AI-powered applications.

Key Features

  • Rich Data Structures: Strings, hashes, lists, sets, sorted sets, streams, JSON, and vector sets
  • Sub-millisecond Latency: In-memory processing with optimized data structures
  • Multiple Persistence Options: RDB snapshots and AOF (Append-Only File) logging
  • High Availability: Primary-replica replication and Redis Cluster support
  • Advanced Features: Pub/Sub messaging, Lua scripting, and Redis modules
  • AI Integration: Vector similarity search with the new Vector Set data structure

Pros and Cons

Pros

  • Exceptional performance with sub-millisecond latency for most operations
  • Rich ecosystem with extensive data types and built-in modules
  • Flexible persistence options balancing performance and durability
  • Strong clustering and replication capabilities for high availability
  • Active development with regular performance improvements and new features
  • Comprehensive client library support across all major programming languages

Cons

  • Memory-intensive as all data must fit in RAM
  • Single-threaded command execution can bottleneck CPU-bound workloads
  • Complex configuration and tuning required for optimal performance
  • Memory management becomes critical at large scales
  • License changes (dual RSALv2/SSPLv1 since 7.4, with AGPLv3 added as an option in 8.0) may affect commercial usage
  • Clustering setup and management requires careful planning

Code Examples

Installation and Basic Setup

# Ubuntu/Debian installation
sudo apt update
sudo apt install redis-server

# Start Redis service
sudo systemctl start redis-server
sudo systemctl enable redis-server

# CentOS/RHEL installation
sudo dnf install epel-release
sudo dnf install redis

# Start Redis service
sudo systemctl start redis
sudo systemctl enable redis

# macOS (Homebrew)
brew install redis

# Start Redis service
brew services start redis

# Docker deployment
docker run --name redis-cache \
  -p 6379:6379 \
  -d redis:8.0-alpine

# Docker with persistence and configuration
docker run --name redis-cache \
  -v redis_data:/data \
  -v /path/to/redis.conf:/usr/local/etc/redis/redis.conf \
  -p 6379:6379 \
  -d redis:8.0-alpine redis-server /usr/local/etc/redis/redis.conf

# Source build
sudo apt-get install build-essential tcl
wget https://download.redis.io/redis-stable.tar.gz
tar xzf redis-stable.tar.gz
cd redis-stable
make
make test
sudo make install

Basic Configuration (redis.conf)

# Network configuration
bind 127.0.0.1 ::1
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300

# General configuration
daemonize yes
pidfile /var/run/redis/redis-server.pid
loglevel notice
logfile /var/log/redis/redis-server.log

# Database configuration
databases 16

# Snapshot configuration
dir /var/lib/redis
dbfilename dump.rdb
save 60 1000
save 300 100
save 900 1

# AOF configuration
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# Memory configuration
maxmemory 256mb
maxmemory-policy allkeys-lru

# Security configuration
requirepass your_strong_password_here
# rename-command FLUSHDB ""
# rename-command FLUSHALL ""

# Replication configuration (replica)
# replicaof <masterip> <masterport>
# masterauth <master-password>

# Client configuration
maxclients 10000

# Redis 8.0 I/O threading (multi-core optimization)
io-threads 4
io-threads-do-reads yes
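
The maxmemory and maxmemory-policy settings above cap memory usage and tell Redis to evict approximately least-recently-used keys when the cap is hit. A minimal Python sketch of the allkeys-lru behavior (real Redis samples a few keys and compares idle times against a memory budget, rather than keeping an exact recency order over a key count):

```python
from collections import OrderedDict

class LRUCache:
    """Toy model of maxmemory-policy allkeys-lru: when the key budget is
    exceeded, the least recently used key is evicted."""

    def __init__(self, max_keys: int):
        self.max_keys = max_keys
        self.data = OrderedDict()

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.max_keys:
            self.data.popitem(last=False)  # evict least recently used

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # touching a key refreshes its recency
        return self.data[key]

cache = LRUCache(max_keys=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # "a" is now most recently used
cache.set("c", 3)  # budget exceeded: "b" is evicted
```

The same trade-off applies in production: a key that is read often survives eviction, which is exactly why allkeys-lru suits pure-cache workloads.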

Basic String Operations

# Basic Set/Get operations
SET user:1000:name "John Doe"
GET user:1000:name
MSET user:1001:name "Jane Smith" user:1001:email "[email protected]"
MGET user:1001:name user:1001:email

# Numeric operations
SET counter 10
INCR counter
INCRBY counter 5
DECR counter
DECRBY counter 2

# String operations with expiration
SETEX session:abc123 3600 "session_data"
SET temp_key "value" EX 300
TTL session:abc123
EXPIRE temp_key 600

# Conditional operations
SET lock:resource "process_id" NX EX 30
GET lock:resource

# String manipulation
APPEND message "Hello"
APPEND message " World"
GET message
STRLEN message
GETRANGE message 0 4
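
Every command above travels to the server in the RESP protocol as an array of bulk strings. A small encoder (illustrative only; real clients also parse replies, pipeline requests, and speak RESP3) shows what redis-cli actually puts on the wire for a SET:

```python
def encode_command(*parts: str) -> bytes:
    """Encode a command as a RESP array of bulk strings, e.g.
    GET x -> *2\r\n$3\r\nGET\r\n$1\r\nx\r\n"""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        b = p.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(b), b))
    return b"".join(out)

wire = encode_command("SET", "user:1000:name", "John Doe")
```

Length-prefixed bulk strings are why keys and values can safely contain spaces, newlines, or arbitrary binary data.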

Hash Operations

# Hash field operations
HSET user:1000 name "John Doe" email "[email protected]" age 30
HGET user:1000 name
HMGET user:1000 name email
HGETALL user:1000

# Hash manipulation
HINCRBY user:1000 age 1
HINCRBYFLOAT user:1000 score 10.5
HEXISTS user:1000 email
HDEL user:1000 age
HKEYS user:1000
HVALS user:1000
HLEN user:1000

# Hash field expiration (Redis 7.4+)
HEXPIRE user:1000 3600 email
HPEXPIRE user:1000 300000 session_id
HEXPIRETIME user:1000 email

List Operations

# List manipulation
LPUSH tasks "task1" "task2" "task3"
RPUSH queue "item1" "item2"
LLEN tasks

# List retrieval
LPOP tasks
RPOP queue
LRANGE tasks 0 -1
LINDEX tasks 0

# Blocking operations
BLPOP tasks 30
BRPOP queue 30
BLMOVE source destination RIGHT LEFT 10  # BRPOPLPUSH is deprecated since Redis 6.2

# List modification
LSET tasks 0 "updated_task"
LINSERT tasks BEFORE "task2" "new_task"
LTRIM tasks 0 9
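
The blocking pop commands above are the basis of the reliable-queue pattern: a worker atomically moves an item from the pending list to a per-worker processing list, and removes it only after the work succeeds, so a crashed worker's jobs can be recovered. A single-process sketch of that bookkeeping (against a live server this would be BLMOVE plus LREM):

```python
from collections import deque

pending = deque(["job1", "job2"])  # the source list
processing = deque()               # per-worker processing list

def reserve():
    """Model of BLMOVE pending processing RIGHT LEFT: atomically move
    the oldest job into the processing list so it survives a crash."""
    if not pending:
        return None
    job = pending.popleft()
    processing.appendleft(job)
    return job

def ack(job):
    """Model of LREM processing 1 job after the job is handled."""
    processing.remove(job)

job = reserve()
# ... handle the job ...
ack(job)
```

Anything still sitting in a processing list after a worker dies can be re-queued by a janitor process, which is the property a plain LPOP loses.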

Set Operations

# Set manipulation
SADD tags "redis" "cache" "database" "nosql"
SMEMBERS tags
SISMEMBER tags "redis"
SCARD tags

# Set operations
SADD set1 "a" "b" "c"
SADD set2 "b" "c" "d"
SINTER set1 set2
SUNION set1 set2
SDIFF set1 set2

# Random operations
SRANDMEMBER tags 2
SPOP tags
SMOVE set1 set2 "a"

Sorted Set Operations

# Sorted set manipulation
ZADD leaderboard 100 "player1" 200 "player2" 150 "player3"
ZRANGE leaderboard 0 -1 WITHSCORES
ZREVRANGE leaderboard 0 2 WITHSCORES

# Score operations
ZINCRBY leaderboard 50 "player1"
ZSCORE leaderboard "player1"
ZRANK leaderboard "player1"
ZREVRANK leaderboard "player1"

# Range queries
ZRANGEBYSCORE leaderboard 100 200
ZREVRANGEBYSCORE leaderboard 200 100
ZCOUNT leaderboard 100 200
ZREMRANGEBYSCORE leaderboard 0 99

# Lexicographical operations
ZADD words 0 "apple" 0 "banana" 0 "cherry"
ZRANGEBYLEX words [a [c
ZLEXCOUNT words [a [z
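
The leaderboard commands above keep members permanently ordered by score. A dict-based sketch of the ZADD / ZINCRBY / ZREVRANGE semantics (real sorted sets pair a hash with a skip list for O(log N) score updates):

```python
scores = {}

def zadd(member, score):
    scores[member] = score

def zincrby(member, delta):
    scores[member] = scores.get(member, 0) + delta
    return scores[member]

def zrevrange(start, stop):
    """Members ordered by descending score (inclusive index range)."""
    ordered = sorted(scores, key=lambda m: scores[m], reverse=True)
    return ordered[start:stop + 1]

zadd("player1", 100)
zadd("player2", 200)
zadd("player3", 150)
zincrby("player1", 60)   # player1 -> 160
top = zrevrange(0, 1)
```

Re-sorting on every read is the naive part of the sketch; the point of the real data structure is that insertion keeps the order, so range reads are cheap.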

Advanced Features

Pub/Sub Messaging

# Publisher
PUBLISH news:tech "Redis 8.0 released with vector search!"
PUBLISH news:sports "World Championship finals tonight"

# Subscriber
SUBSCRIBE news:tech news:sports
PSUBSCRIBE news:*

# Channel information
PUBSUB CHANNELS
PUBSUB NUMSUB news:tech
PUBSUB NUMPAT

# Pattern matching
PSUBSCRIBE user:*:notifications
PUNSUBSCRIBE user:*:notifications

Redis Streams

# Stream operations
XADD events * user "john" action "login" timestamp "2024-01-15T10:00:00Z"
XADD events * user "jane" action "logout" timestamp "2024-01-15T10:05:00Z"

# Stream reading
XRANGE events - +
XREAD COUNT 10 STREAMS events 0
XREVRANGE events + - COUNT 5

# Consumer groups
XGROUP CREATE events mygroup 0
XREADGROUP GROUP mygroup consumer1 COUNT 1 STREAMS events >
XACK events mygroup 1642234567890-0

# Stream information
XINFO STREAM events
XLEN events
XTRIM events MAXLEN 1000
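
Stream entry IDs are millisecond-timestamp/sequence pairs; XADD with * assigns a monotonically increasing ID, and XRANGE filters on that ordering. A sketch of the ID generation and range-query rules (illustrative; real streams are stored in radix trees of listpacks):

```python
entries = []        # (id_tuple, fields) in insertion order
last_id = (0, 0)

def xadd(fields, now_ms):
    """Model of XADD stream *: same (or earlier) millisecond bumps
    the sequence number so IDs never go backwards."""
    global last_id
    ms, seq = last_id
    last_id = (ms, seq + 1) if now_ms <= ms else (now_ms, 0)
    entries.append((last_id, dict(fields)))
    return f"{last_id[0]}-{last_id[1]}"

def xrange(start, end):
    """Model of XRANGE: inclusive ID range, in stream order."""
    return [fields for (eid, fields) in entries if start <= eid <= end]

first = xadd({"user": "john", "action": "login"}, now_ms=1000)
second = xadd({"user": "jane", "action": "logout"}, now_ms=1000)
third = xadd({"user": "ann", "action": "login"}, now_ms=1005)
```

This is why consumers can resume from "the last ID I saw": IDs double as stable cursors into the stream.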

Lua Scripting

-- Atomic increment with expiration (Lua script)
local current = redis.call('GET', KEYS[1])
if current == false then
    current = 0
else
    current = tonumber(current)
end

current = current + tonumber(ARGV[1])
redis.call('SET', KEYS[1], current)
redis.call('EXPIRE', KEYS[1], ARGV[2])
return current

# Script execution
EVAL "local current = redis.call('GET', KEYS[1]); if current == false then current = 0 else current = tonumber(current) end; current = current + tonumber(ARGV[1]); redis.call('SET', KEYS[1], current); redis.call('EXPIRE', KEYS[1], ARGV[2]); return current" 1 counter 10 3600

# Script loading and execution
SCRIPT LOAD "script_content_here"
EVALSHA sha1_hash 1 counter 5 1800

# Script management
SCRIPT EXISTS sha1_hash
SCRIPT FLUSH
SCRIPT KILL
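
The script above reads, increments, writes, and sets a TTL as one atomic unit, because Redis runs a script to completion without interleaving other commands. The same logic expressed in plain Python as a behavioral sketch (a dict stands in for the keyspace, with TTLs tracked separately):

```python
store, ttls = {}, {}

def incr_with_expire(key, delta, ttl_seconds):
    """Mirror of the Lua script: treat a missing key as 0, add delta,
    store the result, and (re)set the expiration."""
    current = int(store.get(key, 0)) + delta
    store[key] = current
    ttls[key] = ttl_seconds
    return current

incr_with_expire("counter", 10, 3600)  # missing key counts as 0
incr_with_expire("counter", 5, 3600)
```

Done as separate GET/SET/EXPIRE commands from a client, two concurrent callers could read the same value and lose an increment; the script (like INCR itself) makes that race impossible.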

Redis 8.0 Vector Search (Beta)

# Vector Set creation and operations
VADD vectors:embeddings VALUES 4 1.0 2.0 3.0 4.0 "doc1"
VADD vectors:embeddings VALUES 4 2.0 3.0 4.0 5.0 "doc2"
VADD vectors:embeddings VALUES 4 3.0 4.0 5.0 6.0 "doc3"

# Vector similarity search
VSIM vectors:embeddings VALUES 4 1.5 2.5 3.5 4.5 COUNT 2

# Vector operations
VCARD vectors:embeddings
VDIM vectors:embeddings
VREM vectors:embeddings "doc1"
VEMB vectors:embeddings "doc2"
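
VSIM returns the stored elements whose embeddings are most similar to the query vector. A pure-Python sketch of the underlying idea using cosine similarity (Redis actually performs approximate search over an HNSW graph with quantized vectors, which scales far beyond this brute-force scan):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

vectors = {
    "doc1": [1.0, 2.0, 3.0, 4.0],
    "doc2": [2.0, 3.0, 4.0, 5.0],
    "doc3": [-4.0, 3.0, -2.0, 1.0],
}

def vsim(query, count):
    """Rank stored elements by similarity to the query, best first."""
    ranked = sorted(vectors, key=lambda k: cosine(query, vectors[k]),
                    reverse=True)
    return ranked[:count]

nearest = vsim([1.5, 2.5, 3.5, 4.5], count=2)
```

In a real deployment the vectors come from an embedding model, so "most similar" approximates "semantically closest", which is what powers semantic search and recommendations.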

Redis Cluster Configuration

# Cluster node configuration (redis.conf)
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes

# Create cluster directories
mkdir 7000 7001 7002 7003 7004 7005

# Generate configuration files for each node
for port in 7000 7001 7002 7003 7004 7005; do
  cat > $port/redis.conf << EOF
port $port
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
dir ./$port
EOF
done

# Start cluster nodes
for port in 7000 7001 7002 7003 7004 7005; do
  cd $port
  redis-server ./redis.conf &
  cd ..
done

# Create cluster
redis-cli --cluster create \
  127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
  127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
  --cluster-replicas 1

# Cluster management
redis-cli --cluster add-node 127.0.0.1:7006 127.0.0.1:7000
redis-cli --cluster reshard 127.0.0.1:7000
redis-cli --cluster check 127.0.0.1:7000

Cluster Operations

# Cluster information
CLUSTER INFO
CLUSTER NODES
CLUSTER SLOTS

# Key slot information
CLUSTER KEYSLOT mykey

# Node management
CLUSTER MEET 127.0.0.1 7006
CLUSTER FORGET node_id
CLUSTER REPLICATE master_node_id

# Manual failover
CLUSTER FAILOVER
CLUSTER FAILOVER FORCE
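
CLUSTER KEYSLOT maps each key to one of 16384 slots using CRC16 (the XMODEM variant) modulo 16384; if the key contains a {...} hash tag, only the tag is hashed, so related keys land in the same slot and support multi-key operations. A self-contained sketch of the algorithm:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM): poly 0x1021, init 0, no reflection."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash only the first non-empty {...} hash tag if one is present."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# {user1000} keys hash to the same slot, enabling MGET etc. in a cluster
slot_a = key_slot("{user1000}.following")
slot_b = key_slot("{user1000}.followers")
```

Clients cache the slot-to-node map and route each command directly, falling back to MOVED/ASK redirects when slots migrate.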

Replication Setup

# Master configuration (redis.conf)
bind 0.0.0.0
port 6379
requirepass master_password

# Replica configuration (redis.conf)
replicaof master_ip 6379
masterauth master_password
replica-read-only yes
replica-serve-stale-data yes

Replication Management

# Replication information
INFO replication

# Promote replica to master
REPLICAOF NO ONE

# Change master
REPLICAOF new_master_ip 6379

# Replica synchronization
PSYNC replication_id offset

Performance Monitoring and Optimization

# Server information
INFO server
INFO clients
INFO memory
INFO persistence
INFO stats
INFO commandstats

# Memory analysis
MEMORY USAGE key_name
MEMORY STATS
MEMORY DOCTOR

# Performance monitoring
SLOWLOG GET 10
SLOWLOG LEN
SLOWLOG RESET

# Real-time monitoring
MONITOR

# Latency monitoring
LATENCY LATEST
LATENCY HISTORY command
LATENCY RESET

# Client information
CLIENT LIST
CLIENT INFO
CLIENT KILL ip:port

Python Client Implementation

import redis
import time

# Redis connection
r = redis.Redis(
    host='localhost',
    port=6379,
    db=0,
    decode_responses=True,
    socket_keepalive=True,
    socket_keepalive_options={},
    health_check_interval=30
)

# Connection test
try:
    response = r.ping()
    print(f"Redis connection successful: {response}")
    
    # Server information
    info = r.info()
    print(f"Redis version: {info.get('redis_version')}")
    print(f"Used memory: {info['used_memory_human']}")
    print(f"Connected clients: {info['connected_clients']}")
    
except redis.ConnectionError as e:
    print(f"Redis connection failed: {e}")

# Basic operations
r.set('user:1000:name', 'John Doe')
r.set('user:1000:age', 30)
r.setex('session:abc123', 3600, 'session_data')

name = r.get('user:1000:name')
age = r.get('user:1000:age')
print(f"User: {name}, Age: {age}")

# Hash operations
user_data = {
    'name': 'Jane Smith',
    'age': '25',
    'city': 'New York',
    'job': 'Engineer'
}

r.hset('user:2000', mapping=user_data)
user_info = r.hgetall('user:2000')
print(f"User info: {user_info}")

# List operations
r.lpush('notifications', 'Message 1', 'Message 2', 'Message 3')
notifications_count = r.llen('notifications')
print(f"Notification count: {notifications_count}")

latest_notification = r.lpop('notifications')
print(f"Latest notification: {latest_notification}")

# Set operations
r.sadd('tags:python', 'web', 'api', 'automation', 'data-science')
r.sadd('tags:javascript', 'web', 'frontend', 'nodejs', 'api')

common_tags = r.sinter('tags:python', 'tags:javascript')
print(f"Common tags: {common_tags}")

# Sorted set operations
r.zadd('leaderboard', {
    'player1': 1000, 
    'player2': 1500, 
    'player3': 800,
    'player4': 1200
})

top_players = r.zrevrange('leaderboard', 0, 2, withscores=True)
print(f"Top 3 players: {top_players}")

# Pipeline usage for batch operations
pipe = r.pipeline()
for i in range(1000):
    pipe.set(f"batch:key{i}", f"value{i}")
pipe.execute()

# Pub/Sub messaging
import threading

def message_handler():
    pubsub = r.pubsub()
    pubsub.subscribe('notifications')
    
    for message in pubsub.listen():
        if message['type'] == 'message':
            print(f"Received: {message['data']}")

# Start message handler thread
handler_thread = threading.Thread(target=message_handler)
handler_thread.daemon = True
handler_thread.start()

# Publish message
r.publish('notifications', 'Hello from Redis!')
time.sleep(1)

Node.js Implementation

const Redis = require('ioredis');

// Redis connection
const redis = new Redis({
  host: 'localhost',
  port: 6379,
  maxRetriesPerRequest: 3,
  retryStrategy: (times) => Math.min(times * 100, 2000),
  enableOfflineQueue: false,
  lazyConnect: true
});

// Connection test
async function testConnection() {
  try {
    const result = await redis.ping();
    console.log('Connection successful:', result);
  } catch (error) {
    console.error('Connection failed:', error);
  }
}

// Basic operations
async function basicOperations() {
  // String operations
  await redis.set('user:session:12345', JSON.stringify({
    userId: 12345,
    loginTime: new Date().toISOString(),
    permissions: ['read', 'write']
  }), 'EX', 3600);

  const sessionData = await redis.get('user:session:12345');
  console.log('Session:', JSON.parse(sessionData));

  // Hash operations
  await redis.hset('product:1001', {
    name: 'Redis Server',
    price: 0,
    category: 'database',
    inStock: true
  });

  const product = await redis.hgetall('product:1001');
  console.log('Product:', product);

  // List operations (queue)
  await redis.lpush('job_queue', 
    JSON.stringify({ id: 1, type: 'email', data: '[email protected]' }),
    JSON.stringify({ id: 2, type: 'sms', data: '+1234567890' })
  );

  const job = await redis.brpop('job_queue', 10);
  if (job) {
    console.log('Processing job:', JSON.parse(job[1]));
  }
}

// Performance testing
async function performanceTest() {
  const operations = 10000;
  const batchSize = 100;
  
  console.log(`Starting performance test: ${operations} operations`);
  const startTime = Date.now();

  for (let i = 0; i < operations; i += batchSize) {
    const pipeline = redis.pipeline();
    
    for (let j = 0; j < batchSize && (i + j) < operations; j++) {
      const key = `perf:test:${i + j}`;
      const value = `value_${i + j}_${Date.now()}`;
      pipeline.set(key, value);
    }
    
    await pipeline.exec();
  }

  const endTime = Date.now();
  const duration = endTime - startTime;
  const opsPerSec = Math.round(operations / (duration / 1000));
  
  console.log(`Completed ${operations} operations in ${duration}ms`);
  console.log(`Performance: ${opsPerSec} ops/sec`);
}

// Execute operations
async function main() {
  await testConnection();
  await basicOperations();
  await performanceTest();
}

main().catch(console.error);

Security Configuration

# Authentication
requirepass your_strong_password

# ACL (Access Control Lists) - Redis 6.0+
# Create users with specific permissions
ACL SETUSER alice on >alice_password ~* &* +@all
ACL SETUSER bob on >bob_password ~app:* +@read +@write -@dangerous

# Network security
bind 127.0.0.1 192.168.1.100
protected-mode yes

# Disable dangerous commands
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command EVAL ""
rename-command DEBUG ""
rename-command SHUTDOWN REDIS_SHUTDOWN_2024

Production Best Practices

# Database management
SELECT 1
DBSIZE
FLUSHDB
FLUSHALL

# Background operations
BGSAVE
BGREWRITEAOF

# Configuration management
CONFIG GET maxmemory
CONFIG SET maxmemory 512mb
CONFIG REWRITE

# Key analysis
SCAN 0 MATCH pattern* COUNT 1000
TYPE keyname
EXISTS key1 key2 key3
MEMORY USAGE keyname

# Performance analysis
INFO commandstats
SLOWLOG GET 10
CLIENT LIST

Use Case Examples

Session Management

# Session storage
HSET session:user123 name "John Doe" email "[email protected]" last_activity "2024-01-15T10:00:00Z"
EXPIRE session:user123 1800

# Session validation
HGETALL session:user123
TTL session:user123

Distributed Locking

# Acquire lock
SET lock:resource:123 "process_id" NX EX 30

# Release lock (atomic with Lua script)
EVAL "if redis.call('GET', KEYS[1]) == ARGV[1] then return redis.call('DEL', KEYS[1]) else return 0 end" 1 lock:resource:123 process_id
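
Releasing only a lock you still own requires the compare-and-delete to be atomic, which is why it is done in a script rather than a GET followed by a DEL. The check itself, modeled in Python against a dict (in production, run the EVAL above, or a vetted client-library implementation):

```python
locks = {}

def acquire(resource, owner):
    """Model of SET lock owner NX: succeed only if the lock is free."""
    if resource in locks:
        return False
    locks[resource] = owner
    return True

def release(resource, owner):
    """Model of the Lua script: delete only if we still own the lock."""
    if locks.get(resource) == owner:
        del locks[resource]
        return True
    return False

acquire("resource:123", "proc-A")
```

The owner check matters because a lock can expire mid-task: without it, a slow process A could delete a lock that process B has since legitimately acquired.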

Rate Limiting

# Fixed-window rate limiting (INCR + EXPIRE)
EVAL "local key = KEYS[1]; local window = tonumber(ARGV[1]); local limit = tonumber(ARGV[2]); local current = redis.call('INCR', key); if current == 1 then redis.call('EXPIRE', key, window) end; return current <= limit" 1 rate:api:user123 60 100
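
The script increments a per-caller counter, starts the window on the first hit, and allows requests while the counter is within the limit. The same fixed-window logic in Python with an explicit clock (note that a fixed window can admit up to twice the limit across a window boundary, which is why true sliding-window variants exist):

```python
counters = {}   # key -> (count, window_expiry)

def allow(key, window_seconds, limit, now):
    """Mirror of the Lua script: INCR, EXPIRE on first hit in the
    window, then compare the counter against the limit."""
    count, expiry = counters.get(key, (0, None))
    if expiry is None or now >= expiry:
        count, expiry = 0, now + window_seconds  # new window starts
    count += 1
    counters[key] = (count, expiry)
    return count <= limit

results = [allow("rate:api:user123", 60, 3, now=0) for _ in range(4)]
```

The fourth call in the same window is rejected, and a call after the window expiry is admitted again.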

Caching Pattern

import redis
import json

r = redis.Redis(host='localhost', port=6379, db=0)

def get_user_with_cache(user_id):
    cache_key = f"user:{user_id}"
    
    # Try cache first
    cached_user = r.get(cache_key)
    if cached_user:
        return json.loads(cached_user)
    
    # Cache miss - fetch from database
    user = fetch_user_from_database(user_id)
    
    # Cache the result
    r.setex(cache_key, 3600, json.dumps(user))
    
    return user

def invalidate_user_cache(user_id):
    cache_key = f"user:{user_id}"
    r.delete(cache_key)

Redis continues to be the most popular in-memory data store, powering everything from simple caches to complex real-time applications. With Redis 8.0's unified distribution and enhanced AI capabilities, it remains the go-to choice for developers building high-performance, scalable applications that require sub-millisecond response times and rich data modeling capabilities.