Redis
In-memory data structure store that serves as a cache, message broker, and database, known for its exceptional speed and real-time processing capabilities.
Overview
Redis (REmote DIctionary Server) is one of the world's most popular in-memory data structure stores, serving as a key-value database, cache, message broker, and streaming engine. Known for its exceptional speed and real-time processing capabilities, Redis provides microsecond-level latency for data access. It supports rich data types including strings, hashes, lists, sets, sorted sets, bitmaps, and streams. Redis has repeatedly ranked among the most loved databases in Stack Overflow developer surveys and serves as the backbone for high-performance applications across many industries.
Details
As of 2025, Redis has evolved into a comprehensive data platform, with the Redis Stack capabilities integrated into the core product, offering advanced features beyond simple caching: search, JSON, time series, and probabilistic data structures (graph processing via RedisGraph has been discontinued). Operating entirely in memory enables extremely fast reads and writes, while persistence options (RDB snapshots and AOF) ensure data durability. The system provides horizontal scaling through Redis Cluster, high availability via Sentinel and replication, Lua scripting, geospatial data processing, full-text search, and vector search. After the 2024 move to dual RSALv2/SSPLv1 licensing, Redis 8 (May 2025) added AGPLv3 as a third license option.
Key Features
- In-Memory High Performance: Microsecond-level latency for read and write operations
- Rich Data Types: Strings, hashes, lists, sets, sorted sets, bitmaps, streams, and more
- Persistence Options: RDB snapshots and AOF (Append Only File) for data durability
- Scalability: Redis Cluster with automatic sharding and horizontal distribution
- High Availability: Sentinel for automatic failover and primary-replica replication
- Extended Capabilities: Redis Stack (search, JSON, time series, ML) for enhanced functionality
Pros and Cons
Pros
- Dominant market share among cache servers, backed by a mature ecosystem
- Microsecond-level response times, ideal for real-time applications
- Rich data types and atomic operations make complex data structures efficient to work with
- Lightweight, simple design with excellent memory efficiency keeps operational costs low
- Pub/Sub and Lua scripting enable advanced real-time processing
- Highly portable and easy to operate in Docker and Kubernetes environments
Cons
- Memory-based architecture makes large datasets expensive to store and caps capacity
- The 2024 license change requires legal review for some commercial use cases
- Command execution is single-threaded, so CPU-intensive workloads don't scale on a single node
- Complex queries and aggregations are limited compared to an RDBMS
- Cluster configurations demand careful design and operational expertise
- Misconfigured persistence settings can lead to unexpected data loss
Code Examples
Installation and Basic Setup
# Ubuntu/Debian installation
sudo apt update
sudo apt install redis-server redis-tools
# Redis Stack installation (with extended features)
curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list
sudo apt update
sudo apt install redis-stack-server
# CentOS/RHEL installation
sudo dnf install epel-release
sudo dnf install redis
# Docker deployment
docker run --name redis-server \
-p 6379:6379 \
-d redis:7.4-alpine
# Redis Stack deployment (with search, JSON, etc.)
docker run --name redis-stack \
-p 6379:6379 \
-p 8001:8001 \
-d redis/redis-stack:latest
# Redis CLI connection
redis-cli
redis-cli -h localhost -p 6379
# Basic configuration check
redis-cli ping # Should return PONG
redis-cli info server
Basic Operations and Data Types
# String operations
SET user:1:name "John Doe"
GET user:1:name
MSET user:1:email "[email protected]" user:1:age "30"
MGET user:1:name user:1:email user:1:age
# Data with expiration
SETEX session:abc123 3600 "user_session_data" # Expires in 1 hour
TTL session:abc123 # Check remaining time
EXPIRE user:1:name 86400 # Set expiration to 24 hours
# Counter operations
SET page_views 0
INCR page_views # Increment by 1
INCRBY page_views 10 # Increment by 10
DECR page_views # Decrement by 1
# Hash operations (object-like data)
HSET user:2 name "Jane Smith" email "[email protected]" age 28 # HSET takes multiple fields (HMSET is deprecated)
HGET user:2 name
HGETALL user:2
HINCRBY user:2 age 1 # Increment age by 1
HDEL user:2 email # Delete email field
# List operations (array-like data)
LPUSH messages "Hello" # Add to left end
RPUSH messages "Goodbye" # Add to right end
LRANGE messages 0 -1 # Get all elements
LPOP messages # Get and remove from left end
LLEN messages # Get list length
# Set operations (unique collection)
SADD tags:1 "Redis" "NoSQL" "Cache"
SMEMBERS tags:1 # Get all members
SISMEMBER tags:1 "Redis" # Check membership
SCARD tags:1 # Get set size
# Sorted set operations (scored collection)
ZADD ranking 100 "user1" 85 "user2" 95 "user3"
ZRANGE ranking 0 -1 WITHSCORES # Get by score order
ZREVRANGE ranking 0 2 WITHSCORES # Get top 3
ZSCORE ranking "user1" # Get specific user's score
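The expiration commands shown above (SETEX, TTL, EXPIRE) boil down to one idea: a key may carry an absolute expiry timestamp, and expired keys are evicted lazily on access. A minimal Python sketch of that semantics (illustrative only, not the redis client; the `TTLStore` class is invented here):

```python
import time

class TTLStore:
    """Minimal key-value store mimicking Redis SETEX/TTL/EXPIRE semantics."""
    def __init__(self):
        self._data = {}     # key -> value
        self._expiry = {}   # key -> absolute expiry time (seconds)

    def setex(self, key, ttl, value):
        self._data[key] = value
        self._expiry[key] = time.monotonic() + ttl

    def ttl(self, key):
        # Redis convention: -2 = key missing, -1 = no expiry set
        if key not in self._data:
            return -2
        if key not in self._expiry:
            return -1
        remaining = self._expiry[key] - time.monotonic()
        if remaining <= 0:
            # lazy eviction on access, as Redis does
            del self._data[key], self._expiry[key]
            return -2
        return int(remaining)

    def get(self, key):
        return self._data.get(key) if self.ttl(key) != -2 else None

store = TTLStore()
store.setex("session:abc123", 3600, "user_session_data")
```

Real Redis additionally expires keys in the background; this sketch only shows the lazy path.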
JSON Operations and Redis Stack Features
# JSON operations (requires Redis Stack)
JSON.SET product:1 $ '{"name":"Laptop","price":999,"category":"Electronics","specs":{"cpu":"Intel i7","memory":"16GB"}}'
JSON.GET product:1
JSON.GET product:1 $.name # Get specific field
JSON.SET product:1 $.price 899 # Update price
JSON.NUMINCRBY product:1 $.price -50 # Decrease price by 50
# JSON array operations
JSON.SET products $ '{"items":[{"id":1,"name":"Product A"},{"id":2,"name":"Product B"}]}'
JSON.ARRAPPEND products $.items '{"id":3,"name":"Product C"}' # Add to array
JSON.ARRLEN products $.items # Get array length
# Search index creation (Redis Stack)
FT.CREATE product_idx ON JSON PREFIX 1 product: SCHEMA
$.name AS name TEXT
$.price AS price NUMERIC SORTABLE
$.category AS category TAG
# Full-text search execution
FT.SEARCH product_idx "@name:Laptop*" # Search products starting with Laptop
FT.SEARCH product_idx "@price:[500 1000]" # Price range search
FT.SEARCH product_idx "@category:{Electronics}" # Category search
# Time series data (Redis Stack)
TS.CREATE temperature:sensor1 RETENTION 86400000 LABELS area tokyo # 24-hour retention, labeled for MRANGE filters
TS.ADD temperature:sensor1 * 25.6 # Add temperature at current time
TS.RANGE temperature:sensor1 - + # Get all data
TS.MRANGE - + FILTER area=tokyo # Query all series labeled area=tokyo
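The retention setting above trims samples that fall outside a sliding window anchored at the newest sample. A rough Python sketch of that bookkeeping (the `Series` class is invented for illustration; Redis TimeSeries's exact trim semantics differ in detail):

```python
import bisect

class Series:
    """Sketch of a TS.ADD/TS.RANGE series with a retention window (ms)."""
    def __init__(self, retention_ms):
        self.retention_ms = retention_ms
        self.samples = []  # sorted list of (timestamp_ms, value)

    def add(self, ts_ms, value):
        bisect.insort(self.samples, (ts_ms, value))
        # drop samples older than the retention window behind the newest sample
        cutoff = ts_ms - self.retention_ms
        self.samples = [(t, v) for t, v in self.samples if t >= cutoff]

    def range(self, start_ms, end_ms):
        return [(t, v) for t, v in self.samples if start_ms <= t <= end_ms]
```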
Pub/Sub (Messaging) and Streams
# Pub/Sub (messaging)
# Subscriber side
SUBSCRIBE news:updates # Exact channel names only; use PSUBSCRIBE for patterns
# Publisher side
PUBLISH news:updates "New article published"
PUBLISH notifications:user:123 "You have a new message"
# Pattern subscription
PSUBSCRIBE events:* # Subscribe to all channels starting with events:
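The split between exact and pattern subscriptions above can be sketched as an in-process dispatcher. This is a local illustration of the delivery semantics, not a network client; Python's `fnmatch` globbing stands in for Redis's own glob matcher:

```python
from fnmatch import fnmatchcase
from collections import defaultdict

class PubSub:
    """In-process sketch of Redis SUBSCRIBE/PSUBSCRIBE dispatch."""
    def __init__(self):
        self.channels = defaultdict(list)  # exact channel -> handlers
        self.patterns = defaultdict(list)  # glob pattern -> handlers

    def subscribe(self, channel, handler):
        self.channels[channel].append(handler)

    def psubscribe(self, pattern, handler):
        self.patterns[pattern].append(handler)

    def publish(self, channel, message):
        receivers = 0
        for h in self.channels.get(channel, []):
            h(channel, message)
            receivers += 1
        for pattern, handlers in self.patterns.items():
            if fnmatchcase(channel, pattern):
                for h in handlers:
                    h(channel, message)
                    receivers += 1
        return receivers  # Redis PUBLISH also returns the receiver count
```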
# Redis Streams (persistent message queue)
# Stream creation and message addition
XADD user_actions * action "login" user_id "123" timestamp "2024-01-15T10:00:00Z"
XADD user_actions * action "purchase" user_id "456" product_id "789" amount "1500"
# Stream reading
XREAD COUNT 10 STREAMS user_actions 0 # Read 10 messages from beginning
XREAD BLOCK 0 STREAMS user_actions $ # Wait for new messages
# Consumer group creation
XGROUP CREATE user_actions processors $ MKSTREAM
XREADGROUP GROUP processors consumer1 COUNT 1 STREAMS user_actions >
# Acknowledgment
XACK user_actions processors <message-id>
# Stream information
XINFO STREAM user_actions
XLEN user_actions # Stream length
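The consumer-group commands above maintain two pieces of state per group: a delivery cursor (the `>` position) and a pending entries list (PEL) of delivered-but-unacknowledged messages. A simplified local sketch of that bookkeeping (the `Stream` class and its simplified IDs are invented for illustration):

```python
import itertools

class Stream:
    """Sketch of Redis Streams XADD/XREADGROUP/XACK bookkeeping."""
    def __init__(self):
        self.entries = []            # list of (id, fields)
        self._seq = itertools.count(1)
        self.pending = {}            # group -> {id: consumer}  (the PEL)
        self.cursor = {}             # group -> index of next undelivered entry

    def xadd(self, fields):
        entry_id = f"{next(self._seq)}-0"  # simplified: sequence number only
        self.entries.append((entry_id, fields))
        return entry_id

    def xgroup_create(self, group):
        self.pending[group] = {}
        self.cursor[group] = len(self.entries)  # like '$': only new entries

    def xreadgroup(self, group, consumer, count=1):
        start = self.cursor[group]
        batch = self.entries[start:start + count]
        self.cursor[group] += len(batch)
        for entry_id, _ in batch:
            self.pending[group][entry_id] = consumer  # unacked until XACK
        return batch

    def xack(self, group, entry_id):
        return 1 if self.pending[group].pop(entry_id, None) else 0
```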
Lua Scripting and Advanced Operations
# Simple Lua script execution
EVAL "return redis.call('GET', KEYS[1])" 1 mykey
# Complex Lua script (atomic increment with limit check)
EVAL "
local current = redis.call('GET', KEYS[1])
if current == false then
current = 0
else
current = tonumber(current)
end
if current < tonumber(ARGV[1]) then
return redis.call('INCR', KEYS[1])
else
return nil
end
" 1 counter 100
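The control flow of the Lua script above, restated in Python. In Redis the whole script executes atomically on the server; this local dict version only sketches the logic (the `bounded_incr` helper is invented here):

```python
def bounded_incr(store, key, limit):
    """Increment key only while the current value is below limit.
    Mirrors the Lua script; a plain dict stands in for Redis."""
    current = int(store.get(key, 0))  # GET returns false/None for missing keys
    if current < limit:
        store[key] = current + 1      # INCR
        return store[key]
    return None                       # script returns nil once the limit is hit

counters = {}
bounded_incr(counters, "counter", 100)
```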
# Script storage and execution
SCRIPT LOAD "return redis.call('GET', KEYS[1])" # Returns SHA1 hash
EVALSHA <sha1_hash> 1 mykey # Execute stored script
# Transaction-like processing
MULTI
SET account:1:balance 1000
SET account:2:balance 500
EXEC
# Conditional transaction (optimistic locking)
# Note: WATCH only works within a single connection/session
WATCH account:1:balance # Start watching the key
GET account:1:balance # Read current balance (e.g. 1000)
MULTI
SET account:1:balance 900 # Write the new balance computed client-side
EXEC # Returns nil if the watched key was modified by another client
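Client code typically wraps WATCH/MULTI/EXEC in a retry loop: if EXEC returns nil, re-read and try again. A single-process sketch of that compare-and-set pattern (the `versions` dict stands in for Redis's internal modification tracking; all names here are invented for illustration):

```python
def watched_decrement(store, versions, key, amount):
    """Sketch of the WATCH/MULTI/EXEC retry loop."""
    while True:
        seen_version = versions[key]       # WATCH: remember the key's state
        balance = store[key]
        new_balance = balance - amount     # computed outside the transaction
        if versions[key] == seen_version:  # EXEC succeeds only if unchanged
            store[key] = new_balance
            versions[key] += 1
            return new_balance
        # else: EXEC returned nil -> loop and retry with the fresh value
```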
Redis Cluster (Distributed Setup) and Replication
# Redis Cluster configuration example (redis.conf)
cat > redis-cluster.conf << EOF
port 7000
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
appendonly yes
EOF
# Start cluster nodes (multiple ports)
redis-server redis-cluster-7000.conf &
redis-server redis-cluster-7001.conf &
redis-server redis-cluster-7002.conf &
# Create cluster
redis-cli --cluster create \
127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 \
127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
--cluster-replicas 1
# Cluster information
redis-cli -p 7000 cluster nodes
redis-cli -p 7000 cluster info
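Cluster sharding is deterministic: each key maps to one of 16384 hash slots via `CRC16(key) mod 16384`, and a `{hash tag}` in the key restricts hashing to the tagged substring so related keys land on the same node. That computation is small enough to verify locally:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """HASH_SLOT = CRC16(key) mod 16384, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384
```

Because both keys below share the `{user1000}` tag, they hash to the same slot and can be used together in multi-key operations on a cluster.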
# Primary-replica replication setup
# Replica configuration (redis.conf)
replicaof 127.0.0.1 6379
replica-read-only yes
# Sentinel configuration (high availability)
cat > sentinel.conf << EOF
port 26379
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
EOF
# Sentinel startup
redis-sentinel sentinel.conf
Performance Monitoring and Operations Management
# Performance information
INFO all # General information
INFO memory # Memory usage
INFO stats # Statistics
INFO replication # Replication information
INFO clients # Client connection information
# Slow query log
CONFIG SET slowlog-log-slower-than 10000 # Log queries slower than 10ms
SLOWLOG GET 10 # Get last 10 slow queries
SLOWLOG RESET # Clear log
# Client monitoring
CLIENT LIST # List connected clients
CLIENT KILL TYPE normal # Disconnect normal clients
MONITOR # Real-time monitoring (development only)
# Memory usage analysis
MEMORY USAGE mykey # Memory usage of specific key
MEMORY STATS # Memory statistics
SCAN 0 MATCH user:* COUNT 100 # Pattern matching key scan
# Benchmark testing
redis-benchmark -h localhost -p 6379 -c 50 -n 10000
redis-benchmark -t set,get -n 1000000 -q # SET/GET performance test
# Data backup and restore
# RDB snapshot creation
BGSAVE # Background save
LASTSAVE # Last snapshot time
# AOF rewrite
BGREWRITEAOF # Optimize AOF file
# Database clearing
FLUSHDB # Clear current database
FLUSHALL # Clear all databases (caution!)
# Configuration changes (persistence)
CONFIG SET save "900 1 300 10 60 10000" # Auto-save configuration
CONFIG REWRITE # Write to configuration file
Redis Stack Integrated Features
# Probabilistic data structures (Bloom Filter)
BF.RESERVE unique_users 0.01 1000000 # 1% false positive rate, 1M element capacity
BF.ADD unique_users "user123" # Add element
BF.EXISTS unique_users "user123" # Check existence (may have false positives)
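A Bloom filter trades a small false-positive rate for constant memory: k hash positions per item are set in a bit array sized from the capacity and error-rate formulas. A compact Python sketch of the idea (not RedisBloom's implementation; the double-hashing scheme here is one common choice):

```python
import hashlib
import math

class BloomFilter:
    """Sketch of BF.RESERVE/BF.ADD/BF.EXISTS: no false negatives,
    tunable false-positive rate."""
    def __init__(self, capacity, error_rate):
        # standard sizing: m = -n*ln(p)/ln(2)^2 bits, k = m/n*ln(2) hashes
        self.m = math.ceil(-capacity * math.log(error_rate) / math.log(2) ** 2)
        self.k = max(1, round(self.m / capacity * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]  # double hashing

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def exists(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))
```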
# Count-Min Sketch (frequency estimation)
CMS.INITBYDIM user_events 1000 5 # Initialize
CMS.INCRBY user_events login 1 purchase 3 # Add events
CMS.QUERY user_events login purchase # Get frequencies
# Vector search (for machine learning)
FT.CREATE vector_idx ON HASH PREFIX 1 vector: SCHEMA
embedding VECTOR FLAT 6 TYPE FLOAT32 DIM 128 DISTANCE_METRIC COSINE
# Add vector
HSET vector:1 embedding "<128-dimensional float32 array>"
# Vector similarity search
FT.SEARCH vector_idx "*=>[KNN 5 @embedding $vec]" PARAMS 2 vec "<query vector>" DIALECT 2
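A FLAT vector index answers the KNN query above by brute force: compute the distance from the query to every stored vector and keep the k closest. With `DISTANCE_METRIC COSINE` that distance is 1 minus cosine similarity. A self-contained sketch (2-dimensional toy vectors instead of the 128-dimensional embeddings above):

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity, the metric behind DISTANCE_METRIC COSINE."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1 - dot / norm

def knn(index, query, k):
    """Brute-force KNN, what a FLAT index does for FT.SEARCH ... [KNN k ...]."""
    return sorted(index, key=lambda item: cosine_distance(item[1], query))[:k]

index = [
    ("vector:1", [1.0, 0.0]),
    ("vector:2", [0.0, 1.0]),
    ("vector:3", [0.9, 0.1]),
]
```

An HNSW index (the alternative to FLAT) approximates the same search in sublinear time at large scale.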
# Geospatial data processing
GEOADD locations 139.6917 35.6895 "Tokyo"
GEOADD locations 135.5023 34.6937 "Osaka"
GEODIST locations "Tokyo" "Osaka" km # Calculate distance
GEOSEARCH locations FROMLONLAT 139.6917 35.6895 BYRADIUS 100 km ASC # Locations within 100km (GEORADIUS is deprecated)
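GEODIST is a great-circle distance over the coordinates stored by GEOADD. A plain haversine calculation reproduces it closely (6372.8 km is used here as an approximate Earth radius; Redis's geo code uses a very similar constant, so results may differ slightly):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lon1, lat1, lon2, lat2):
    """Great-circle distance in km, approximating what GEODIST computes."""
    lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
    dlon, dlat = lon2 - lon1, lat2 - lat1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6372.8 * asin(sqrt(a))

# Tokyo and Osaka, same longitude/latitude pairs as the GEOADD calls above
distance = haversine_km(139.6917, 35.6895, 135.5023, 34.6937)  # roughly 400 km
```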