Database

Redis

Overview

Redis is one of the most widely used in-memory data structure stores. It supports rich data types including strings, hashes, lists, sets, sorted sets, and more, making it well suited to high-speed caching and session management. It can serve as a database, a cache, and a message broker.

Details

Redis (REmote DIctionary Server) is an open-source in-memory database first released by Salvatore Sanfilippo in 2009. Its defining characteristic is that all data is held in memory, which enables extremely fast reads and writes with sub-millisecond latency.

Key features of Redis:

  • High-speed performance through in-memory storage
  • Rich data structures (String, Hash, List, Set, Sorted Set, Bitmap, HyperLogLog, Geospatial, Stream)
  • Persistence capabilities (RDB, AOF)
  • Replication and clustering
  • Pub/Sub messaging (see the short example after this list)
  • Lua script execution
  • TTL (Time To Live) for automatic expiration
  • Redis Stack (extended features like JSON, Search, Graph, Time Series)
  • Atomic operations
  • High availability and scalability
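
Pub/Sub, listed above, is not otherwise covered in the code examples below, so here is a minimal sketch; the channel name news is made up for illustration.

# Terminal 1: subscribe and block, printing every message published to the channel
SUBSCRIBE news

# Terminal 2: publish a message; the reply is the number of subscribers that received it
PUBLISH news "cache invalidated"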

Pros and Cons

Pros

  • Ultra-fast: Sub-millisecond latency through in-memory processing
  • Rich data structures: Goes beyond simple Key-Value with diverse data types
  • Scalability: Horizontal scaling through clustering
  • Persistence: Data protection via RDB and AOF
  • Ease of use: Simple API and intuitive commands
  • Community: Active development community and ecosystem
  • Multi-purpose: Cache, database, and message broker functionality
  • Redis Stack: Feature extension through modules

Cons

  • Memory usage: All data must fit in memory
  • Cost: High memory costs for large datasets
  • Single-threaded: Command execution is single-threaded, so heavy operations (such as KEYS over a large keyspace) can block other clients (see the SCAN sketch after this list)
  • Persistence overhead: RDB snapshots and AOF rewrites consume extra CPU, memory, and disk I/O
  • Complex queries: Not suitable for SQL-like complex queries
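
Because command execution is single-threaded, one practical way to avoid long blocking calls is to iterate the keyspace incrementally with SCAN instead of KEYS. A minimal sketch; the user:* pattern is illustrative only.

# KEYS walks the entire keyspace in one blocking call -- avoid it on large datasets
KEYS user:*

# SCAN returns a cursor plus a batch of keys; repeat with the returned cursor until it is 0
SCAN 0 MATCH user:* COUNT 100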

Code Examples

Installation & Setup

# Ubuntu/Debian
sudo apt update
sudo apt install redis-server redis-tools

# Red Hat/CentOS
sudo yum install redis
sudo systemctl start redis
sudo systemctl enable redis

# macOS (Homebrew)
brew install redis
brew services start redis

# Docker
docker run --name redis-server -p 6379:6379 -d redis:7-alpine

# Redis Stack (All modules included)
docker run -p 6379:6379 -p 8001:8001 redis/redis-stack:latest

# Redis CLI connection
redis-cli
redis-cli -h localhost -p 6379
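
# Verify the server is reachable (should reply PONG)
redis-cli ping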

Basic Operations (Data Types)

# String operations
SET mykey "Hello Redis"
GET mykey
INCR counter
INCRBY counter 10
MSET key1 "value1" key2 "value2"
MGET key1 key2

# Hash operations
HSET user:1 name "John Doe" email "[email protected]" age 30
HGET user:1 name
HGETALL user:1
HINCRBY user:1 age 1

# List operations
LPUSH mylist "item1" "item2" "item3"
RPUSH mylist "item4"
LRANGE mylist 0 -1
LPOP mylist
LLEN mylist

# Set operations
SADD myset "apple" "banana" "orange"
SMEMBERS myset
SISMEMBER myset "apple"
SCARD myset
SINTER set1 set2

# Sorted Set operations
ZADD ranking 100 "Alice" 200 "Bob" 150 "Charlie"
ZRANGE ranking 0 -1 WITHSCORES
ZREVRANGE ranking 0 2
ZRANK ranking "Bob"
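
# Transactions (MULTI/EXEC): a minimal sketch of atomic command batching;
# key names below (page_views, balance) are illustrative only
MULTI
INCR page_views
EXPIRE page_views 86400
EXEC

# WATCH gives optimistic locking: EXEC is aborted if the watched key changed in the meantime
WATCH balance
MULTI
DECRBY balance 100
EXEC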

Data Structures & Advanced Features

# TTL (Time To Live) settings
SET temp_key "temporary_value" EX 3600  # Expires in 1 hour
EXPIRE mykey 300  # Expires in 5 minutes
TTL mykey  # Check remaining time

# Bitmap operations
SETBIT user_active 123 1  # Set user 123 as active
GETBIT user_active 123
BITCOUNT user_active  # Count active users

# HyperLogLog (Cardinality estimation)
PFADD unique_visitors "user1" "user2" "user3"
PFCOUNT unique_visitors
PFMERGE merged_visitors visitors_day1 visitors_day2

# Geospatial operations
GEOADD locations 139.6917 35.6895 "Tokyo" 135.5023 34.6937 "Osaka"
GEODIST locations Tokyo Osaka km
GEORADIUS locations 139.6917 35.6895 100 km

# Streams (Log streaming)
XADD mystream * user "alice" action "login" timestamp 1640995200
XRANGE mystream - +
XREAD COUNT 2 STREAMS mystream 0
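
# Consumer groups let multiple workers share a stream (message-broker style);
# group, consumer, and ID values below are illustrative only
XGROUP CREATE mystream workers $ MKSTREAM
XREADGROUP GROUP workers consumer1 COUNT 10 STREAMS mystream >
XACK mystream workers 1640995200000-0  # acknowledge the ID returned by XREADGROUP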

Redis Stack Features

# JSON operations (RedisJSON)
JSON.SET user:1001 $ '{"name":"John Doe","age":30,"skills":["Python","Redis"]}'
JSON.GET user:1001
JSON.SET user:1001 $.age 31
JSON.ARRAPPEND user:1001 $.skills '"JavaScript"'

# Search & Indexing (RediSearch)
FT.CREATE idx:users ON JSON PREFIX 1 user: SCHEMA $.name AS name TEXT $.age AS age NUMERIC
FT.SEARCH idx:users "@name:John*"
FT.SEARCH idx:users "@age:[25 35]"

# Graph Database (RedisGraph)
GRAPH.QUERY social "CREATE (:Person {name:'Alice', age:30})-[:KNOWS]->(:Person {name:'Bob', age:25})"
GRAPH.QUERY social "MATCH (p:Person) RETURN p.name, p.age"

# Time Series Data (RedisTimeSeries)
TS.CREATE temperature:room1 RETENTION 86400000 LABELS room 1 sensor temp
TS.ADD temperature:room1 * 23.5
TS.RANGE temperature:room1 - +

Optimization & Performance

# Memory usage monitoring
INFO memory
MEMORY USAGE mykey

# Performance monitoring
INFO stats
SLOWLOG GET 10  # Check slow queries
MONITOR  # Real-time command monitoring

# Configuration optimization
CONFIG GET maxmemory
CONFIG SET maxmemory 2gb
CONFIG SET maxmemory-policy allkeys-lru

# Pipeline processing (Redis CLI)
redis-cli --pipe < commands.txt
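# commands.txt holds plain commands, one per line, e.g.:
#   SET key:1 "value1"
#   SET key:2 "value2"

# Quick throughput check with the bundled redis-benchmark tool
redis-benchmark -q -n 100000 -c 50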

# Lua scripts (Atomic operations)
EVAL "
local current = redis.call('GET', KEYS[1])
if current == false then
    redis.call('SET', KEYS[1], ARGV[1])
    return 1
else
    return 0
end
" 1 mykey "new_value"

Production Use Cases

# Replication setup (Master)
# redis.conf
bind 0.0.0.0
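# Disabling protected-mode without authentication exposes the server;
# pair it with requirepass (see Security settings below) or network-level access control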
protected-mode no
port 6379

# Replication setup (Replica)
# redis.conf
replicaof 192.168.1.100 6379

# Cluster setup
redis-cli --cluster create \
  192.168.1.100:7000 192.168.1.100:7001 192.168.1.100:7002 \
  192.168.1.101:7000 192.168.1.101:7001 192.168.1.101:7002 \
  --cluster-replicas 1

# Persistence configuration (redis.conf)
# RDB settings
save 900 1    # Save RDB if at least 1 change in 900 seconds
save 300 10   # Save RDB if at least 10 changes in 300 seconds
save 60 10000 # Save RDB if at least 10000 changes in 60 seconds

# AOF settings
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# Security settings
requirepass mypassword
rename-command FLUSHDB ""
rename-command FLUSHALL ""

# Backup & Recovery
BGSAVE  # Create RDB file in background
BGREWRITEAOF  # Optimize AOF file
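
# Check the timestamp of the last successful RDB snapshot
LASTSAVE

# Locate the data directory holding dump.rdb and the AOF files;
# to restore, stop the server, copy the backup files back into this directory, and restart
CONFIG GET dir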