Garnet
Next-generation cache-store from Microsoft Research. Built in C# on .NET, it speaks the Redis-compatible RESP protocol and achieves very low latency (99.9th percentile under 300 μs).
Garnet
Garnet is a high-performance, Redis-compatible distributed key-value store developed by Microsoft Research and released as open source in 2024. Built on the Tsavorite storage engine, it delivers high throughput and low latency while speaking the same wire protocol as Redis, positioning it as a next-generation cache store.
Overview
Garnet is a cache server announced by Microsoft Research in 2024 that achieves significant gains in performance and scalability while remaining compatible with existing Redis clients. Built on modern .NET, it offers cross-platform support, extensibility, and a contemporary architecture. Garnet is already used in production within Microsoft services such as Azure Resource Manager, and in Microsoft's benchmarks it shows substantial performance improvements over Redis-compatible servers such as Redis, KeyDB, and Dragonfly.
Features
Key Capabilities
- Redis Compatibility: Works with existing Redis clients via the RESP (Redis Serialization Protocol) wire protocol
- High Performance: Substantially higher throughput than Redis, with sub-300-microsecond latency at the 99.9th percentile
- Tsavorite Storage Engine: Built on Microsoft's high-performance Tsavorite storage engine, derived from FASTER
- Clustering Support: Sharding, replication, and dynamic key migration
- .NET Integration: Efficient memory management and low-allocation design on the modern .NET runtime
- Enterprise Features: Persistence, recovery, automatic failover, and authentication
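Because Garnet speaks RESP, any existing Redis client library can talk to it unchanged. As an illustrative sketch (the helper names here are mine, not part of any client API), this is roughly what a client puts on the wire for a command and how a simple reply comes back:

```python
def encode_command(*args: str) -> bytes:
    """Encode a command as a RESP array of bulk strings (what clients send)."""
    out = [f"*{len(args)}\r\n".encode()]
    for arg in args:
        data = arg.encode()
        out.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(out)

def decode_simple_reply(buf: bytes) -> str:
    """Decode a one-line RESP reply (simple string, error, or integer)."""
    kind, body = buf[:1], buf[1:].split(b"\r\n", 1)[0].decode()
    if kind == b"-":
        raise ValueError(f"server error: {body}")
    return body

# SET greeting hello -> "*3\r\n$3\r\nSET\r\n$8\r\ngreeting\r\n$5\r\nhello\r\n"
frame = encode_command("SET", "greeting", "hello")
print(frame)
print(decode_simple_reply(b"+OK\r\n"))  # prints: OK
```

This framing is what makes Garnet a drop-in target: the server on the other end can be Redis or Garnet, and the client cannot tell the difference for supported commands.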
Architecture Characteristics
Garnet's technical advantages:
- Layered Architecture: Clear separation of responsibilities through 5-layer structure: Server, Protocol, API, Storage, and Cluster
- Dual Storage System: Optimization of Main Store (simple KV operations) and Object Store (complex data structures)
- AOF Persistence: Fast, reliable data persistence via an append-only file
- Efficient Clustering: Data sharding through Redis hash slots and dynamic rebalancing
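Garnet's clustering reuses the Redis Cluster scheme of 16384 hash slots. A minimal sketch of how a key maps to a slot, using the CRC16 (XModem variant) checksum and `{hash tag}` extraction that Redis Cluster defines (function names here are mine):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XModem), the checksum Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of 16384 slots; only the {hash tag} is hashed if present."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # non-empty tag only
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# Keys sharing a hash tag land in the same slot (useful for multi-key operations)
print(key_slot("{user:1000}.name") == key_slot("{user:1000}.email"))  # prints: True
```

Dynamic rebalancing then amounts to reassigning slot ranges between nodes and migrating the keys that hash into them.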
Performance Characteristics
- Over 10x GET throughput compared to Dragonfly in Microsoft's published benchmarks
- Better scalability under large numbers of concurrent connections than Redis and KeyDB
- Sub-300-microsecond latency at the 99.9th percentile
- Efficient handling of small batch sizes, reducing cost for large-scale applications
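The "99.9th percentile" figure above is a tail-latency metric: 999 out of 1,000 requests complete faster than it. As a sketch of how such a number is derived from measured samples (the function name and stand-in data are mine):

```python
import math

def percentile(samples, p):
    """Return the p-th percentile (0-100) via the nearest-rank method."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))
    return ordered[max(rank, 1) - 1]

# e.g. latencies in microseconds collected from 10,000 timed GETs (stand-in data)
latencies_us = [50 + (i % 400) for i in range(10_000)]
print(percentile(latencies_us, 99.9))  # prints: 449
```

Tail percentiles matter more than averages for caches, because a single slow cache call can stall an entire request that fans out across many keys.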
Pros and Cons
Advantages
- Exceptional Performance: Leads Redis-family servers by wide margins in published benchmarks
- Redis Compatibility: Easy migration of existing applications with low learning curve
- Modern Architecture: Efficient resource utilization through latest .NET technology
- Enterprise Ready: Clustering, replication, and failover capabilities
- Open Source: Free usage and modification under MIT license
- Microsoft-backed: Proven reliability in Azure and continuous development support
Disadvantages
- Limited Commands: Only supports about one-third of Redis commands (not fully compatible)
- No Lua Scripting: Lua scripting, relied on by many .NET cache/session-management libraries, was not supported at initial release
- New Technology: Limited long-term operational experience and ecosystem due to 2024 release
- .NET Dependency: Optimized for .NET environments with limited benefits in other environments
- Documentation Gaps: Limited information and community support due to being a new project
- Learning Curve: Additional learning required to understand subtle differences from Redis
Code Examples
Starting Garnet Server
# Download and run a Garnet binary release
# (asset names vary by version and platform -- check the GitHub releases page)
wget https://github.com/microsoft/garnet/releases/latest/download/garnet-linux-x64.zip
unzip garnet-linux-x64.zip
# Basic startup
./GarnetServer
# Custom configuration startup
./GarnetServer --port 6379 --memory 4g --logdir ./logs --aof
# Cluster mode startup
./GarnetServer --cluster --port 7000 --memory 2g
# Startup with configuration file
./GarnetServer --config-file garnet.conf
Docker Execution
# Docker container execution
docker run --rm -p 6379:6379 ghcr.io/microsoft/garnet
# Docker execution with custom configuration
docker run --rm -p 6379:6379 \
-v $(pwd)/data:/data \
ghcr.io/microsoft/garnet \
--memory 2g --aof --logdir /data
# Docker execution with cluster configuration
docker run --rm -p 7000:7000 \
--name garnet-node1 \
ghcr.io/microsoft/garnet \
--cluster --port 7000 --announce-ip 127.0.0.1
docker run --rm -p 7001:7001 \
--name garnet-node2 \
ghcr.io/microsoft/garnet \
--cluster --port 7001 --announce-ip 127.0.0.1
# Join the cluster (run redis-cli from the host; the Garnet image may not bundle it)
redis-cli -p 7001 cluster meet 127.0.0.1 7000
C#/.NET Client Basic Operations
using StackExchange.Redis;
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
public class GarnetClientExample
{
private static IDatabase database;
public static async Task Main(string[] args)
{
// Connect to Garnet server (Redis compatible)
var redis = ConnectionMultiplexer.Connect("localhost:6379");
database = redis.GetDatabase();
Console.WriteLine("=== Garnet Basic Operations ===");
// String operations
await database.StringSetAsync("user:1000:name", "John Doe");
await database.StringSetAsync("user:1000:email", "[email protected]");
await database.StringSetAsync("user:1000:age", "30");
string name = await database.StringGetAsync("user:1000:name");
string email = await database.StringGetAsync("user:1000:email");
int age = (int)await database.StringGetAsync("user:1000:age");
Console.WriteLine($"User: {name}, Email: {email}, Age: {age}");
// Keys with expiration
await database.StringSetAsync("session:abc123", "active", TimeSpan.FromMinutes(30));
var ttl = await database.KeyTimeToLiveAsync("session:abc123");
Console.WriteLine($"Session TTL: {ttl?.TotalSeconds} seconds");
// Hash operations
await database.HashSetAsync("product:2000", new HashEntry[]
{
new("name", "Gaming Laptop"),
new("price", "1299.99"),
new("category", "Electronics"),
new("stock", "50")
});
var productName = await database.HashGetAsync("product:2000", "name");
var productPrice = await database.HashGetAsync("product:2000", "price");
Console.WriteLine($"Product: {productName}, Price: ${productPrice}");
// List operations
await database.ListLeftPushAsync("task_queue", "process_payment");
await database.ListLeftPushAsync("task_queue", "send_email");
await database.ListLeftPushAsync("task_queue", "update_inventory");
var queueLength = await database.ListLengthAsync("task_queue");
Console.WriteLine($"Queue length: {queueLength}");
string task = await database.ListRightPopAsync("task_queue");
Console.WriteLine($"Processing task: {task}");
// Set operations
await database.SetAddAsync("user:1000:interests", "technology");
await database.SetAddAsync("user:1000:interests", "gaming");
await database.SetAddAsync("user:1000:interests", "programming");
var interests = await database.SetMembersAsync("user:1000:interests");
Console.WriteLine($"User interests: {string.Join(", ", interests)}");
// Concurrent access test
await PerformanceBenchmark();
redis.Close();
}
private static async Task PerformanceBenchmark()
{
Console.WriteLine("\n=== Performance Benchmark ===");
var tasks = new List<Task>();
int operations = 10000;
var stopwatch = System.Diagnostics.Stopwatch.StartNew();
// Concurrent write test
for (int i = 0; i < operations; i++)
{
int taskId = i;
tasks.Add(Task.Run(async () =>
{
await database.StringSetAsync($"benchmark:write:{taskId}", $"value_{taskId}");
}));
}
await Task.WhenAll(tasks);
stopwatch.Stop();
Console.WriteLine($"Write operations: {operations}");
Console.WriteLine($"Total time: {stopwatch.ElapsedMilliseconds} ms");
Console.WriteLine($"Ops/sec: {operations * 1000.0 / stopwatch.ElapsedMilliseconds:F2}");
// Concurrent read test
tasks.Clear();
stopwatch.Restart();
for (int i = 0; i < operations; i++)
{
int taskId = i;
tasks.Add(Task.Run(async () =>
{
await database.StringGetAsync($"benchmark:write:{taskId}");
}));
}
await Task.WhenAll(tasks);
stopwatch.Stop();
Console.WriteLine($"Read operations: {operations}");
Console.WriteLine($"Total time: {stopwatch.ElapsedMilliseconds} ms");
Console.WriteLine($"Ops/sec: {operations * 1000.0 / stopwatch.ElapsedMilliseconds:F2}");
}
}
Python (redis-py) Client Usage
import redis
import time
import threading
from concurrent.futures import ThreadPoolExecutor
def connect_to_garnet():
"""Connect to Garnet server"""
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
# Connection test
try:
r.ping()
print("Connected to Garnet server successfully!")
return r
except redis.ConnectionError:
print("Failed to connect to Garnet server")
return None
def basic_operations(r):
"""Demo of basic Redis-compatible operations"""
print("\n=== Basic Operations Demo ===")
# String operations
r.set('greeting', 'Hello, Garnet!')
print(f"Greeting: {r.get('greeting')}")
# Numeric operations
r.set('counter', 100)
r.incr('counter', 5)
print(f"Counter: {r.get('counter')}")
# List operations
r.lpush('notifications', 'Welcome!', 'New message', 'System update')
notifications = r.lrange('notifications', 0, -1)
print(f"Notifications: {notifications}")
# Hash operations
user_data = {
'id': '12345',
'name': 'Alice Johnson',
'role': 'developer',
'status': 'active'
}
r.hset('user:12345', mapping=user_data)
user_name = r.hget('user:12345', 'name')
user_role = r.hget('user:12345', 'role')
print(f"User: {user_name} ({user_role})")
# Set operations
r.sadd('active_users', 'user1', 'user2', 'user3', 'user4')
r.sadd('premium_users', 'user2', 'user4', 'user5')
active_count = r.scard('active_users')
premium_active = r.sinter('active_users', 'premium_users')
print(f"Active users: {active_count}, Premium active: {list(premium_active)}")
def performance_test(r):
"""Garnet performance test"""
print("\n=== Performance Test ===")
# Sequential execution test
operations = 10000
start_time = time.time()
for i in range(operations):
r.set(f'perf:seq:{i}', f'value_{i}')
sequential_time = time.time() - start_time
print(f"Sequential SET operations: {operations}")
print(f"Time: {sequential_time:.3f} seconds")
print(f"Ops/sec: {operations / sequential_time:.2f}")
# Concurrent execution test
def concurrent_operations(thread_id, ops_per_thread):
local_r = redis.Redis(host='localhost', port=6379, decode_responses=True)
for i in range(ops_per_thread):
local_r.set(f'perf:conc:{thread_id}:{i}', f'value_{thread_id}_{i}')
threads = 10
ops_per_thread = 1000
start_time = time.time()
with ThreadPoolExecutor(max_workers=threads) as executor:
futures = [executor.submit(concurrent_operations, i, ops_per_thread)
for i in range(threads)]
for future in futures:
future.result()
concurrent_time = time.time() - start_time
total_ops = threads * ops_per_thread
print(f"Concurrent SET operations: {total_ops}")
print(f"Threads: {threads}")
print(f"Time: {concurrent_time:.3f} seconds")
print(f"Ops/sec: {total_ops / concurrent_time:.2f}")
def pipeline_operations(r):
"""Efficiency through pipeline operations"""
print("\n=== Pipeline Operations ===")
# Normal operations
start_time = time.time()
for i in range(1000):
r.set(f'normal:{i}', f'value_{i}')
normal_time = time.time() - start_time
# Pipeline operations
start_time = time.time()
pipeline = r.pipeline()
for i in range(1000):
pipeline.set(f'pipeline:{i}', f'value_{i}')
pipeline.execute()
pipeline_time = time.time() - start_time
print(f"Normal operations time: {normal_time:.3f} seconds")
print(f"Pipeline operations time: {pipeline_time:.3f} seconds")
print(f"Pipeline speedup: {normal_time / pipeline_time:.2f}x")
def session_management_example(r):
"""Session management example"""
print("\n=== Session Management Example ===")
session_id = "sess_abc123"
user_id = "user_789"
# Save session data (30-minute expiration)
session_data = {
'user_id': user_id,
'login_time': str(int(time.time())),
'ip_address': '192.168.1.100',
'user_agent': 'Mozilla/5.0 Browser'
}
pipeline = r.pipeline()
pipeline.hset(f'session:{session_id}', mapping=session_data)
pipeline.expire(f'session:{session_id}', 1800) # 30 minutes
pipeline.execute()
# Add to active sessions list
r.sadd(f'user:{user_id}:sessions', session_id)
# Session validation
if r.exists(f'session:{session_id}'):
stored_user = r.hget(f'session:{session_id}', 'user_id')
login_time = r.hget(f'session:{session_id}', 'login_time')
ttl = r.ttl(f'session:{session_id}')
print(f"Valid session for user {stored_user}")
print(f"Login time: {login_time}")
print(f"Expires in: {ttl} seconds")
# Session cleanup
r.delete(f'session:{session_id}')
r.srem(f'user:{user_id}:sessions', session_id)
def main():
# Connect to Garnet server
r = connect_to_garnet()
if not r:
return
try:
# Demo various operations
basic_operations(r)
performance_test(r)
pipeline_operations(r)
session_management_example(r)
except Exception as e:
print(f"Error: {e}")
finally:
r.close()
if __name__ == "__main__":
main()
Node.js (ioredis) Client Usage
const Redis = require('ioredis');
class GarnetClient {
constructor() {
this.redis = new Redis({
host: 'localhost',
port: 6379,
enableReadyCheck: false,
maxRetriesPerRequest: null,
});
this.redis.on('connect', () => {
console.log('Connected to Garnet server');
});
this.redis.on('error', (err) => {
console.error('Garnet connection error:', err);
});
}
async basicOperations() {
console.log('\n=== Basic Operations Demo ===');
// String operations
await this.redis.set('app:version', '2.1.0');
await this.redis.setex('temp:token', 300, 'abc123xyz'); // 5-minute expiration
const version = await this.redis.get('app:version');
const token = await this.redis.get('temp:token');
const ttl = await this.redis.ttl('temp:token');
console.log(`App version: ${version}`);
console.log(`Temp token: ${token} (expires in ${ttl}s)`);
// Hash operations
const orderData = {
id: 'order_12345',
customer: 'John Smith',
amount: '299.99',
status: 'processing',
created: new Date().toISOString()
};
await this.redis.hset('order:12345', orderData);
const order = await this.redis.hgetall('order:12345');
console.log('Order data:', order);
// List operations - task queue
await this.redis.lpush('task:email',
JSON.stringify({ type: 'welcome', userId: 'user_123' }),
JSON.stringify({ type: 'order_confirmation', orderId: 'order_12345' }),
JSON.stringify({ type: 'newsletter', segment: 'premium' })
);
const queueLength = await this.redis.llen('task:email');
console.log(`Email queue length: ${queueLength}`);
// Task processing simulation
const task = await this.redis.rpop('task:email');
if (task) {
const taskData = JSON.parse(task);
console.log('Processing task:', taskData);
}
// Set operations - tag management
await this.redis.sadd('post:100:tags', 'javascript', 'redis', 'cache', 'performance');
await this.redis.sadd('post:101:tags', 'python', 'redis', 'tutorial');
const commonTags = await this.redis.sinter('post:100:tags', 'post:101:tags');
console.log('Common tags:', commonTags);
}
async performanceBenchmark() {
console.log('\n=== Performance Benchmark ===');
const operations = 10000;
// Single client performance test
console.log('Single client benchmark...');
const startTime = Date.now();
const pipeline = this.redis.pipeline();
for (let i = 0; i < operations; i++) {
pipeline.set(`bench:single:${i}`, `value_${i}`);
}
await pipeline.exec();
const singleClientTime = Date.now() - startTime;
console.log(`Single client: ${operations} operations in ${singleClientTime}ms`);
console.log(`Ops/sec: ${(operations * 1000 / singleClientTime).toFixed(2)}`);
// Multi-client performance test
console.log('Multi client benchmark...');
const clients = 10;
const opsPerClient = Math.floor(operations / clients);
const multiStartTime = Date.now();
const promises = [];
for (let c = 0; c < clients; c++) {
const client = new Redis({ host: 'localhost', port: 6379 });
const promise = (async (clientId) => {
const clientPipeline = client.pipeline();
for (let i = 0; i < opsPerClient; i++) {
clientPipeline.set(`bench:multi:${clientId}:${i}`, `value_${clientId}_${i}`);
}
await clientPipeline.exec();
client.disconnect();
})(c);
promises.push(promise);
}
await Promise.all(promises);
const multiClientTime = Date.now() - multiStartTime;
const totalOps = clients * opsPerClient;
console.log(`Multi client: ${totalOps} operations in ${multiClientTime}ms`);
console.log(`Ops/sec: ${(totalOps * 1000 / multiClientTime).toFixed(2)}`);
console.log(`Speedup: ${(singleClientTime / multiClientTime).toFixed(2)}x`);
}
async cachePattern() {
console.log('\n=== Cache Pattern Demo ===');
// Cache-aside pattern
const cacheKey = 'user:profile:456';
// 1. Check cache first
let userData = await this.redis.get(cacheKey);
if (userData) {
console.log('Cache hit:', JSON.parse(userData));
} else {
console.log('Cache miss, fetching from database...');
// 2. Fetch from database (simulation)
const dbData = {
id: 456,
name: 'Sarah Connor',
email: '[email protected]',
preferences: {
theme: 'dark',
notifications: true
},
lastLogin: new Date().toISOString()
};
// 3. Save to cache (1-hour expiration)
await this.redis.setex(cacheKey, 3600, JSON.stringify(dbData));
console.log('Data cached:', dbData);
}
// Write-through pattern
const updateData = {
id: 456,
name: 'Sarah Connor',
email: '[email protected]',
preferences: {
theme: 'light', // Theme change
notifications: true
},
lastLogin: new Date().toISOString()
};
// Database update (simulation) and cache update
await this.redis.setex(cacheKey, 3600, JSON.stringify(updateData));
console.log('Cache updated:', updateData);
}
async leaderboardExample() {
console.log('\n=== Leaderboard Example ===');
// Game score leaderboard
const scores = [
['player1', 1250],
['player2', 980],
['player3', 1100],
['player4', 1350],
['player5', 870],
];
// Add scores
for (const [player, score] of scores) {
await this.redis.zadd('leaderboard:game1', score, player);
}
// Get top 3
const top3 = await this.redis.zrevrange('leaderboard:game1', 0, 2, 'WITHSCORES');
console.log('Top 3 players:');
for (let i = 0; i < top3.length; i += 2) {
const rank = (i / 2) + 1;
console.log(`${rank}. ${top3[i]} - ${top3[i + 1]} points`);
}
// Specific player rank
const player1Rank = await this.redis.zrevrank('leaderboard:game1', 'player1');
const player1Score = await this.redis.zscore('leaderboard:game1', 'player1');
console.log(`player1 rank: ${player1Rank + 1}, score: ${player1Score}`);
}
async disconnect() {
await this.redis.quit();
console.log('Disconnected from Garnet server');
}
}
async function main() {
const client = new GarnetClient();
try {
await client.basicOperations();
await client.performanceBenchmark();
await client.cachePattern();
await client.leaderboardExample();
} catch (error) {
console.error('Error:', error);
} finally {
await client.disconnect();
}
}
// Call main function when script is executed
if (require.main === module) {
main().catch(console.error);
}
module.exports = GarnetClient;
Cluster Configuration Example
# garnet.conf - Garnet cluster configuration file
# NOTE: illustrative example; Garnet's actual configuration file format and
# option names may differ by version -- check the official documentation
# Basic configuration
port 7000
bind 0.0.0.0
# Memory configuration
memory 2g
page-size 4KB
segment-size 1g
# Cluster configuration
cluster yes
cluster-announce-ip 127.0.0.1
cluster-announce-port 7000
# Persistence configuration
aof yes
log-dir ./logs
# Authentication configuration
auth-mode password
auth-password your-secure-password
# Performance tuning
index-size 64m
object-store-memory 512m
Cluster Startup Script
#!/bin/bash
# Start a 3-node cluster (one config file per node)
./GarnetServer --config-file node1.conf --port 7000 --cluster-announce-port 7000 &
./GarnetServer --config-file node2.conf --port 7001 --cluster-announce-port 7001 &
./GarnetServer --config-file node3.conf --port 7002 --cluster-announce-port 7002 &
# Wait for node startup
sleep 5
# Configure cluster
redis-cli -p 7000 cluster meet 127.0.0.1 7001
redis-cli -p 7000 cluster meet 127.0.0.1 7002
# Assign hash slots
redis-cli -p 7000 cluster addslots {0..5461}
redis-cli -p 7001 cluster addslots {5462..10922}
redis-cli -p 7002 cluster addslots {10923..16383}
echo "Garnet cluster started successfully!"
echo "Node 1: 127.0.0.1:7000"
echo "Node 2: 127.0.0.1:7001"
echo "Node 3: 127.0.0.1:7002"
Garnet is an innovative cache server that delivers substantial performance gains while maintaining Redis compatibility, making it especially valuable for high-performance application development in .NET environments.