Complete Guide to Microservices Architecture
Overview
Microservices architecture structures an application as a collection of independently deployable, loosely coupled services. This comprehensive guide covers microservices design principles, service decomposition strategies, data management, and DevOps implementation.
In modern cloud-native application development, microservices architecture is gaining attention for the following reasons:
- Independent Deployment: Each service can be released and updated individually
- Technology Diversity: Optimal technology choices per service
- Team Autonomy: Small teams can develop and operate independently
- Fault Isolation: Single service failures don't cascade system-wide
- Horizontal Scalability: Scale specific services based on demand
Details
Core Principles of Microservices
1. Business Capability-Based Decomposition
Microservices should be decomposed based on business capabilities, not technical layers. This aligns closely with Domain-Driven Design (DDD) bounded contexts.
Good Decomposition Examples:
- User Management Service
- Order Processing Service
- Payment Service
- Inventory Management Service
Poor Decomposition Examples:
- Database Layer Service
- UI Layer Service
- Business Logic Layer Service
2. Data Independence
Each microservice should own its database and never access another service's database directly. This ensures loose coupling and service independence.
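As a minimal sketch of this principle (mirroring the services used later in this guide; the USER_SERVICE_URL variable and the /users/:id endpoint are assumptions for the example), when the Order Service needs user data it calls the User Service's API rather than querying the user database directly:
// Order Service: read user data through the User Service's published API, never its database
const getUserForOrder = async (userId) => {
  const response = await fetch(`${process.env.USER_SERVICE_URL}/users/${userId}`);
  if (!response.ok) {
    throw new Error(`User service responded with ${response.status}`);
  }
  // The Order Service depends only on the API contract, not on the user database schema
  return response.json();
};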
3. Distributed Message Bus
Inter-service communication should avoid chatty direct calls; instead, publish events over a distributed message bus (e.g., RabbitMQ or Kafka) and adopt event-driven architecture patterns.
Key Design Patterns
API Gateway Pattern
The API Gateway acts as a single entry point for all client requests, routing them to appropriate microservices.
Capabilities:
- Authentication & Authorization
- Request Routing
- Rate Limiting
- Load Balancing
- Logging & Monitoring
- Response Transformation
Database per Service Pattern
Each microservice maintains its own database, ensuring data independence.
Benefits:
- Loose coupling between services
- Technology stack flexibility
- Independent scaling
- Fault isolation
Saga Pattern
Ensures data consistency across multiple services in distributed transactions.
Implementation Approaches:
- Choreography: Each service reacts to events and publishes its own; there is no central coordinator
- Orchestration: A central coordinator invokes each participant and tracks the saga's state
Circuit Breaker Pattern
Prevents cascading failures when inter-service communication fails.
State Transitions:
- Closed: Normal state; all requests pass through while failures are counted
- Open: Failure state; requests are rejected immediately (fail fast) without reaching the downstream service
- Half-Open: Recovery verification state; a limited number of trial requests are allowed through to test whether the service has recovered
Bulkhead Pattern
Isolates different parts of the system to prevent one component's failure from affecting others.
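A minimal sketch of one way to realize the pattern in application code, using per-dependency concurrency limits (the pool sizes and the callPaymentService / callInventoryService helpers are hypothetical); the same isolation can also be achieved with separate thread pools, connection pools, or deployments:
// A minimal in-process bulkhead: each downstream dependency gets its own concurrency limit,
// so one slow or failing dependency cannot exhaust the resources needed to call the others.
class Bulkhead {
  constructor(maxConcurrent) {
    this.maxConcurrent = maxConcurrent;
    this.active = 0;
    this.queue = [];
  }
  async run(task) {
    if (this.active >= this.maxConcurrent) {
      // Wait for a slot; the releasing call hands its slot directly to the next waiter
      await new Promise(resolve => this.queue.push(resolve));
    } else {
      this.active++;
    }
    try {
      return await task();
    } finally {
      const next = this.queue.shift();
      if (next) {
        next();          // pass the slot to the next queued caller
      } else {
        this.active--;   // no waiters: release the slot
      }
    }
  }
}
// Separate pools per dependency (hypothetical limits)
const paymentBulkhead = new Bulkhead(5);
const inventoryBulkhead = new Bulkhead(20);
const payOrder = (order) => paymentBulkhead.run(() => callPaymentService(order));
const checkStock = (item) => inventoryBulkhead.run(() => callInventoryService(item));
With this split, a slow Payment Service can only exhaust its own slots; inventory calls continue to be served.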
Service Decomposition Strategies
Domain-Driven Design (DDD) Approach
Bounded Context Identification:
E-commerce Domain Example:
┌─────────────────┐ ┌─────────────────┐
│ User Management│ │ Product Catalog │
│ - Authentication│ │ - Product Info │
│ - Profile Mgmt │ │ - Categories │
│ - Authorization │ │ - Inventory │
└─────────────────┘ └─────────────────┘
┌─────────────────┐ ┌─────────────────┐
│ Order Management│ │ Payment Mgmt │
│ - Order Creation│ │ - Payment Proc │
│ - Order History │ │ - Refund Proc │
│ - Status Updates│ │ - Billing Mgmt │
└─────────────────┘ └─────────────────┘
Data Decomposition Strategy
Decomposing a Normalized Monolithic Schema:
-- Monolithic Design
CREATE TABLE orders (
id BIGINT PRIMARY KEY,
user_id BIGINT,
product_id BIGINT,
payment_id BIGINT,
shipping_address TEXT,
status VARCHAR(50),
created_at TIMESTAMP
);
-- Post-Microservices Decomposition
-- Order Service
CREATE TABLE orders (
id BIGINT PRIMARY KEY,
user_id BIGINT,
status VARCHAR(50),
created_at TIMESTAMP
);
-- Order Items Service
CREATE TABLE order_items (
id BIGINT PRIMARY KEY,
order_id BIGINT,
product_id BIGINT,
quantity INTEGER,
price DECIMAL
);
Communication Patterns and Protocols
Synchronous Communication
REST API:
// Synchronous call from Order Service to Inventory Service
const checkInventory = async (productId, quantity) => {
try {
const response = await fetch(
`${INVENTORY_SERVICE_URL}/inventory/${productId}/check`,
{
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ quantity })
}
);
    if (!response.ok) {
      throw new Error(`Inventory service responded with ${response.status}`);
    }
    return await response.json();
  } catch (error) {
    // Wrap failures so callers can apply fallbacks (see the Circuit Breaker pattern below)
    throw new ServiceUnavailableError('Inventory service unavailable');
}
};
gRPC:
// inventory.proto
syntax = "proto3";
service InventoryService {
rpc CheckAvailability(InventoryRequest) returns (InventoryResponse);
rpc ReserveItems(ReservationRequest) returns (ReservationResponse);
}
message InventoryRequest {
string product_id = 1;
int32 quantity = 2;
}
message InventoryResponse {
bool available = 1;
int32 current_stock = 2;
}
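A sketch of how a Node.js client might call this service using @grpc/grpc-js and @grpc/proto-loader (the inventory-service:50051 address and the proto file path are assumptions for the example):
// grpc-client.js — calling InventoryService from another service
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');
// Load the service definition from inventory.proto at runtime
const packageDefinition = protoLoader.loadSync('inventory.proto', {
  keepCase: true,
  defaults: true
});
const proto = grpc.loadPackageDefinition(packageDefinition);
// Address and port are assumptions; resolve them via your service discovery in practice
const client = new proto.InventoryService(
  'inventory-service:50051',
  grpc.credentials.createInsecure()
);
client.CheckAvailability({ product_id: 'prod-123', quantity: 2 }, (err, response) => {
  if (err) {
    console.error('Inventory check failed', err);
    return;
  }
  console.log('Available:', response.available, 'Stock:', response.current_stock);
});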
Asynchronous Communication
Message Queue (RabbitMQ):
const amqp = require('amqplib');
// Event Publisher (Order Service)
const publishOrderCreated = async (orderData) => {
const connection = await amqp.connect('amqp://localhost');
const channel = await connection.createChannel();
const exchangeName = 'order.events';
await channel.assertExchange(exchangeName, 'topic');
const routingKey = 'order.created';
const message = JSON.stringify({
orderId: orderData.id,
userId: orderData.userId,
items: orderData.items,
timestamp: new Date().toISOString()
});
await channel.publish(exchangeName, routingKey, Buffer.from(message));
await connection.close();
};
// Event Subscriber (Inventory Service)
const subscribeToOrderEvents = async () => {
const connection = await amqp.connect('amqp://localhost');
const channel = await connection.createChannel();
const exchangeName = 'order.events';
const queueName = 'inventory.order.queue';
await channel.assertExchange(exchangeName, 'topic');
await channel.assertQueue(queueName);
await channel.bindQueue(queueName, exchangeName, 'order.created');
await channel.consume(queueName, async (message) => {
const orderData = JSON.parse(message.content.toString());
await reserveInventory(orderData);
channel.ack(message);
});
};
Apache Kafka:
const { Kafka } = require('kafkajs');
const client = new Kafka({
  clientId: 'order-service',
  brokers: ['localhost:9092']
});
// Producer
const producer = client.producer();
await producer.connect();
await producer.send({
topic: 'order-events',
messages: [{
partition: 0,
key: `order-${orderId}`,
value: JSON.stringify({
eventType: 'ORDER_CREATED',
orderId,
userId,
items,
timestamp: Date.now()
})
}]
});
// Consumer
const consumer = client.consumer({ groupId: 'inventory-service-group' });
await consumer.connect();
await consumer.subscribe({ topic: 'order-events' });
await consumer.run({
eachMessage: async ({ topic, partition, message }) => {
const event = JSON.parse(message.value.toString());
if (event.eventType === 'ORDER_CREATED') {
await handleOrderCreated(event);
}
}
});
Data Management Strategies
Event Sourcing
// Event Store Implementation
class EventStore {
constructor(database) {
this.db = database;
}
async saveEvents(aggregateId, events, expectedVersion) {
const transaction = await this.db.beginTransaction();
try {
for (const event of events) {
await transaction.query(`
INSERT INTO events (aggregate_id, event_type, event_data, version)
VALUES (?, ?, ?, ?)
`, [aggregateId, event.type, JSON.stringify(event.data), ++expectedVersion]); // pre-increment so new events are stored above the expected version
}
await transaction.commit();
} catch (error) {
await transaction.rollback();
throw error;
}
}
async getEvents(aggregateId, fromVersion = 0) {
const result = await this.db.query(`
SELECT event_type, event_data, version
FROM events
WHERE aggregate_id = ? AND version > ?
ORDER BY version
`, [aggregateId, fromVersion]);
return result.map(row => ({
type: row.event_type,
data: JSON.parse(row.event_data),
version: row.version
}));
}
}
// Aggregate Example
class Order {
constructor() {
this.id = null;
this.items = [];
this.status = 'PENDING';
this.uncommittedEvents = [];
}
static fromEvents(events) {
const order = new Order();
events.forEach(event => order.apply(event));
return order;
}
createOrder(orderId, userId, items) {
this.raiseEvent({
type: 'ORDER_CREATED',
data: { orderId, userId, items }
});
}
addItem(productId, quantity, price) {
this.raiseEvent({
type: 'ITEM_ADDED',
data: { productId, quantity, price }
});
}
apply(event) {
switch (event.type) {
case 'ORDER_CREATED':
this.id = event.data.orderId;
this.userId = event.data.userId;
break;
case 'ITEM_ADDED':
this.items.push(event.data);
break;
}
}
raiseEvent(event) {
this.apply(event);
this.uncommittedEvents.push(event);
}
}
CQRS (Command Query Responsibility Segregation)
// Command Side (Write)
class OrderCommandHandler {
constructor(eventStore, orderRepository) {
this.eventStore = eventStore;
this.orderRepository = orderRepository;
}
async handle(command) {
switch (command.type) {
case 'CREATE_ORDER':
return await this.handleCreateOrder(command);
case 'ADD_ITEM':
return await this.handleAddItem(command);
}
}
async handleCreateOrder(command) {
const order = new Order();
order.createOrder(command.orderId, command.userId, command.items);
await this.eventStore.saveEvents(
command.orderId,
order.uncommittedEvents,
0
);
return { success: true, orderId: command.orderId };
}
}
// Query Side (Read)
class OrderQueryHandler {
constructor(readDatabase) {
this.db = readDatabase;
}
async getOrderById(orderId) {
const result = await this.db.query(`
SELECT o.*, oi.product_id, oi.quantity, oi.price
FROM orders_view o
LEFT JOIN order_items_view oi ON o.id = oi.order_id
WHERE o.id = ?
`, [orderId]);
return this.mapToOrderDto(result);
}
async getOrdersByUser(userId, page = 1, limit = 10) {
const offset = (page - 1) * limit;
const result = await this.db.query(`
SELECT * FROM orders_view
WHERE user_id = ?
ORDER BY created_at DESC
LIMIT ? OFFSET ?
`, [userId, limit, offset]);
return result.map(row => this.mapToOrderSummaryDto(row));
}
}
// Event Handler (Read View Updates)
class OrderViewUpdater {
constructor(database) {
this.db = database;
}
async on(event) {
switch (event.type) {
case 'ORDER_CREATED':
await this.createOrderView(event);
break;
case 'ITEM_ADDED':
await this.addItemToView(event);
break;
}
}
async createOrderView(event) {
await this.db.query(`
INSERT INTO orders_view (id, user_id, status, created_at)
VALUES (?, ?, 'PENDING', NOW())
`, [event.data.orderId, event.data.userId]);
}
}
Infrastructure and Deployment
Docker Containerization
Multi-stage Build Example:
# Build Stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
# Production Stage
FROM node:18-alpine AS production
WORKDIR /app
# Security: Create non-privileged user
RUN addgroup -g 1001 -S nodejs && \
adduser -S nextjs -u 1001
# Copy dependencies and application code
COPY --from=builder /app/node_modules ./node_modules
COPY --chown=nextjs:nodejs . .
# Add health check (wget ships with the Alpine base image; curl is not installed by default)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
USER nextjs
EXPOSE 3000
CMD ["npm", "start"]
Kubernetes Deployment
Service Definition:
# order-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: order-service
labels:
app: order-service
spec:
replicas: 3
selector:
matchLabels:
app: order-service
template:
metadata:
labels:
app: order-service
spec:
containers:
- name: order-service
image: order-service:v1.2.0
ports:
- containerPort: 3000
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: db-secret
key: url
- name: REDIS_URL
valueFrom:
configMapKeyRef:
name: redis-config
key: url
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "200m"
livenessProbe:
httpGet:
path: /health
port: 3000
initialDelaySeconds: 30
periodSeconds: 10
readinessProbe:
httpGet:
path: /ready
port: 3000
initialDelaySeconds: 5
periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
name: order-service
spec:
selector:
app: order-service
ports:
- port: 80
targetPort: 3000
type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: order-service-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: api.example.com
http:
paths:
- path: /orders
pathType: Prefix
backend:
service:
name: order-service
port:
number: 80
Service Mesh (Istio)
VirtualService Configuration:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: order-service-vs
spec:
hosts:
- order-service
http:
- match:
- headers:
canary:
exact: "true"
route:
- destination:
host: order-service
subset: v2
weight: 100
- route:
- destination:
host: order-service
subset: v1
weight: 90
- destination:
host: order-service
subset: v2
weight: 10
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
name: order-service-dr
spec:
host: order-service
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 3
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 50
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
Monitoring and Observability
Distributed Tracing (OpenTelemetry)
// tracing.js
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { getNodeAutoInstrumentations } = require('@opentelemetry/auto-instrumentations-node');
const { JaegerExporter } = require('@opentelemetry/exporter-jaeger');
const jaegerExporter = new JaegerExporter({
endpoint: 'http://jaeger:14268/api/traces',
});
const sdk = new NodeSDK({
traceExporter: jaegerExporter,
instrumentations: [getNodeAutoInstrumentations()]
});
sdk.start();
// Custom span creation
const { trace, SpanStatusCode } = require('@opentelemetry/api');
const tracer = trace.getTracer('order-service', '1.0.0');
const processOrder = async (orderData) => {
const span = tracer.startSpan('process_order');
try {
span.setAttributes({
'order.id': orderData.id,
'order.user_id': orderData.userId,
'order.items_count': orderData.items.length
});
// Business logic execution
const result = await executeOrderProcessing(orderData);
span.setStatus({ code: SpanStatusCode.OK });
return result;
} catch (error) {
span.setStatus({
code: SpanStatusCode.ERROR,
message: error.message
});
throw error;
} finally {
span.end();
}
};
Metrics Collection (Prometheus)
// metrics.js
const promClient = require('prom-client');
// Custom metrics definition
const httpRequestDuration = new promClient.Histogram({
name: 'http_request_duration_seconds',
help: 'Duration of HTTP requests in seconds',
labelNames: ['method', 'route', 'status_code']
});
const orderProcessingCounter = new promClient.Counter({
name: 'orders_processed_total',
help: 'Total number of orders processed',
labelNames: ['status']
});
const activeConnectionsGauge = new promClient.Gauge({
name: 'active_connections',
help: 'Number of active connections'
});
// Express middleware
const metricsMiddleware = (req, res, next) => {
const startTime = Date.now();
res.on('finish', () => {
const duration = (Date.now() - startTime) / 1000;
httpRequestDuration
.labels(req.method, req.route?.path || req.path, res.statusCode)
.observe(duration);
});
next();
};
// Metrics endpoint
app.get('/metrics', async (req, res) => {
res.set('Content-Type', promClient.register.contentType);
res.end(await promClient.register.metrics());
});
Log Management (Structured Logging)
// logger.js
const winston = require('winston');
const logger = winston.createLogger({
level: 'info',
format: winston.format.combine(
winston.format.timestamp(),
winston.format.errors({ stack: true }),
winston.format.json()
),
defaultMeta: {
service: 'order-service',
version: process.env.APP_VERSION || '1.0.0'
},
transports: [
new winston.transports.Console({
format: winston.format.combine(
winston.format.colorize(),
winston.format.simple()
)
}),
new winston.transports.File({
filename: 'error.log',
level: 'error'
}),
new winston.transports.File({
filename: 'combined.log'
})
]
});
// Usage example
const createOrder = async (orderData) => {
const correlationId = generateCorrelationId();
logger.info('Order creation started', {
correlationId,
orderId: orderData.id,
userId: orderData.userId,
itemsCount: orderData.items.length
});
try {
const result = await processOrder(orderData);
logger.info('Order created successfully', {
correlationId,
orderId: result.id,
processingTime: result.processingTime
});
return result;
} catch (error) {
logger.error('Order creation failed', {
correlationId,
orderId: orderData.id,
error: error.message,
stack: error.stack
});
throw error;
}
};
Advantages and Disadvantages
Advantages
Development & Operations
- Independent Development: Teams can develop and deploy services independently
- Technology Diversity: Optimal technology stack selection per service
- Fault Tolerance: Single service failures don't cascade system-wide
- Scalability: Scale specific services based on demand
- Continuous Deployment: Frequent releases with small changes
Business Benefits
- Faster Time to Market: Independent releases per feature
- Team Autonomy: Quick response to business requirements
- Innovation Enablement: Easy adoption of new technologies
Disadvantages
Increased Complexity
- Distributed System Challenges: Network partitions, latency, data consistency
- Operational Overhead: Monitoring and managing numerous services
- Debugging Difficulties: Complex bug identification and resolution
Initial Investment
- Infrastructure: Building CI/CD, monitoring, log management systems
- Learning Curve: Team skill acquisition and cultural changes
- Tooling: Container, orchestration, service mesh adoption
Data Management Complexity
- Distributed Transactions: Difficult to maintain ACID properties
- Data Consistency: Understanding and implementing eventual consistency
- Query Complexity: Cross-service data retrieval challenges (see the sketch below)
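To make the last point concrete, here is a sketch of what a simple cross-service read looks like without a dedicated read model: the caller must fan out to several services and join the results in memory (the service URLs and endpoints are assumptions for the example). This is exactly the cost that CQRS read views aim to remove.
// Building an "order details" view without a read model: one query becomes several network calls
const getOrderDetails = async (orderId) => {
  // Hypothetical endpoints on the Order and User services
  const orderRes = await fetch(`${process.env.ORDER_SERVICE_URL}/orders/${orderId}`);
  if (!orderRes.ok) throw new Error(`Order service responded with ${orderRes.status}`);
  const order = await orderRes.json();
  const userRes = await fetch(`${process.env.USER_SERVICE_URL}/users/${order.userId}`);
  if (!userRes.ok) throw new Error(`User service responded with ${userRes.status}`);
  const user = await userRes.json();
  // In-memory "join" of data owned by two different services
  return { ...order, customer: { id: user.id, username: user.username } };
};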
Reference Pages
Official Documentation
- Microservices.io - Pattern: Microservice Architecture
- Spring Boot Documentation - Building Microservices
- Kubernetes Documentation - Service Mesh
Technical Resources
Books & Articles
- "Building Microservices" by Sam Newman
- "Microservices Patterns" by Chris Richardson
- Martin Fowler - Microservices Article
Implementation Examples
1. Simple Microservice (Node.js + Express)
// package.json
{
"name": "user-service",
"version": "1.0.0",
"main": "index.js",
"dependencies": {
"express": "^4.18.0",
"mongoose": "^7.0.0",
"cors": "^2.8.5",
"helmet": "^6.0.0",
"dotenv": "^16.0.0",
"winston": "^3.8.0"
}
}
// index.js
require('dotenv').config();
const express = require('express');
const mongoose = require('mongoose');
const cors = require('cors');
const helmet = require('helmet');
const logger = require('./logger');
const app = express();
const PORT = process.env.PORT || 3000;
// Middleware setup
app.use(helmet());
app.use(cors());
app.use(express.json());
// Logging middleware
app.use((req, res, next) => {
logger.info(`${req.method} ${req.path}`, {
ip: req.ip,
userAgent: req.get('User-Agent')
});
next();
});
// MongoDB connection (the legacy useNewUrlParser/useUnifiedTopology options are no longer needed in Mongoose 6+)
mongoose.connect(process.env.MONGODB_URL).then(() => {
logger.info('MongoDB connected successfully');
}).catch(err => {
logger.error('MongoDB connection failed', { error: err.message });
process.exit(1);
});
// User model
const userSchema = new mongoose.Schema({
username: { type: String, required: true, unique: true },
email: { type: String, required: true, unique: true },
createdAt: { type: Date, default: Date.now }
});
const User = mongoose.model('User', userSchema);
// Health check endpoint
app.get('/health', (req, res) => {
res.json({
status: 'healthy',
timestamp: new Date().toISOString(),
service: 'user-service',
version: process.env.APP_VERSION || '1.0.0'
});
});
// Create user
app.post('/users', async (req, res) => {
try {
const { username, email } = req.body;
const user = new User({ username, email });
await user.save();
logger.info('User created', { userId: user._id, username });
res.status(201).json({
id: user._id,
username: user.username,
email: user.email,
createdAt: user.createdAt
});
} catch (error) {
logger.error('User creation failed', {
error: error.message,
body: req.body
});
if (error.code === 11000) {
res.status(409).json({ error: 'Username or email already exists' });
} else {
res.status(500).json({ error: 'Internal server error' });
}
}
});
// Get user
app.get('/users/:id', async (req, res) => {
try {
const user = await User.findById(req.params.id);
if (!user) {
return res.status(404).json({ error: 'User not found' });
}
res.json({
id: user._id,
username: user.username,
email: user.email,
createdAt: user.createdAt
});
} catch (error) {
logger.error('User retrieval failed', {
userId: req.params.id,
error: error.message
});
res.status(500).json({ error: 'Internal server error' });
}
});
// Start server
app.listen(PORT, () => {
logger.info(`User service running on port ${PORT}`);
});
// Graceful shutdown
process.on('SIGTERM', () => {
  logger.info('SIGTERM received, shutting down gracefully');
  // Mongoose 7 removed callback support; close() returns a promise
  mongoose.connection.close().then(() => {
    logger.info('MongoDB connection closed');
    process.exit(0);
  });
});
2. API Gateway (Node.js + Express)
// api-gateway/index.js
const express = require('express');
const { createProxyMiddleware: httpProxy } = require('http-proxy-middleware'); // v1+ exports a named factory
const rateLimit = require('express-rate-limit');
const jwt = require('jsonwebtoken');
const logger = require('./logger');
const app = express();
const PORT = process.env.PORT || 3001;
// Rate limiting configuration
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100, // max 100 requests
message: 'Too many requests from this IP'
});
app.use(limiter);
app.use(express.json());
// Note: parsing request bodies before proxying requires re-streaming them to the target
// (http-proxy-middleware provides a fixRequestBody helper for this), otherwise proxied POST bodies may hang.
// Authentication middleware
const authenticateToken = (req, res, next) => {
const authHeader = req.headers['authorization'];
const token = authHeader && authHeader.split(' ')[1];
if (!token) {
return res.status(401).json({ error: 'Access token required' });
}
jwt.verify(token, process.env.JWT_SECRET, (err, user) => {
if (err) {
return res.status(403).json({ error: 'Invalid token' });
}
req.user = user;
next();
});
};
// Service configuration
const services = {
user: process.env.USER_SERVICE_URL || 'http://localhost:3000',
order: process.env.ORDER_SERVICE_URL || 'http://localhost:3002',
inventory: process.env.INVENTORY_SERVICE_URL || 'http://localhost:3003'
};
// Proxy configuration
const createProxyMiddleware = (target, pathRewrite = {}) => {
return httpProxy({
target,
changeOrigin: true,
pathRewrite,
onError: (err, req, res) => {
logger.error('Proxy error', {
target,
path: req.path,
error: err.message
});
res.status(502).json({ error: 'Service unavailable' });
},
onProxyReq: (proxyReq, req, res) => {
logger.info('Proxy request', {
target,
method: req.method,
path: req.path,
userId: req.user?.id
});
}
});
};
// Routing configuration
app.use('/api/users', authenticateToken, createProxyMiddleware(services.user, {
'^/api/users': '/users'
}));
app.use('/api/orders', authenticateToken, createProxyMiddleware(services.order, {
'^/api/orders': '/orders'
}));
app.use('/api/inventory', authenticateToken, createProxyMiddleware(services.inventory, {
'^/api/inventory': '/inventory'
}));
// Health check (no authentication required)
app.get('/health', (req, res) => {
res.json({
status: 'healthy',
services: Object.keys(services),
timestamp: new Date().toISOString()
});
});
app.listen(PORT, () => {
logger.info(`API Gateway running on port ${PORT}`);
});
3. Saga Pattern Implementation (Orchestration)
// saga/order-saga.js
const EventEmitter = require('events');
class OrderSagaOrchestrator extends EventEmitter {
constructor(services) {
super();
this.services = services;
this.sagaState = new Map();
// Setup event listeners
this.setupEventListeners();
}
setupEventListeners() {
this.on('ORDER_CREATED', this.handleOrderCreated.bind(this));
this.on('INVENTORY_RESERVED', this.handleInventoryReserved.bind(this));
this.on('PAYMENT_PROCESSED', this.handlePaymentProcessed.bind(this));
this.on('SAGA_COMPLETED', this.handleSagaCompleted.bind(this));
// Failure cases
this.on('INVENTORY_RESERVATION_FAILED', this.handleInventoryReservationFailed.bind(this));
this.on('PAYMENT_FAILED', this.handlePaymentFailed.bind(this));
}
async startSaga(orderData) {
const sagaId = this.generateSagaId();
this.sagaState.set(sagaId, {
orderId: orderData.id,
status: 'STARTED',
steps: [],
createdAt: new Date()
});
try {
// Step 1: Create order
await this.services.orderService.createOrder(orderData);
this.updateSagaStep(sagaId, 'ORDER_CREATED', 'COMPLETED');
this.emit('ORDER_CREATED', { sagaId, orderData });
} catch (error) {
this.updateSagaStep(sagaId, 'ORDER_CREATED', 'FAILED');
await this.compensate(sagaId);
}
}
async handleOrderCreated(event) {
const { sagaId, orderData } = event;
try {
// Step 2: Reserve inventory
await this.services.inventoryService.reserveItems(orderData.items);
this.updateSagaStep(sagaId, 'INVENTORY_RESERVED', 'COMPLETED');
this.emit('INVENTORY_RESERVED', { sagaId, orderData });
} catch (error) {
this.updateSagaStep(sagaId, 'INVENTORY_RESERVED', 'FAILED');
this.emit('INVENTORY_RESERVATION_FAILED', { sagaId, orderData });
}
}
async handleInventoryReserved(event) {
const { sagaId, orderData } = event;
try {
// Step 3: Process payment
await this.services.paymentService.processPayment(orderData.payment);
this.updateSagaStep(sagaId, 'PAYMENT_PROCESSED', 'COMPLETED');
this.emit('PAYMENT_PROCESSED', { sagaId, orderData });
} catch (error) {
this.updateSagaStep(sagaId, 'PAYMENT_PROCESSED', 'FAILED');
this.emit('PAYMENT_FAILED', { sagaId, orderData });
}
}
async handlePaymentProcessed(event) {
const { sagaId, orderData } = event;
// Step 4: Confirm order
await this.services.orderService.confirmOrder(orderData.id);
this.updateSagaStep(sagaId, 'ORDER_CONFIRMED', 'COMPLETED');
this.emit('SAGA_COMPLETED', { sagaId, orderData });
}
async handleInventoryReservationFailed(event) {
const { sagaId, orderData } = event;
await this.compensate(sagaId);
}
async handlePaymentFailed(event) {
const { sagaId, orderData } = event;
await this.compensate(sagaId);
}
async compensate(sagaId) {
const saga = this.sagaState.get(sagaId);
const completedSteps = saga.steps.filter(step => step.status === 'COMPLETED');
// Execute compensation actions in reverse order
for (const step of completedSteps.reverse()) {
switch (step.name) {
case 'PAYMENT_PROCESSED':
await this.services.paymentService.refund(saga.orderId);
break;
case 'INVENTORY_RESERVED':
await this.services.inventoryService.releaseReservation(saga.orderId);
break;
case 'ORDER_CREATED':
await this.services.orderService.cancelOrder(saga.orderId);
break;
}
}
saga.status = 'COMPENSATED';
this.sagaState.set(sagaId, saga);
}
updateSagaStep(sagaId, stepName, status) {
const saga = this.sagaState.get(sagaId);
saga.steps.push({
name: stepName,
status: status,
timestamp: new Date()
});
this.sagaState.set(sagaId, saga);
}
generateSagaId() {
return `saga_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`;
}
}
module.exports = OrderSagaOrchestrator;
4. Circuit Breaker Pattern Implementation
// circuit-breaker.js
class CircuitBreaker {
constructor(service, options = {}) {
this.service = service;
this.failureThreshold = options.failureThreshold || 5;
this.recoveryTimeout = options.recoveryTimeout || 60000; // 60 seconds
this.monitoringPeriod = options.monitoringPeriod || 10000; // 10 seconds
this.state = 'CLOSED'; // CLOSED, OPEN, HALF_OPEN
this.failureCount = 0;
this.nextAttempt = Date.now();
this.requestCount = 0;
this.successCount = 0;
}
  async call(method, ...args) {
    this.requestCount++;
    const currentTime = Date.now();
if (this.state === 'OPEN') {
if (currentTime < this.nextAttempt) {
throw new Error('Circuit breaker is OPEN');
} else {
this.state = 'HALF_OPEN';
this.failureCount = 0;
}
}
try {
const result = await this.service[method](...args);
this.onSuccess();
return result;
} catch (error) {
this.onFailure();
throw error;
}
}
onSuccess() {
this.failureCount = 0;
this.successCount++;
if (this.state === 'HALF_OPEN') {
this.state = 'CLOSED';
}
}
onFailure() {
this.failureCount++;
if (this.failureCount >= this.failureThreshold) {
this.state = 'OPEN';
this.nextAttempt = Date.now() + this.recoveryTimeout;
}
}
getStats() {
return {
state: this.state,
failureCount: this.failureCount,
successCount: this.successCount,
requestCount: this.requestCount,
nextAttempt: this.nextAttempt
};
}
}
// Usage example
class InventoryService {
async checkAvailability(productId, quantity) {
// External API call (may fail)
const response = await fetch(`${this.baseUrl}/check/${productId}`, {
method: 'POST',
body: JSON.stringify({ quantity }),
headers: { 'Content-Type': 'application/json' }
});
if (!response.ok) {
throw new Error(`HTTP ${response.status}: ${response.statusText}`);
}
return response.json();
}
}
const inventoryService = new InventoryService();
const inventoryCircuitBreaker = new CircuitBreaker(inventoryService, {
failureThreshold: 3,
recoveryTimeout: 30000
});
// Call via Circuit Breaker
const checkInventoryWithCircuitBreaker = async (productId, quantity) => {
try {
return await inventoryCircuitBreaker.call('checkAvailability', productId, quantity);
} catch (error) {
if (error.message === 'Circuit breaker is OPEN') {
// Fallback handling
return { available: false, reason: 'Service temporarily unavailable' };
}
throw error;
}
};
5. Docker Compose Configuration Example
# docker-compose.yml
version: '3.8'
services:
# API Gateway
api-gateway:
build: ./api-gateway
ports:
- "3001:3001"
environment:
- USER_SERVICE_URL=http://user-service:3000
- ORDER_SERVICE_URL=http://order-service:3002
- INVENTORY_SERVICE_URL=http://inventory-service:3003
- JWT_SECRET=your-secret-key
depends_on:
- user-service
- order-service
- inventory-service
networks:
- microservices
# User Service
user-service:
build: ./user-service
environment:
- MONGODB_URL=mongodb://admin:password@mongo:27017/userdb?authSource=admin
- PORT=3000
depends_on:
- mongo
networks:
- microservices
# Order Service
order-service:
build: ./order-service
environment:
- MONGODB_URL=mongodb://admin:password@mongo:27017/orderdb?authSource=admin
- REDIS_URL=redis://redis:6379
- RABBITMQ_URL=amqp://admin:password@rabbitmq:5672
- PORT=3002
depends_on:
- mongo
- redis
- rabbitmq
networks:
- microservices
# Inventory Service
inventory-service:
build: ./inventory-service
environment:
- POSTGRES_URL=postgresql://postgres:password@postgres:5432/inventorydb
- REDIS_URL=redis://redis:6379
- PORT=3003
depends_on:
- postgres
- redis
networks:
- microservices
# MongoDB
mongo:
image: mongo:5.0
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=password
volumes:
- mongo_data:/data/db
networks:
- microservices
# PostgreSQL
postgres:
image: postgres:14
environment:
- POSTGRES_DB=inventorydb
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=password
volumes:
- postgres_data:/var/lib/postgresql/data
networks:
- microservices
# Redis
redis:
image: redis:7-alpine
volumes:
- redis_data:/data
networks:
- microservices
# RabbitMQ
rabbitmq:
image: rabbitmq:3.9-management
environment:
- RABBITMQ_DEFAULT_USER=admin
- RABBITMQ_DEFAULT_PASS=password
ports:
- "15672:15672" # Management UI
volumes:
- rabbitmq_data:/var/lib/rabbitmq
networks:
- microservices
# Jaeger (Distributed Tracing)
jaeger:
image: jaegertracing/all-in-one:latest
ports:
- "16686:16686" # UI
- "14268:14268" # HTTP
environment:
- COLLECTOR_OTLP_ENABLED=true
networks:
- microservices
# Prometheus (Metrics Collection)
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
networks:
- microservices
# Grafana (Dashboard)
grafana:
image: grafana/grafana:latest
ports:
- "3000:3000"
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
volumes:
- grafana_data:/var/lib/grafana
networks:
- microservices
volumes:
mongo_data:
postgres_data:
redis_data:
rabbitmq_data:
grafana_data:
networks:
microservices:
driver: bridge
This guide has covered microservices architecture from theory to practice. With sound design and appropriate tool selection, you can build scalable, maintainable distributed systems.