Nginx (Proxy)

High-performance web server and reverse proxy with HTTP/2 and HTTP/3 support, SSL termination, and built-in caching. Its event-driven architecture keeps memory usage low.

Web Server, Reverse Proxy, Load Balancer, High Performance, Lightweight, Asynchronous, HTTP, HTTPS

NGINX

Overview

NGINX is a high-performance, scalable web server, reverse proxy, and load balancer. Its asynchronous, event-driven architecture lets it handle large numbers of concurrent connections with minimal memory, which makes it equally capable of serving static content, proxying dynamic applications, and balancing load across backends. It is widely valued for its stability, rich feature set, and efficient use of resources in web infrastructure.

Details

NGINX uses an asynchronous, event-driven architecture that effectively solves the C10K problem. Unlike traditional servers that dedicate a thread or process to each connection, NGINX uses a master-worker model in which a small number of worker processes each handle thousands of connections in an event loop. This design delivers excellent performance with a low memory footprint. As a reverse proxy and load balancer, NGINX provides SSL/TLS termination, HTTP/2 support, caching, compression, and a choice of load-balancing algorithms. It supports both HTTP and stream (TCP/UDP) load balancing, making it versatile across different application architectures.
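A quick back-of-the-envelope for the worker model: the theoretical concurrency ceiling is roughly worker_processes × worker_connections. The numbers below are illustrative, not measured; real limits also depend on worker_rlimit_nofile and kernel settings.

```shell
# Illustrative capacity estimate for the master-worker model.
# 4 workers x 1024 connections each (values are examples only).
workers=4
connections_per_worker=1024
max_clients=$((workers * connections_per_worker))
echo "theoretical connection ceiling: ${max_clients}"
```

Note that a proxying worker consumes two connections per client request (one to the client, one to the upstream), so the effective ceiling for proxied traffic is lower.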

Key Features

  • High-Performance Architecture: Asynchronous event-driven processing with low memory usage
  • Versatile Proxy Capabilities: HTTP reverse proxy and TCP/UDP stream proxy support
  • Advanced Load Balancing: Multiple algorithms including round-robin, least connections, and hash-based distribution
  • SSL/TLS Termination: Efficient SSL processing and offloading from backend servers
  • Flexible Configuration: Intuitive configuration syntax with modular architecture
  • Built-in Caching: Proxy caching and FastCGI caching for performance optimization
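Several of these features combine in even a minimal reverse-proxy configuration; a hedged sketch (the upstream host and port are placeholders, not from a real deployment):

```nginx
# Minimal reverse proxy: one upstream pool, gzip, and forwarded headers.
http {
    upstream app {
        server 127.0.0.1:8080;  # placeholder backend
    }

    server {
        listen 80;
        gzip on;

        location / {
            proxy_pass http://app;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```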

Pros and Cons

Pros

  • Exceptional performance with low resource consumption for high-traffic websites
  • Mature and stable with extensive community support and documentation
  • Highly flexible configuration system suitable for complex deployment scenarios
  • Built-in security features including rate limiting and access controls
  • Strong ecosystem with numerous third-party modules and integrations
  • Proven reliability in production environments worldwide

Cons

  • Steeper learning curve for advanced configurations compared to simpler alternatives
  • Dynamic configuration changes require reload (except NGINX Plus)
  • Some advanced features require NGINX Plus (commercial version)
  • Complex configurations can become difficult to maintain and debug
  • Limited built-in application-layer routing compared to specialized API gateways

Code Examples

Installation and Basic Setup

# Ubuntu/Debian installation
curl -fsSL https://nginx.org/keys/nginx_signing.key | sudo gpg --dearmor -o /usr/share/keyrings/nginx-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/nginx-archive-keyring.gpg] http://nginx.org/packages/ubuntu $(lsb_release -cs) nginx" | sudo tee /etc/apt/sources.list.d/nginx.list

sudo apt update
sudo apt install nginx

# Enable and start service
sudo systemctl enable nginx
sudo systemctl start nginx

# CentOS/RHEL installation
cat <<EOF | sudo tee /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/\$releasever/\$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
EOF

sudo yum install nginx
sudo systemctl enable nginx
sudo systemctl start nginx

# Docker deployment
docker run --name my-nginx -p 80:80 -d nginx

# Custom configuration with Docker
docker run --name my-nginx \
  -p 80:80 \
  -p 443:443 \
  -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /path/to/html:/usr/share/nginx/html:ro \
  -v /path/to/ssl:/etc/nginx/ssl:ro \
  -d nginx
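For repeatable deployments, the same bind mounts can live in a Compose file; a sketch assuming local ./nginx.conf, ./html, and ./ssl paths (adjust to your layout):

```yaml
# docker-compose.yml — paths are illustrative
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./html:/usr/share/nginx/html:ro
      - ./ssl:/etc/nginx/ssl:ro
    restart: unless-stopped
```

With Compose v2 available, the mounted configuration can be validated without starting a long-lived container: `docker compose run --rm nginx nginx -t`.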

Basic Configuration and Virtual Hosts

# Main configuration (nginx.conf)
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 4096;

    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml+rss
        application/atom+xml
        image/svg+xml;

    # Security headers
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-XSS-Protection "1; mode=block";  # legacy header; modern browsers ignore it

    include /etc/nginx/conf.d/*.conf;
}

# Virtual host configuration (/etc/nginx/conf.d/default.conf)
server {
    listen 80;
    server_name example.com www.example.com;
    root /usr/share/nginx/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~* \.(css|js|png|jpg|jpeg|gif|ico|svg)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }
}
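Name-based virtual hosting is simply additional server blocks selected by the Host header on the same listen port; a sketch for a second site (domain and paths are hypothetical):

```nginx
# Second virtual host, chosen by the Host header.
server {
    listen 80;
    server_name blog.example.com;
    root /var/www/blog;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
```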

HTTP Load Balancing Configuration

# Upstream server definitions
upstream backend {
    # Round-robin (default)
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com backup;
}

upstream api_servers {
    # Least connections
    least_conn;
    server api1.example.com:8080 weight=3;
    server api2.example.com:8080 weight=2;
    server api3.example.com:8080 weight=1;
}

upstream cache_servers {
    # Hash-based distribution (consistent hashing)
    hash $request_uri consistent;
    server cache1.example.com:8080;
    server cache2.example.com:8080;
    server cache3.example.com:8080;
}

# Load balancer server configuration
server {
    listen 80;
    server_name load-balancer.example.com;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Passive failure handling: retry the next upstream on errors/timeouts
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_connect_timeout 5s;
        proxy_read_timeout 10s;
    }

    location /api/ {
        proxy_pass http://api_servers/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    location /cache/ {
        proxy_pass http://cache_servers/;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_pragma $http_authorization;
    }
}

# Advanced load balancing with passive health checks (max_fails / fail_timeout)
upstream app_servers {
    # Server configuration options
    server app1.example.com:8080 max_fails=3 fail_timeout=30s weight=5;
    server app2.example.com:8080 max_fails=3 fail_timeout=30s weight=3;
    server app3.example.com:8080 max_fails=3 fail_timeout=30s weight=2;
    server app4.example.com:8080 max_conns=100;
    
    # Backup server
    server backup.example.com:8080 backup;
    
    # Keep-alive connections
    keepalive 32;
}
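The weight= values translate into proportional request shares over time; for the 3:2:1 api_servers pool above, a quick sanity check (integer division, so percentages round down):

```shell
# Expected long-run request share per server for weights 3:2:1.
w1=3; w2=2; w3=1
total=$((w1 + w2 + w3))
echo "api1: $((100 * w1 / total))%  api2: $((100 * w2 / total))%  api3: $((100 * w3 / total))%"
```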

TCP/UDP Stream Load Balancing

# Stream context for TCP/UDP load balancing
stream {
    upstream mysql_servers {
        least_conn;
        server mysql1.example.com:3306 max_fails=2 fail_timeout=30s;
        server mysql2.example.com:3306 max_fails=2 fail_timeout=30s;
        server mysql3.example.com:3306 backup;
    }

    upstream redis_servers {
        hash $remote_addr consistent;
        server redis1.example.com:6379;
        server redis2.example.com:6379;
        server redis3.example.com:6379;
    }

    upstream dns_servers {
        server 8.8.8.8:53;
        server 8.8.4.4:53;
        server 1.1.1.1:53;
    }

    # MySQL proxy
    server {
        listen 3306;
        proxy_pass mysql_servers;
        proxy_timeout 3s;
        proxy_connect_timeout 1s;
    }

    # Redis proxy
    server {
        listen 6379;
        proxy_pass redis_servers;
        proxy_timeout 5s;
        proxy_responses 1;
    }

    # DNS proxy (UDP)
    server {
        listen 53 udp;
        proxy_pass dns_servers;
        proxy_timeout 1s;
        proxy_responses 1;
    }
}
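The stream context has its own access logging with stream-specific variables; a sketch following the format pattern nginx documents for stream logging:

```nginx
# Inside the stream {} context: per-session access log.
log_format basic '$remote_addr [$time_local] '
                 '$protocol $status $bytes_sent $bytes_received '
                 '$session_time';

access_log /var/log/nginx/stream-access.log basic;
```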

SSL/TLS Configuration

server {
    listen 443 ssl http2;
    server_name secure.example.com;

    # SSL certificates
    ssl_certificate /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    # SSL settings
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers off;

    # HSTS
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/nginx/ssl/chain.crt;

    # Session settings
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# HTTP to HTTPS redirect
server {
    listen 80;
    server_name secure.example.com;
    return 301 https://$server_name$request_uri;
}

# Let's Encrypt certificate setup
# Install certbot
# sudo apt install certbot python3-certbot-nginx

# Obtain certificate
# sudo certbot --nginx -d example.com -d www.example.com

# Auto-renewal cron job
# 0 12 * * * /usr/bin/certbot renew --quiet
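For local testing before real certificates are issued, a throwaway self-signed pair can stand in (the CN and output paths are placeholders; browsers will warn about the untrusted issuer):

```shell
# Generate a throwaway self-signed certificate for local testing.
# CN and output paths are placeholders.
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout /tmp/test.key -out /tmp/test.crt \
  -days 365 -subj "/CN=secure.example.com"

# Inspect the resulting certificate subject
openssl x509 -in /tmp/test.crt -noout -subject
```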

Caching Configuration

# Proxy cache path definition
proxy_cache_path /var/cache/nginx/proxy 
    levels=1:2 
    keys_zone=proxy_cache:10m 
    max_size=1g 
    inactive=60m 
    use_temp_path=off;

server {
    listen 80;
    server_name cache.example.com;

    location / {
        proxy_pass http://backend;
        proxy_cache proxy_cache;
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;
        proxy_cache_background_update on;

        # Cache headers
        add_header X-Cache-Status $upstream_cache_status;
        
        # Cache key
        proxy_cache_key "$scheme$request_method$host$request_uri";
        
        # Conditional caching
        proxy_cache_bypass $http_pragma $http_authorization;
        proxy_no_cache $http_pragma $http_authorization;
    }

    # Cache purge endpoint (requires the third-party ngx_cache_purge module or NGINX Plus)
    location ~ /purge(/.*) {
        allow 127.0.0.1;
        deny all;
        proxy_cache_purge proxy_cache "$scheme$request_method$host$1";
    }
}

# FastCGI cache configuration
fastcgi_cache_path /var/cache/nginx/fastcgi 
    levels=1:2 
    keys_zone=fastcgi_cache:10m 
    max_size=500m 
    inactive=60m;

server {
    listen 80;
    server_name php.example.com;
    root /var/www/html;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;

        # FastCGI cache
        fastcgi_cache fastcgi_cache;
        fastcgi_cache_valid 200 60m;
        fastcgi_cache_valid 404 10m;
        fastcgi_cache_methods GET HEAD;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        
        # Cache bypass conditions
        fastcgi_cache_bypass $http_pragma $http_authorization $cookie_nocache $arg_nocache;
        fastcgi_no_cache $http_pragma $http_authorization $cookie_nocache $arg_nocache;
        
        add_header X-FastCGI-Cache $upstream_cache_status;
    }
}
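Sizing keys_zone: the nginx documentation states that one megabyte of shared zone stores about 8,000 keys, so the 10m zones above index roughly 80,000 cached entries (max_size, by contrast, bounds the on-disk cache data):

```shell
# keys_zone sizing rule of thumb: ~8,000 keys per megabyte (per nginx docs).
zone_mb=10
keys_per_mb=8000
echo "approximate key capacity: $((zone_mb * keys_per_mb))"
```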

Security and Rate Limiting

# Rate limiting zone definitions
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;
limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

server {
    listen 80;
    server_name api.example.com;

    # Connection limit
    limit_conn conn_limit 20;

    location /api/ {
        # API rate limiting
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://api_backend;
    }

    location /login {
        # Login rate limiting
        limit_req zone=login_limit burst=5;
        proxy_pass http://auth_backend;
    }

    # IP-based access control
    location /admin/ {
        allow 192.168.1.0/24;
        allow 10.0.0.0/8;
        deny all;
        proxy_pass http://admin_backend;
    }

    # Basic authentication
    location /private/ {
        auth_basic "Restricted Area";
        auth_basic_user_file /etc/nginx/htpasswd;
        proxy_pass http://private_backend;
    }
}
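The auth_basic_user_file referenced above is a standard htpasswd file. One way to create an entry without Apache's htpasswd tool is openssl's APR1 hash (the username and password here are placeholders; write to /etc/nginx/htpasswd in a real setup):

```shell
# Create an htpasswd entry using openssl's apr1 (MD5-based) hash.
# "alice" / "s3cret" are placeholder credentials.
printf 'alice:%s\n' "$(openssl passwd -apr1 s3cret)" > /tmp/htpasswd
cat /tmp/htpasswd
```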

# Security headers configuration
server {
    listen 443 ssl http2;
    server_name secure.example.com;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;  # legacy header; modern browsers ignore it
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'" always;

    # HSTS
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

    # Hide server information
    server_tokens off;

    location / {
        proxy_pass http://backend;
        proxy_hide_header X-Powered-By;
    }
}

Monitoring and Status

# Status and monitoring endpoints
server {
    listen 80;
    server_name status.example.com;

    location /nginx_status {
        stub_status;
        allow 127.0.0.1;
        allow 192.168.1.0/24;
        deny all;
    }

    location /health {
        access_log off;
        default_type text/plain;
        return 200 "healthy\n";
    }

    # NGINX Plus API
    location /api {
        api write=on;
        allow 127.0.0.1;
        deny all;
    }

    # NGINX Plus dashboard
    location = /dashboard.html {
        root /usr/share/nginx/html;
    }
}
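stub_status returns a small plain-text report; its shape (the numbers below are illustrative, not measured) and a quick awk extraction:

```shell
# Sample stub_status output (numbers are illustrative).
status='Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106'

# Third field of the first line is the active connection count.
active=$(printf '%s\n' "$status" | awk 'NR==1 {print $3}')
echo "active connections: $active"
```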

# Custom log formats
log_format detailed '$remote_addr - $remote_user [$time_local] '
                   '"$request" $status $body_bytes_sent '
                   '"$http_referer" "$http_user_agent" '
                   '$request_time $upstream_response_time '
                   '$upstream_addr $upstream_status';

log_format json escape=json '{'
    '"timestamp": "$time_iso8601",'
    '"remote_addr": "$remote_addr",'
    '"request": "$request",'
    '"status": $status,'
    '"body_bytes_sent": $body_bytes_sent,'
    '"request_time": $request_time,'
    '"upstream_response_time": "$upstream_response_time",'
    '"upstream_addr": "$upstream_addr"'
'}';

server {
    listen 80;
    server_name example.com;

    # Multiple access logs
    access_log /var/log/nginx/access.log detailed;
    access_log /var/log/nginx/access.json json;

    # Error log level
    error_log /var/log/nginx/error.log warn;

    location / {
        proxy_pass http://backend;
    }
}

Performance Optimization

# Worker process optimization
user nginx;
worker_processes auto;
worker_cpu_affinity auto;
worker_priority -10;
worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
    accept_mutex off;
}

http {
    # Buffer size optimization
    client_body_buffer_size 128k;
    client_max_body_size 10m;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;
    output_buffers 1 32k;
    postpone_output 1460;

    # Timeout settings
    client_header_timeout 3m;
    client_body_timeout 3m;
    send_timeout 3m;

    # TCP optimization
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    # Keepalive settings
    keepalive_timeout 65;
    keepalive_requests 1000;

    # File cache
    open_file_cache max=1000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Advanced compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/xml
        text/javascript
        application/json
        application/javascript
        application/xml+rss
        application/atom+xml
        image/svg+xml
        application/x-font-ttf
        application/vnd.ms-fontobject
        font/opentype;

    # Brotli compression (requires module)
    brotli on;
    brotli_comp_level 6;
    brotli_types
        text/plain
        text/css
        application/json
        application/javascript
        text/xml
        application/xml
        application/xml+rss
        text/javascript;
}

Troubleshooting and Maintenance

# Configuration testing and management
nginx -t                          # Test configuration
nginx -s reload                   # Reload configuration
nginx -s stop                     # Fast shutdown
nginx -s quit                     # Graceful shutdown

# Service management
systemctl status nginx
systemctl restart nginx
systemctl reload nginx

# Log analysis
tail -f /var/log/nginx/error.log
tail -f /var/log/nginx/access.log | grep "GET"

# Process information
ps aux | grep nginx

# Common fixes in configuration
# 504 Gateway Timeout
proxy_read_timeout 300s;
proxy_connect_timeout 75s;
proxy_send_timeout 300s;

# 413 Request Entity Too Large
client_max_body_size 100M;

# 502 Bad Gateway
upstream backend {
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    keepalive 32;
}

# WebSocket support
location /ws/ {
    proxy_pass http://websocket_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}

NGINX stands as one of the most reliable and efficient web servers, reverse proxies, and load balancers available today. Its event-driven architecture, extensive feature set, and proven stability make it an excellent choice for building high-performance, scalable web infrastructure that can handle demanding workloads efficiently.