Configuration File Structure

Nginx configuration follows a hierarchical block structure. The main configuration file is typically /etc/nginx/nginx.conf, which includes files from /etc/nginx/conf.d/ or /etc/nginx/sites-enabled/.

# /etc/nginx/nginx.conf
user nginx;
worker_processes auto;           # Match CPU core count
error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

events {
    worker_connections 1024;     # Per worker process
    multi_accept on;             # Accept multiple connections at once
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" '
                    '$request_time $upstream_response_time';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 16m;

    include /etc/nginx/conf.d/*.conf;
}

The worker_processes auto directive creates one worker per CPU core. Each worker handles up to worker_connections simultaneous connections, so theoretical capacity is worker_processes * worker_connections; note that connections to upstream servers also count against this limit. For most production deployments, auto with 1024 connections per worker is sufficient.
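As a sketch, explicit values can replace auto when the host is shared with other services (the numbers below are illustrative, not recommendations):

```nginx
# Hypothetical 4-core host: cap Nginx at 4 workers x 2048 connections = 8192 total
worker_processes 4;
worker_rlimit_nofile 4096;       # per-worker file-descriptor limit; keep >= worker_connections

events {
    worker_connections 2048;
}
```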

Server Blocks

Server blocks (virtual hosts) define how Nginx handles requests for different domains. Nginx selects the server block by matching the Host header against server_name directives.

# /etc/nginx/conf.d/app.example.com.conf
server {
    listen 80;
    server_name app.example.com;
    return 301 https://$host$request_uri;    # Redirect HTTP to HTTPS
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/ssl/app.example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/app.example.com.key;

    root /var/www/app;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

When no server_name matches, Nginx falls back to the default server for that listen address and port. If no block is marked default_server, the first server block defined becomes the implicit default, so define one explicitly to avoid serving the wrong content:

server {
    listen 80 default_server;
    server_name _;
    return 444;    # Close connection without response
}

Location Matching

Location blocks control request handling based on the URI path. Nginx evaluates locations in a specific priority order, not in the order they appear in the file.

Matching priority (highest to lowest):

  1. = /exact – Exact match. Checked first, stops on match.
  2. ^~ /prefix – Preferential prefix. If this prefix matches, regex locations are skipped.
  3. ~ /regex and ~* /regex – Regular expression (~ is case-sensitive, ~* is case-insensitive). Evaluated in order of appearance.
  4. /prefix – Standard prefix match. Longest match wins.

# Exact match: the /health URI only
location = /health {
    access_log off;
    return 200 "OK\n";
}

# Preferential prefix: static files bypass regex matching
location ^~ /static/ {
    alias /var/www/static/;
    expires 30d;
    add_header Cache-Control "public, immutable";
}

# Regex: match file extensions
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
    expires 7d;
    add_header Cache-Control "public";
}

# Standard prefix: everything else goes to the application
location / {
    proxy_pass http://app_backend;
}

A common mistake is placing a regex location before a prefix location and expecting the prefix to take priority. The regex will win unless the prefix uses ^~. When debugging location matching, use nginx -T to dump the full resolved configuration and trace the match logic.
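A minimal illustration of the pitfall (paths are hypothetical): the regex location captures /static/logo.png even though the prefix location appears first, unless the prefix is marked ^~:

```nginx
# The regex wins for /static/logo.png despite appearing later in the file:
location /static/ { root /var/www; }     # plain prefix -- loses to the regex
location ~* \.png$ { expires 7d; }       # regex -- wins

# Fix: ^~ makes the prefix skip regex evaluation for anything under /static/
# (replace the plain prefix above; a server block cannot declare both):
location ^~ /static/ { root /var/www; }
```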

Reverse Proxy Configuration

Reverse proxying passes client requests to a backend server. The key directives control how Nginx communicates with the upstream and what information it forwards.

upstream app_backend {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    keepalive 32;    # Persistent connections to backends
}

server {
    listen 443 ssl http2;
    server_name app.example.com;

    location / {
        proxy_pass http://app_backend;

        # Pass original client information
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 5s;      # Time to establish connection to backend
        proxy_send_timeout 60s;        # Time to send request body to backend
        proxy_read_timeout 60s;        # Time to read response from backend

        # Buffering
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 16k;

        # Keepalive to upstream
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

The proxy_set_header Host $host directive is critical. Without it, the backend receives the upstream name (here, app_backend) as the Host header instead of the actual domain name. The X-Forwarded-For and X-Forwarded-Proto headers carry the client's real IP and protocol, which the backend needs for logging, access control, and redirect generation.

For WebSocket proxying, add the upgrade headers:

location /ws {
    proxy_pass http://app_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 3600s;    # Keep WebSocket connections alive
}
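For locations that serve both plain HTTP and WebSocket traffic, the pattern from the official nginx WebSocket documentation uses a map, so the Connection header is only set to upgrade when the client actually requested one:

```nginx
# In the http block:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then in the location:
location /ws {
    proxy_pass http://app_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
```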

SSL/TLS Termination

A production TLS configuration balances security and compatibility. The following settings support modern browsers while excluding known-weak ciphers and protocols.

ssl_certificate     /etc/nginx/ssl/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/privkey.pem;

ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;    # Let the client choose (modern best practice)

ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
ssl_session_tickets off;

# OCSP stapling -- serve certificate status with the TLS handshake
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/ssl/chain.pem;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;

Place these directives in the http block to apply globally, or in individual server blocks for per-domain settings. TLS 1.0 and 1.1 are deprecated and should not be enabled. Disabling session tickets with ssl_session_tickets off protects forward secrecy: if long-lived ticket keys are not rotated, a compromised key can decrypt past sessions that used it.

Rate Limiting

Rate limiting protects backends from abuse and resource exhaustion. Nginx uses the leaky bucket algorithm.

# Define rate limit zones in the http block
http {
    # 10 requests per second per client IP
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    # 1 request per second for login endpoints
    limit_req_zone $binary_remote_addr zone=login_limit:10m rate=1r/s;

    # Connection limit per IP
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;
}

server {
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://app_backend;
    }

    location /login {
        limit_req zone=login_limit burst=5;
        limit_req_status 429;
        proxy_pass http://app_backend;
    }

    location / {
        limit_conn conn_limit 10;    # Max 10 concurrent connections per IP
        proxy_pass http://app_backend;
    }
}

The burst parameter allows temporary spikes above the rate. With burst=20 nodelay, Nginx allows 20 requests to pass immediately even if they exceed the rate, then enforces the rate for subsequent requests. Without nodelay, burst requests are queued and released at the defined rate, which adds latency. For API endpoints, nodelay is usually preferred because clients expect immediate responses or rejection, not artificial delays.
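Since nginx 1.15.7 the delay parameter enables two-stage limiting, a middle ground between nodelay and full queueing (a sketch reusing the api_limit zone from above):

```nginx
location /api/ {
    # First 10 excess requests pass immediately, the next 10 are throttled
    # to the zone's rate, and anything beyond burst=20 is rejected with 429.
    limit_req zone=api_limit burst=20 delay=10;
    limit_req_status 429;
    proxy_pass http://app_backend;
}
```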

Proxy Caching

Nginx can cache upstream responses to reduce backend load and improve response times.

http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=1g inactive=60m use_temp_path=off;
}

server {
    location /api/ {
        proxy_pass http://app_backend;
        proxy_cache app_cache;
        proxy_cache_valid 200 10m;           # Cache 200 responses for 10 minutes
        proxy_cache_valid 404 1m;            # Cache 404 responses for 1 minute
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503;
        proxy_cache_lock on;                 # Prevent thundering herd

        add_header X-Cache-Status $upstream_cache_status;    # HIT, MISS, BYPASS
    }

    # Bypass cache for authenticated requests
    location /api/user/ {
        proxy_pass http://app_backend;
        proxy_cache app_cache;
        proxy_cache_bypass $http_authorization;
        proxy_no_cache $http_authorization;
    }
}

The proxy_cache_use_stale directive is important for resilience. When the backend is down or slow, Nginx serves stale cached responses instead of returning errors. The proxy_cache_lock on directive ensures that when multiple requests arrive for the same uncached resource simultaneously, only one request goes to the backend and the others wait for its response. This prevents a thundering herd from overwhelming the backend when a popular cache key expires.
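Pairing the updating flag with proxy_cache_background_update (nginx 1.11.10+) goes a step further: the stale entry is served to the client immediately while a background subrequest refreshes it (a sketch reusing the app_cache zone from above):

```nginx
location /api/ {
    proxy_pass http://app_backend;
    proxy_cache app_cache;
    proxy_cache_use_stale updating error timeout;
    proxy_cache_background_update on;    # refresh stale entries asynchronously
    proxy_cache_lock on;
}
```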

Load Balancing

Nginx supports multiple load balancing algorithms through the upstream block.

# Round-robin (default)
upstream app_rr {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
}

# Least connections
upstream app_lc {
    least_conn;
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
}

# IP hash (session affinity)
upstream app_sticky {
    ip_hash;
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
}

# Weighted distribution
upstream app_weighted {
    server 10.0.1.10:8080 weight=5;    # Gets 5x the traffic
    server 10.0.1.11:8080 weight=3;
    server 10.0.1.12:8080 weight=1;    # Canary or weaker instance
}

Mark servers as backup to use them only when all primary servers are unavailable:

upstream app_with_backup {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
    server 10.0.1.99:8080 backup;    # Only used if primaries are all down
}
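For planned maintenance, the down parameter (shown here in a hypothetical upstream) removes a server from rotation without deleting its line, which also keeps ip_hash assignments for the remaining servers stable:

```nginx
upstream app_maintenance {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080 down;    # out of rotation for maintenance
    server 10.0.1.12:8080;
}
```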

Health Checks

Nginx open-source performs passive health checks by monitoring responses from backends. If a backend returns errors, Nginx temporarily removes it from the pool.

upstream app_backend {
    server 10.0.1.10:8080 max_fails=3 fail_timeout=30s;
    server 10.0.1.11:8080 max_fails=3 fail_timeout=30s;
}

With max_fails=3 and fail_timeout=30s, if a backend fails 3 times within 30 seconds, Nginx marks it as unavailable for the next 30 seconds. After the timeout, Nginx tries the backend again. A “failure” is a connection timeout, connection refusal, or a response with a status code defined as an error (by proxy_next_upstream).

Control what counts as a failure and whether Nginx retries the next server:

location / {
    proxy_pass http://app_backend;
    proxy_next_upstream error timeout http_502 http_503;
    proxy_next_upstream_tries 2;        # Try at most 2 backends
    proxy_next_upstream_timeout 10s;    # Give up after 10 seconds total
}

Active health checks (probing a /health endpoint on a schedule) are available only in Nginx Plus (the commercial version). For active health checks with open-source Nginx, use an external tool that updates the upstream configuration dynamically, or use a load balancer that supports them natively (such as HAProxy).

Security Headers

Security headers instruct browsers to enforce security policies. Add them in the server or location block.

server {
    # Prevent MIME type sniffing
    add_header X-Content-Type-Options "nosniff" always;

    # Prevent clickjacking
    add_header X-Frame-Options "SAMEORIGIN" always;

    # Enable XSS filtering (legacy browsers)
    add_header X-XSS-Protection "1; mode=block" always;

    # HSTS -- force HTTPS for 1 year, including subdomains
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Content Security Policy
    add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self' 'unsafe-inline'; img-src 'self' data:;" always;

    # Referrer policy
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    # Permissions policy
    add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;
}

The always parameter ensures headers are added to all response codes, including errors. Without always, Nginx adds headers only to successful responses (2xx and 3xx), which means error pages lack security headers.

A critical gotcha: add_header directives in a location block override all add_header directives from the parent server block. If you add a custom header in a location, you must re-declare all security headers in that location. To avoid this, define headers in a separate file and include it:

# /etc/nginx/snippets/security-headers.conf
add_header X-Content-Type-Options "nosniff" always;
add_header X-Frame-Options "SAMEORIGIN" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

# In server or location blocks:
include /etc/nginx/snippets/security-headers.conf;

Common Gotchas

Trailing slash in proxy_pass. proxy_pass http://backend and proxy_pass http://backend/ behave differently. Without a trailing slash, the full original URI is passed to the backend. With a trailing slash, the matched location prefix is stripped. For location /api/ with proxy_pass http://backend/, a request to /api/users is forwarded as /users. This is a frequent source of routing bugs.
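Side by side (hypothetical backend; the two locations would live in different configs, since a server block cannot declare the same location twice):

```nginx
location /api/ {
    proxy_pass http://backend;      # /api/users is forwarded as /api/users
}

location /api/ {
    proxy_pass http://backend/;     # /api/users is forwarded as /users
}
```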

try_files and proxy_pass interaction. Combining try_files and proxy_pass in one location rarely does what you want: proxy_pass is the location's content handler, so even when try_files finds a file on disk, the rewritten request is still proxied rather than served locally. Use a named location for the fallback:

location / {
    try_files $uri $uri/ @backend;
}
location @backend {
    proxy_pass http://app_backend;
}

Configuration testing. Always run nginx -t before reloading. A syntax error in the configuration will cause nginx -s reload to fail, but the existing configuration continues to serve traffic. Use nginx -T (capital T) to dump the full resolved configuration, which is invaluable for debugging include files and variable expansion.

Log rotation. Nginx holds open file descriptors for log files. After log rotation (via logrotate), send nginx -s reopen to make Nginx open new file handles. Without this, Nginx continues writing to the rotated (and now deleted or moved) file.
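A typical logrotate stanza (paths and retention are illustrative) sends the reopen signal, USR1, which is what nginx -s reopen delivers:

```
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        [ -f /run/nginx.pid ] && kill -USR1 "$(cat /run/nginx.pid)"
    endscript
}
```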