March 27, 2026 · 8 min read

Nginx: Configure a Reverse Proxy and Load Balancer (Without the Confusion)

Learn Nginx from scratch -- serve static files, set up reverse proxies, configure SSL, load balance traffic, and avoid common configuration mistakes.

nginx devops deployment backend tutorial

You've built your app. It runs on port 3000. Now what? You can't just expose port 3000 to the internet. You need something sitting in front of it -- something that handles HTTPS, serves static files, routes requests, and doesn't fall over when traffic spikes. That something is usually Nginx.

Nginx (pronounced "engine-x") is a web server, reverse proxy, and load balancer. It handles millions of concurrent connections with minimal memory. It's what sits between the internet and your application, and roughly a third of the world's websites use it.

The configuration syntax looks intimidating at first, but there's a logic to it. Once you understand the mental model -- server blocks, locations, directives -- everything clicks into place.

Installing Nginx

Ubuntu/Debian:
sudo apt update
sudo apt install nginx
sudo systemctl start nginx
sudo systemctl enable nginx
CentOS/RHEL:
sudo yum install epel-release
sudo yum install nginx
sudo systemctl start nginx
sudo systemctl enable nginx
macOS:
brew install nginx
brew services start nginx

Verify it's running by visiting http://localhost (or your server's IP). You should see the Nginx welcome page.

Check the version and configuration path:

nginx -v
nginx -t # Test configuration syntax

Understanding the Configuration Structure

Nginx configuration lives in /etc/nginx/ (Linux) or /usr/local/etc/nginx/ (macOS Homebrew). The main file is nginx.conf, which includes files from sites-enabled/ or conf.d/.

The hierarchy looks like this:

nginx.conf
  ├── events { }        # Connection handling settings
  └── http { }           # HTTP server settings
       ├── server { }    # Virtual host (one per domain)
       │    ├── location / { }     # Routes for that domain
       │    └── location /api { }
       └── server { }    # Another domain

Here's a minimal nginx.conf:

worker_processes auto;  # One worker per CPU core

events {
    worker_connections 1024;  # Max connections per worker
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    # Include all site configs
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

Serving Static Files

The simplest use case -- serve a static website:

# /etc/nginx/sites-available/mysite.conf
server {
    listen 80;
    server_name mysite.com www.mysite.com;

    root /var/www/mysite;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    # Cache static assets
    location ~* \.(css|js|jpg|jpeg|png|gif|ico|svg|woff2)$ {
        expires 30d;
        add_header Cache-Control "public, immutable";
    }

    # Deny access to hidden files
    location ~ /\. {
        deny all;
    }
}

Enable the site:

sudo ln -s /etc/nginx/sites-available/mysite.conf /etc/nginx/sites-enabled/
sudo nginx -t # Always test before reloading
sudo systemctl reload nginx

The try_files directive is important. It tries to serve the exact URI as a file, then as a directory (appending /), and if neither exists, returns 404. For single-page apps (React, Vue), you'd change =404 to /index.html so all routes serve the SPA:

location / {
    try_files $uri $uri/ /index.html;
}

Reverse Proxy to Your App

This is the most common Nginx use case in modern development. Your Node.js/Python/Go app runs on localhost:3000, and Nginx sits in front of it:

server {
    listen 80;
    server_name api.mysite.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

Why not just expose port 3000 directly?

  • HTTPS termination. Nginx handles SSL so your app doesn't have to.
  • Static file serving. Nginx serves static files 10-100x faster than Node.js.
  • Connection handling. Nginx buffers slow clients so your app's threads aren't tied up.
  • Multiple apps. Run several apps on one server, each behind a different domain or path.

Proxying Multiple Apps

server {
    listen 80;
    server_name mysite.com;

    # Frontend (React app on port 3000)
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }

    # Backend API (Express on port 4000)
    location /api/ {
        proxy_pass http://127.0.0.1:4000/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # WebSocket endpoint
    location /ws/ {
        proxy_pass http://127.0.0.1:4000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

Note the trailing slash in proxy_pass http://127.0.0.1:4000/; for the /api/ location. This strips the /api/ prefix. A request to /api/users gets proxied to http://127.0.0.1:4000/users. Without the trailing slash, it would go to http://127.0.0.1:4000/api/users.
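The two behaviors side by side (an illustrative sketch -- these are alternatives, only one version of the location would appear in a real config):

```nginx
# WITH trailing slash: the location prefix is replaced by the proxy_pass URI.
#   /api/users  ->  http://127.0.0.1:4000/users
location /api/ {
    proxy_pass http://127.0.0.1:4000/;
}

# WITHOUT trailing slash: the full original URI is passed through.
#   /api/users  ->  http://127.0.0.1:4000/api/users
location /api/ {
    proxy_pass http://127.0.0.1:4000;
}
```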

SSL/TLS with Let's Encrypt

Use Certbot to get free SSL certificates:

sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d mysite.com -d www.mysite.com

Certbot automatically modifies your Nginx config to add SSL. The result looks something like:

server {
    listen 443 ssl http2;
    server_name mysite.com www.mysite.com;

    ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # HSTS
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    root /var/www/mysite;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name mysite.com www.mysite.com;
    return 301 https://$host$request_uri;
}

Set up auto-renewal:

sudo certbot renew --dry-run  # Test renewal
# Certbot adds a cron job or systemd timer automatically
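One addition worth making on top of Certbot's defaults is TLS session resumption, which lets returning clients skip the full handshake. A minimal sketch (sizes and timeout are assumptions to tune for your traffic):

```nginx
# Inside the server (or http) block.
ssl_session_cache shared:SSL:10m;  # roughly 40,000 sessions per 10 MB, shared across workers
ssl_session_timeout 1h;            # how long a cached session can be resumed
```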

Load Balancing

When one app server isn't enough, Nginx distributes traffic across multiple:

upstream app_servers {
    # Round-robin (default)
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
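By default Nginx opens a new connection to the upstream for every proxied request. Enabling upstream keepalive reuses idle connections instead, which cuts latency under load. A sketch (the cache size of 32 is an assumption):

```nginx
upstream app_servers {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    keepalive 32;  # idle connections kept open to upstreams, per worker
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
        # Keepalive requires HTTP/1.1 and clearing the Connection header.
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```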

Load Balancing Strategies

upstream app_servers {
    # Least connections -- send to the server with fewest active connections
    least_conn;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

upstream app_servers {
    # IP hash -- same client always goes to the same server (sticky sessions)
    ip_hash;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

upstream app_servers {
    # Weighted -- more traffic to beefier servers
    server 127.0.0.1:3001 weight=3;  # Gets 3x the traffic
    server 127.0.0.1:3002 weight=1;
    server 127.0.0.1:3003 weight=1;
}

Mark servers as backup or down:

upstream app_servers {
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003 backup;  # Only used when others are down
    server 127.0.0.1:3004 down;    # Temporarily removed
}

Health Checks

Nginx (open-source) checks backend health passively. If a proxied request fails, it marks the server as unavailable:

upstream app_servers {
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
}

After 3 failed requests within 30 seconds, the server is considered down for 30 seconds. Then Nginx tries it again.
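You can also control which failure types count and whether a failed request is retried on the next server, rather than returned to the client as an error. A sketch against the same upstream:

```nginx
location / {
    proxy_pass http://app_servers;
    # Retry the request on another server for these failure types...
    proxy_next_upstream error timeout http_502 http_503;
    # ...but give up after two attempts to avoid retry storms.
    proxy_next_upstream_tries 2;
}
```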

Gzip Compression

Compress responses to save bandwidth:

http {
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types
        text/plain
        text/css
        text/javascript
        application/javascript
        application/json
        application/xml
        image/svg+xml;
}

This reduces transfer sizes by 60-80% for text-based content. Don't compress images (JPEG, PNG) or already-compressed files -- it wastes CPU for zero benefit.

Proxy Caching

Cache responses from your backend to reduce load:

http {
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=1g inactive=60m;

    server {
        listen 80;
        server_name mysite.com;

        location /api/ {
            proxy_pass http://127.0.0.1:3000;
            proxy_cache app_cache;
            proxy_cache_valid 200 10m;  # Cache 200 responses for 10 minutes
            proxy_cache_valid 404 1m;   # Cache 404s for 1 minute
            proxy_cache_use_stale error timeout updating;
            add_header X-Cache-Status $upstream_cache_status;
        }

        # Don't cache authenticated endpoints
        location /api/user/ {
            proxy_pass http://127.0.0.1:3000;
            proxy_no_cache 1;
            proxy_cache_bypass 1;
        }
    }
}

The X-Cache-Status header tells you whether a response was served from cache (HIT), fetched from upstream (MISS), or served stale (STALE).
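By default the cache key is built from the scheme, proxied host, and request URI. If responses vary on anything else, include it in the key explicitly or clients will receive each other's cached responses. A sketch (varying on the Accept-Language header is an illustrative assumption):

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_cache app_cache;
    # Include Accept-Language in the key so cached responses
    # don't leak across languages.
    proxy_cache_key "$scheme$request_method$host$request_uri$http_accept_language";
    proxy_cache_valid 200 10m;
}
```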

Rate Limiting

Protect your API from abuse:

http {
    # Define a rate limit zone
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;
            proxy_pass http://127.0.0.1:3000;
        }
    }
}

This allows 10 requests per second per IP, with a burst buffer of 20. Excess requests get a 503 response.
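If you'd rather signal rate limiting explicitly, 429 Too Many Requests is a better fit than the default 503, and you can cap concurrent connections per IP as well. A sketch (zone names and limits are assumptions):

```nginx
http {
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=per_ip:10m;

    server {
        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;
            limit_req_status 429;  # return "Too Many Requests" instead of 503
            limit_conn per_ip 10;  # at most 10 concurrent connections per IP
            proxy_pass http://127.0.0.1:3000;
        }
    }
}
```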

Security Headers

Add essential security headers:

server {
    # Prevent clickjacking
    add_header X-Frame-Options "SAMEORIGIN" always;

    # Prevent MIME type sniffing
    add_header X-Content-Type-Options "nosniff" always;

    # XSS protection (legacy header; modern browsers ignore it)
    add_header X-XSS-Protection "1; mode=block" always;

    # Content Security Policy
    add_header Content-Security-Policy "default-src 'self'; script-src 'self'" always;

    # Referrer policy
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
}
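One gotcha worth knowing: add_header directives are inherited from the server block only if a location defines none of its own. The moment a location adds any header, it must repeat all of them. A sketch of the trap:

```nginx
server {
    add_header X-Frame-Options "SAMEORIGIN" always;

    location /api/ {
        # Adding this header here DROPS X-Frame-Options for /api/ responses,
        # because add_header inheritance is all-or-nothing per block.
        add_header Cache-Control "no-store" always;
        # Repeat the server-level headers if you still want them:
        add_header X-Frame-Options "SAMEORIGIN" always;
    }
}
```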

Common Mistakes

Not testing before reloading. Always run nginx -t before systemctl reload nginx. A syntax error in any included config file will take down every site on the server.

Forgetting the trailing slash in proxy_pass. proxy_pass http://backend/ and proxy_pass http://backend behave differently with location prefixes. Test your URL mapping.

Using root inside location. This changes the document root for that location, which is usually not what you want. Use alias inside location blocks and root in the server block.

Not setting client_max_body_size. The default is 1MB. If your app accepts file uploads, you need to increase this or users get 413 Request Entity Too Large:

client_max_body_size 50m;

Running as root. The Nginx master process runs as root (to bind ports 80/443), but worker processes should run as the www-data or nginx user. Check your user directive.

Ignoring log files. Nginx logs to /var/log/nginx/access.log and /var/log/nginx/error.log. When something breaks, the error log tells you exactly what happened. Set up log rotation so they don't fill your disk.
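On Debian/Ubuntu the nginx package installs a logrotate config for you. If your setup didn't, a minimal /etc/logrotate.d/nginx looks roughly like this (a sketch -- the paths and retention policy are assumptions to adapt):

```
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        # Signal nginx to reopen its log files after rotation
        [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
    endscript
}
```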

What's Next

Nginx is one of those tools where the basics get you very far, but the depth is endless:

  • Nginx Plus -- Commercial version with active health checks, session persistence, and a dashboard
  • OpenResty -- Nginx with embedded Lua scripting for dynamic behavior
  • ModSecurity -- Web Application Firewall module for Nginx
  • Ingress Controller -- Nginx as a Kubernetes ingress controller
  • HTTP/3 and QUIC -- Experimental support in recent versions
  • Dynamic upstreams -- Consul or etcd integration for service discovery

Start with a basic reverse proxy and SSL. That covers 80% of real-world Nginx usage. Add load balancing and caching when your traffic demands it.

For more deployment and DevOps tutorials, check out CodeUp.
