Setting Up Nginx as a Reverse Proxy with Load Balancing for a Node.js Cluster Using Docker

In this blog, I’ll walk you through a project where I set up an NGINX server as a reverse proxy to handle requests for a Node.js application cluster. The setup uses Docker to containerize both the NGINX server and the Node.js applications, enabling seamless scaling and management. By the end of this, you'll understand why NGINX is an essential tool for modern web development and how to configure it for such use cases.

What is Nginx?

Nginx (pronounced "engine-x") is a high-performance, open-source web server that also functions as a reverse proxy, load balancer, and HTTP cache. Originally developed to solve the C10k problem (handling 10,000 simultaneous connections), Nginx has grown to be one of the most widely used web servers and proxy solutions globally.

Nginx is known for its event-driven, asynchronous architecture, which allows it to handle high concurrency efficiently compared to traditional thread-based web servers like Apache. It operates on a non-blocking model, making it well-suited for handling modern web applications with thousands of concurrent connections.

What is Nginx Used For?

1. Load Balancing with Nginx

Load balancing is a key feature of Nginx that distributes incoming network traffic across multiple servers, improving application availability, performance, and scalability.

Types of Load Balancing in Nginx

  • Round Robin (Default): Each request is forwarded to a different backend server in sequence.

  • Least Connections: Sends requests to the server with the fewest active connections.

  • IP Hash: Routes requests from the same IP to the same backend, useful for session persistence.

  • Weight-Based Load Balancing: Assigns different weights to servers based on capacity.
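
As a sketch, the methods above map to upstream configuration like this (upstream names and server addresses are illustrative, not part of this project):

```nginx
# Least connections: pick the backend with the fewest active connections.
upstream backend_least {
    least_conn;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}

# IP hash: requests from the same client IP always reach the same backend.
upstream backend_sticky {
    ip_hash;
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}

# Weighted round robin: the first server receives roughly 3x the traffic.
upstream backend_weighted {
    server 10.0.0.1:3000 weight=3;
    server 10.0.0.2:3000 weight=1;
}
```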

2. Reverse Proxy with Nginx

Nginx acts as a reverse proxy by sitting between clients and backend servers. It accepts requests from clients and forwards them to backend servers while handling tasks like SSL termination, caching, and authentication.

Why Use a Reverse Proxy?

  • Security: Hides backend architecture from direct exposure.

  • Load Distribution: Distributes traffic among multiple servers.

  • SSL Termination: Offloads SSL/TLS encryption to improve backend performance.

  • Compression & Optimization: Reduces data transfer size with Gzip.

3. Caching in Nginx

Nginx can cache static and dynamic content to reduce backend load and speed up responses.

Types of Caching in Nginx

  • Static Content Caching (e.g., images, CSS, JavaScript).

  • Microcaching (Short-lived caching of dynamic content).

  • Reverse Proxy Caching (Caching responses from backend servers).
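
A minimal reverse proxy caching setup might look like the following (the cache path, zone name, and timings are illustrative):

```nginx
http {
    # Define a cache: 10 MB of keys in shared memory, entries stored on disk.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                     max_size=1g inactive=60m;

    server {
        location / {
            proxy_cache app_cache;
            proxy_cache_valid 200 10m;   # cache successful responses for 10 minutes
            proxy_cache_valid any 1m;    # microcache everything else briefly
            proxy_pass http://nodejs_cluster;
        }
    }
}
```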

4. Security Features in Nginx

Nginx includes several security mechanisms:

  • DDoS Mitigation: Rate limiting and connection throttling.

  • IP Blocking: Blocking access from specific IPs.

  • SSL/TLS Encryption: Secure HTTPS connections.

  • WAF (Web Application Firewall): Protects against common web attacks.
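
As an illustration of rate limiting (the zone name and limits here are arbitrary):

```nginx
http {
    # Allow each client IP 10 requests/second, tracked in a 10 MB shared zone.
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

    server {
        location / {
            # Permit short bursts of up to 20 extra requests before rejecting.
            limit_req zone=per_ip burst=20 nodelay;
            proxy_pass http://nodejs_cluster;
        }
    }
}
```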

5. Compression in Nginx

Compression reduces data transfer size, improving website speed. Nginx supports Gzip and Brotli compression.
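
Enabling Gzip takes only a few directives in the http block (the values shown are common starting points, not tuned for any particular site):

```nginx
http {
    gzip on;
    gzip_comp_level 5;     # balance CPU cost against compression ratio
    gzip_min_length 256;   # skip responses too small to benefit
    gzip_types text/css application/javascript application/json image/svg+xml;
}
```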

6. Nginx as a Load Balancer in Kubernetes

In Kubernetes (K8s), Nginx can act as an Ingress Controller, managing external access to services within a cluster.

Why Use Nginx as a K8s Load Balancer?

  • Handles External Traffic: Directs incoming requests to the appropriate services.

  • Supports SSL Termination: Manages HTTPS connections.

  • Path-Based Routing: Routes traffic based on URL paths.

  • Rate Limiting & Security: Protects against excessive requests.
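
To make path-based routing concrete, here is a hypothetical Ingress manifest for the NGINX Ingress Controller (the host, service names, and paths are my own illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: example.local
      http:
        paths:
          - path: /api            # /api/* goes to the API service
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
          - path: /               # everything else goes to the web frontend
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
```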

Project Overview

Here is the GitHub link for the project, which contains the source code, my custom Nginx configuration, and the Docker Compose file used to containerize the whole setup.

The setup consists of the following components:

  • Nginx Server: Listens on port 80 and forwards requests to the Node.js cluster.

  • Node.js Cluster: Contains three Docker containers, each running a Node.js application on port 3000.

  • Round-Robin Load Balancing: Nginx distributes incoming requests evenly among the three Node.js containers.

How the Setup Works

  1. A client sends an HTTP request to the NGINX server on port 80.

  2. NGINX, acting as a reverse proxy, forwards the request to one of the Node.js containers using a round-robin load-balancing strategy.

  3. The Node.js container processes the request and returns the response via NGINX.

Custom NGINX Configuration

Below is the custom NGINX configuration file I wrote for this project:

worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;

    upstream nodejs_cluster {
        server 127.0.0.1:3001;
        server 127.0.0.1:3002;
        server 127.0.0.1:3003;
    }

    server {
        listen 80;  
        server_name localhost;

        location / {
            proxy_pass http://nodejs_cluster;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

Explanation of the Nginx Configuration

The provided Nginx configuration sets up a reverse proxy and load balancer to distribute incoming requests across three Node.js instances running on different ports (3001, 3002, and 3003). Here's a breakdown:

  1. worker_processes 1;

    • Defines the number of worker processes.

    • In this case, only one worker process is allocated, which is fine for small-scale setups.

  2. events { worker_connections 1024; }

    • Configures how many simultaneous connections Nginx can handle per worker process.

    • Here, 1,024 concurrent connections are allowed.

  3. http { include mime.types; }

    • Includes MIME types so that different file types are served with the correct Content-Type header.

  4. upstream nodejs_cluster {}

     upstream nodejs_cluster {
         server 127.0.0.1:3001;
         server 127.0.0.1:3002;
         server 127.0.0.1:3003;
     }
    
    • Defines a load-balancing group named nodejs_cluster that consists of three backend servers running on localhost at ports 3001, 3002, and 3003.

    • By default, round-robin load balancing is used (i.e., each request is forwarded to a different backend server in sequence).

server {} - Configuring Nginx as a Reverse Proxy

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://nodejs_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
  • listen 80; → Configures Nginx to listen for HTTP requests on port 80.

  • server_name localhost; → Specifies that Nginx will handle requests directed to localhost.

  • location / {} → Defines a routing rule:

    • proxy_pass http://nodejs_cluster; → Requests will be forwarded to the nodejs_cluster, distributing traffic among the three backend Node.js instances.

    • proxy_set_header Host $host; → Ensures the original Host header is passed to the backend.

    • proxy_set_header X-Real-IP $remote_addr; → Passes the real client IP to the backend instead of Nginx’s IP.
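
Beyond the two headers used in this project, it is common to also forward the full proxy chain and the original scheme. These extra directives are a hedged addition, not part of the configuration above:

```nginx
location / {
    proxy_pass http://nodejs_cluster;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # Append the client IP to any existing X-Forwarded-For chain.
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # Tell the backend whether the original request was http or https.
    proxy_set_header X-Forwarded-Proto $scheme;
}
```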

Docker Compose File

Here’s the docker-compose.yml file that defines the entire setup:

version: '3'
services:
  app1:
    build: .
    environment:
      - APP_NAME=App1
    ports:
      - "3001:3000"

  app2:
    build: .
    environment:
      - APP_NAME=App2
    ports:
      - "3002:3000"

  app3:
    build: .
    environment:
      - APP_NAME=App3
    ports:
      - "3003:3000"

Explanation of docker-compose.yml

  1. Defines a multi-container setup using Docker Compose version 3.

  2. Three Node.js services (app1, app2, app3) are created, each running a Node.js application.

  3. Each service has a unique environment variable (APP_NAME) to differentiate instances.

  4. Ports are mapped uniquely:

    • app1 → Maps container port 3000 to host port 3001.

    • app2 → Maps container port 3000 to host port 3002.

    • app3 → Maps container port 3000 to host port 3003.

  5. Ensures each instance runs independently while allowing Nginx to load balance requests among them.

Running the Project

  1. Build and start the containers:

      docker-compose up --build
    
  2. Access the application in your browser:

      http://localhost:80
    

    NGINX will forward the request to one of the Node.js containers and return the response.

To verify the round-robin load-balancing approach, check the logs:

As you can see, requests are served by different containers (App1, App2, and App3, as named in the Docker Compose file).

Conclusion

Well, that’s pretty much it. We covered what Nginx is, the functionality it offers, and how to set up an Nginx server as a reverse proxy for our upstream servers. This was a simple setup, but the same logic carries over to other projects, with a few more configurations here and there.