Example Nginx Load-Balancer Configuration

You can use an Nginx web server as a load balancer in PrivX-HA deployments. This chapter provides an example configuration for an Nginx load balancer, along with the prerequisites required for the example configuration. For cloud-based installations, we recommend using the cloud provider's load balancer instead.

In the following examples, replace the IP addresses 192.0.2.100 and 192.0.2.101 with your PrivX-server addresses.

The Nginx used for the load balancer must be built with additional modules for TLS-passthrough and sticky-session support. If you are building Nginx yourself, configure it similarly to the following (replace path/to/nginx-sticky-module-ng with the path to the nginx-sticky-module-ng module, available from https://github.com/Refinitiv/nginx-sticky-module-ng):

# ./configure \
--with-stream \
--add-module=path/to/nginx-sticky-module-ng
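
For reference, a complete build from the Nginx source distribution could look roughly as follows. The Nginx version and paths are illustrative; substitute the versions and locations used in your environment. Note that when the stream module is compiled in statically with --with-stream, the load_module directive for ngx_stream_module.so shown in the sample configuration below is not required.

curl -LO https://nginx.org/download/nginx-1.24.0.tar.gz
tar xzf nginx-1.24.0.tar.gz
git clone https://github.com/Refinitiv/nginx-sticky-module-ng.git
cd nginx-1.24.0
./configure --with-stream --add-module=../nginx-sticky-module-ng
make
sudo make install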

The sample configuration requires that the SELinux policy allows the Nginx process to listen on ports 2222 and 3389. Run the following commands to add these ports to the list of allowed listen ports for Nginx:

semanage port -a -t http_port_t -p tcp 2222
semanage port -a -t http_port_t -p tcp 3389
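
On RHEL-based systems, the semanage command is typically provided by the policycoreutils-python-utils package. After running the commands, you can verify that the ports were added, for example with:

semanage port -l | grep http_port_t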

The sample configuration also requires you to install a TLS key and certificate at the following paths, respectively:

/etc/nginx/server.key
/etc/nginx/server.crt
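
For testing purposes only, you could generate a self-signed key and certificate with OpenSSL, for example as follows. The name lb.example.com is a placeholder and should match the address your clients use to reach the load balancer; in production, use a certificate issued by a CA your clients trust.

sudo openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout /etc/nginx/server.key -out /etc/nginx/server.crt \
    -subj "/CN=lb.example.com"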

After setting up Nginx with the required modules, you can use the following Nginx configuration for testing your environment:

# NGINX Load-balancer example configuration.
# Requests are distributed between servers using sticky sessions.
# The upstream module contains the IP addresses of the PrivX servers.
# In this example two instances of PrivX are running
# with private IP addresses 192.0.2.100 and 192.0.2.101
# HTTP requests except for CRLs are redirected to HTTPS.

load_module "/usr/lib64/nginx/modules/ngx_stream_module.so";

worker_processes auto; # auto will autodetect the number of CPUs and set process count accordingly
worker_rlimit_nofile 4096; # should be at least twice as large as worker_connections

events {
   worker_connections 1024; # sockets per worker process
}

http {
   upstream privx_http {
      sticky;
      server 192.0.2.100:80;
      server 192.0.2.101:80;
   }

   upstream privx_https {
       sticky;
       server 192.0.2.100:443;
       server 192.0.2.101:443;
   }

   client_max_body_size 50M;

   server {
       listen 80 default_server;
       listen [::]:80 default_server;
       location / {
          proxy_set_header X-Forwarded-Host $http_host;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header Host $http_host;
          proxy_pass http://privx_http;
       }
   }

   server {
       listen 443 ssl http2 default_server;
       listen [::]:443 ssl http2 default_server;
       ssl_certificate "/etc/nginx/server.crt";
       ssl_certificate_key "/etc/nginx/server.key";

       location / {
           proxy_set_header X-Forwarded-Host $http_host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header Host $http_host;
           proxy_pass https://privx_https;
       }

       location ~ ^/(ssh-proxy|ssh-mitm|rdp-proxy|rdp-mitm|extender-service|network-access-manager)/api/v1/ws {
           proxy_set_header X-Forwarded-Host $http_host;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header Host $http_host;
           proxy_set_header Upgrade $http_upgrade;
           proxy_set_header Connection "Upgrade";
           proxy_pass https://privx_https;
       }
   }
}

stream {
    upstream privx_ssh-mitm {
        hash $remote_addr consistent;
        server 192.0.2.100:2222;
        server 192.0.2.101:2222;
    }
    
    upstream privx_rdp-mitm {
        hash $remote_addr consistent;
        server 192.0.2.100:3389;
        server 192.0.2.101:3389;
    }

    upstream privx_cert {
        hash $remote_addr consistent;
        server 192.0.2.100:8443;
        server 192.0.2.101:8443;
    }
    
    upstream privx_ssh-mitm-proxy {
        hash $remote_addr consistent;
        server 192.0.2.100:1080;
        server 192.0.2.101:1080;
    }

    
    server {
        listen        2222;
        proxy_pass    privx_ssh-mitm;
        proxy_connect_timeout 1s;
    }
    
    server {
        listen        3389;
        proxy_pass    privx_rdp-mitm;
        proxy_connect_timeout 1s;
    }

    server {
        listen        8443;
        proxy_pass    privx_cert;
        proxy_connect_timeout 1s;
    }
    
    server {
        listen 1080;
        proxy_pass privx_ssh-mitm-proxy;
        proxy_connect_timeout 1s;
    }
}
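
Before taking the configuration into use, you can verify its syntax and then reload Nginx, for example:

sudo nginx -t
sudo systemctl reload nginx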

In production environments, we recommend active health checks against the monitor-service instance-status endpoint. The preceding example configuration does not enable active health checks, as they are not supported in the open-source version of Nginx.
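
If you are using NGINX Plus, an active health check could be added to the proxied locations roughly as follows. This is only a sketch: the health_check parameters and the monitor-service status path shown here are assumptions and should be verified against your Nginx and PrivX versions.

location / {
    proxy_set_header Host $http_host;
    proxy_pass https://privx_https;
    # NGINX Plus only; the status URI below is an assumed example.
    health_check uri=/monitor-service/api/v1/instance/status interval=10 fails=3 passes=2;
}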

To disconnect a PrivX server, append the down parameter to its server directive in the relevant upstream blocks. For example:

upstream privx_https {
    sticky;
    server 192.0.2.100:443;
    server 192.0.2.101:443 down;
}

Then reload the Nginx configuration with:

sudo systemctl reload nginx

To reconnect a PrivX server, remove the down parameter and reload the configuration.
