High-Availability Deployment

PrivX high-availability deployment is supported for both on-premises and cloud environments. For AWS-based deployments, see the CDK-based installation scripts at https://github.com/SSHcom/privx-on-aws.

Load Balancer

For high-availability installations, we recommend placing a load balancer in front of the PrivX server instances. The recommendation is to use an HTTPS load balancer that terminates TLS and forwards traffic to the PrivX instances. This way only the load balancer requires an official TLS certificate, which simplifies the installation. Using PrivX components (Extender, Carrier, Web-Proxy) with a PrivX HA setup requires an HTTPS load balancer with sticky sessions/affinity cookies.

Officially tested cloud provider load balancers include the AWS, Azure, and Google load balancers.
PrivX components are preconfigured to work with AWS, Google, Azure, Kubernetes ingress, and Nginx. Other load balancers may require additional settings.

For on-premises installations, most HTTPS load balancers (Nginx Plus, HAProxy, NetScaler, etc.) will work, as long as they support sticky-session cookies and, preferably, active health checks.

Example Nginx Load-Balancer Configuration
Example HAProxy Load-Balancer Configuration

For the Extender, Carrier, and Web-Proxy components, the round-robin routing algorithm must be used instead of least-connections. If you do not need any of these PrivX components, any load-balancing algorithm or a network load balancer is acceptable.
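
The example configurations linked above should be used as the reference for production setups. As an illustration only, the following sketch shows the general shape of an HTTPS load-balancer configuration with round-robin distribution and sticky cookies, written here for NGINX Plus (the sticky cookie directive is a Plus-only feature); all hostnames, ports, and certificate paths are placeholders:

sudo tee /etc/nginx/conf.d/privx-lb.conf > /dev/null <<'EOF'
upstream privx_backend {
    # Default round-robin distribution; do not switch to least_conn when
    # Extender, Carrier, or Web-Proxy components are in use.
    server privx1.example.com:443;
    server privx2.example.com:443;
    # Cookie-based session affinity (NGINX Plus feature).
    sticky cookie privx_affinity expires=1h path=/;
}

server {
    listen 443 ssl;
    server_name privx.example.com;

    # Only the load balancer needs an official TLS certificate.
    ssl_certificate     /etc/nginx/tls/privx-lb.crt;
    ssl_certificate_key /etc/nginx/tls/privx-lb.key;

    location / {
        proxy_pass https://privx_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;

        # WebSocket upgrade headers for web-based connections.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
EOF
sudo nginx -t && sudo systemctl reload nginx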

For native SSH Bastion and native RDP client traffic, a TCP load balancer or DNS load balancing is required. It is also possible to use two load balancers for the same servers: an HTTPS load balancer for web-UI and PrivX-component traffic, and a TLS network load balancer for native clients.
If you use optional client certificate authentication for the web UI, a TLS load balancer needs to be used for that traffic as well.

For health checks and autoscaling, we recommend using the monitor-service instance-status endpoint to check the availability of your PrivX instance, for example https://<privx_lb_endpoint>/monitor-service/api/v1/instance/status.

It is also possible to use other microservice status endpoints (https://<privx_lb_endpoint>/<servicename>/api/v1/status) for this purpose, but we recommend active health checks against the monitor-service status endpoint to avoid unexpected service downtime.
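
For example, an active health check is simply an HTTPS GET against the instance-status endpoint. A minimal sketch, where privx.example.com stands in for your load-balancer or instance address (verify the exact response codes against your PrivX version):

# Print the HTTP status code of the monitor-service instance-status endpoint
curl -sk -o /dev/null -w '%{http_code}\n' \
    https://privx.example.com/monitor-service/api/v1/instance/status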

Load Balancing PrivX Components

For load balancing PrivX components, both Extenders and web components (Carrier + Web-Proxy) can be grouped using the Routing Prefix setting in the PrivX UI. For example, setting the same Routing Prefix for two or more Extenders groups them logically, allowing PrivX to load balance between them. The load-balancing algorithm (least-connections or round-robin) can be changed in the PrivX configuration.
Note that this feature still requires a separate configuration file for each component. Sharing configuration files or credentials between components is not currently supported.

Standard HA installation

  1. A load balancer directs traffic to a specific PrivX application server. The load balancer keeps track of PrivX application server statuses and removes non-working servers from the pool. The load balancing can be based on either source address or a sticky session; configuring sticky-session/session-affinity cookies on the load balancer is recommended. Sticky load-balancer cookies with the round-robin routing algorithm are required if you use the PrivX-Extender, PrivX-Carrier, or PrivX-Web-Proxy components and there is more than one PrivX node.

  2. A PrivX application server consists of an Nginx reverse proxy and a number of PrivX microservices. Nginx reverse proxy also serves the PrivX HTML5 UI static resources for the requesting clients. The PrivX microservices offer REST APIs over HTTPS. The PrivX application servers store all persistent data in a database - once a PrivX application server has been configured, it is just a matter of taking a snapshot of the server and deploying a new instance of the snapshot to add application nodes.
    (Deprecated) The PrivX microservices may optionally use a Redis in-memory cache (instead of the PostgreSQL database) to sync state between themselves. We recommend Redis only in environments where PostgreSQL database performance is limited.
    If opting to use Redis, configure it in clustered mode. PrivX requires write access to the Redis master node, so it must not be connected directly to replica nodes.

  3. The PrivX microservices persist data to PostgreSQL.

  4. PostgreSQL should also be highly available, configured in master-slave mode or as a cluster. If the
    database master fails, PrivX application servers can be pointed to the replica either via a
    configuration change or via a DNS change (DNS failover).
    PrivX also optionally supports various Hardware Security Modules (HSMs) for storing keys in a separate HSM.

  5. The trail storage (by default /var/privx/audit) should be mounted on NFS or a secure NAS device and shared between the PrivX instances. To avoid file-permission issues, make sure the privx user and group IDs are the same on all PrivX application servers (see the example below).
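
    A minimal sketch of the checks and mounts involved, assuming a hypothetical NFS server nfs.example.com with export /export/privx-audit:

    # The privx user and group IDs must be identical on every PrivX application server
    id privx

    # Mount the shared trail storage (placeholder NFS server and export path)
    sudo mount -t nfs nfs.example.com:/export/privx-audit /var/privx/audit

    # Persist the mount across reboots
    echo 'nfs.example.com:/export/privx-audit /var/privx/audit nfs defaults,_netdev 0 0' | sudo tee -a /etc/fstab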

PrivX-Server HA Setup

These high-availability (HA) installation instructions apply to non-cloud environments. For cloud HA deployments, see Deploying PrivX to Amazon Web Services.

The high-level steps for setting up PrivX in an HA configuration are:

  1. Setting up one initial PrivX node.
  2. Setting up additional PrivX servers by duplicating the initial PrivX node.

Initial PrivX Node

  1. Install the PostgreSQL master and slave nodes and configure the replication model that best suits your environment (vendor instructions at https://www.postgresql.org/docs/16/different-replication-solutions.html). A quick replication check is sketched after these steps.

  2. Install PrivX from the PrivX repository:

    sudo yum install PrivX

    Or from RPM package:

    sudo yum install PrivX-*.x86_64.rpm
  3. Run post-install configuration with:

    sudo /opt/privx/scripts/postinstall.sh

    During postinstall, instruct PrivX to connect to the external database (and to Redis, if you use it).

  4. Verify that the installation works by connecting to the initial PrivX node via browser.
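
As referenced in step 1, a quick way to confirm that replication is running is to query pg_stat_replication on the database master (assuming local access as the postgres OS user):

# List connected replicas and their replication state on the PostgreSQL master
sudo -u postgres psql -c "SELECT client_addr, state, sync_state FROM pg_stat_replication;"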

The initial PrivX node is ready.

Additional PrivX Nodes

If using physical servers

Create a backup of the initial PrivX node, then use the backup to create additional nodes.

  1. On the initial PrivX node, create a backup by running:

    sudo /opt/privx/scripts/backup.sh

    This creates a backup in /var/backups/privx/hostname_yyyy-mm-dd-hhmm.

  2. On all your additional nodes, set up PrivX using the backup:

    1. Transfer the backup directory to the node (an example transfer command is shown after these steps).

    2. Install PrivX without running postinstall. If installing from repository:

      sudo SKIP_POSTINSTALL=1 yum install PrivX

      If installing from RPM package:

      sudo SKIP_POSTINSTALL=1 yum install PrivX-*.x86_64.rpm
    3. Duplicate PrivX setup from backup by running:

      sudo /opt/privx/scripts/restore.sh /path/to/backup/directory/from/node/one/hostname_yyyy-mm-dd-hhmm
    4. Finalize setup by running postinstall:

      sudo /opt/privx/scripts/postinstall.sh

    Repeat these steps on all additional nodes.
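
    For the transfer in sub-step 1, any secure copy method works; for example (the target hostname and destination directory are placeholders):

      # Copy the backup directory from the initial node to an additional node
      scp -r /var/backups/privx/hostname_yyyy-mm-dd-hhmm privx2.example.com:/tmp/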

If using virtual machines

After installing PrivX to the first node, you can clone the node and use the cloned instance to launch additional PrivX nodes.

To ensure that all machines have unique machine IDs, you may need to regenerate the machine ID on the cloned machines.
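
On systemd-based distributions this can typically be done as follows (a sketch; check your distribution's documentation, and note that /var/lib/dbus/machine-id may also need to be refreshed if it is not a symlink):

# Remove the cloned machine ID and generate a new one
sudo rm -f /etc/machine-id
sudo systemd-machine-id-setup

# Verify the result
cat /etc/machine-id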

Extender HA Setup

Extenders are grouped into HA clusters by Routing prefix: any Extenders with the same Routing prefix belong to the same HA cluster. When an Extender is down, connections are established via the other available Extenders in the HA cluster.

To set up Extenders in an HA configuration:

  • Set up two or more Extenders. All Extenders in the HA cluster must have the same Routing prefix. For more information about setting up Extenders, see PrivX Extender Setup.

Carrier and Web-Proxy HA Setup

PrivX Carriers and PrivX Web Proxies are grouped into HA clusters by Routing prefix: any Carrier/Web-Proxy pairs with the same Routing prefix belong to the same HA cluster. When a Carrier/Web-Proxy pair is down, connections are established via the other available Carrier/Web-Proxy pairs in the HA cluster.

To set up Carriers and Web Proxies in an HA configuration:

Just like Extenders, each Carrier/Web-Proxy pair requires its own credentials and configuration files. You will need to manually ensure that all configurations for the same HA cluster have the same Routing prefix.

High-Availability Upgrade

These high-availability (HA) upgrade instructions apply to non-cloud environments, but may be adapted to some cloud HA setups as well.

Ensure that the PrivX database has enough free space before the upgrade: migrations during the upgrade may temporarily triple the database size. If necessary, you can reduce the database size before the upgrade with the Data Retention settings.
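
To check the current size, you can query PostgreSQL directly. A minimal sketch, assuming the database is named privx and you have local access as the postgres OS user on the database host:

# On-disk size of the PrivX database (the database name 'privx' is an assumption)
sudo -u postgres psql -c "SELECT pg_size_pretty(pg_database_size('privx'));"

# Free space on the volume holding the PostgreSQL data directory (path varies by distribution)
df -h /var/lib/pgsql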

We strongly recommend backing up the PrivX database and PrivX server config files before upgrade, to allow restoring the previous version in case of a failed upgrade.
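
For example, you might combine the bundled backup script with a logical database dump (a sketch; the database host, user, and name are placeholders for your environment):

# Back up PrivX server configuration on this node
sudo /opt/privx/scripts/backup.sh

# Logical dump of the external PrivX database (placeholder host, user, and database name)
pg_dump -h db.example.com -U privx -Fc privx > privx-pre-upgrade-$(date +%F).dump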

You can upgrade your HA deployment in either of the following ways:

  • Zero-Downtime Upgrade: Upgrade while allowing users to log in and connect to hosts. Only allows upgrading to later minor releases or the next major version.
  • HA Upgrade with downtime: PrivX will be unavailable during upgrade. Use this to upgrade over multiple major versions at once.

Zero-Downtime Upgrade

The high-level steps for a Zero-Downtime Upgrade (ZDU) are:

  1. Upgrading each PrivX Server.

  2. Finalizing upgrade.

ZDU is only supported if your current PrivX version is 32 or later.

ZDU only allows upgrading to the following versions:

  • Later minor releases within the same major version (such as from 33.0 to 33.2)
  • The next major version (such as from 32.x to 33.x).

If your current PrivX version is too old, or if you need to upgrade over multiple major versions at once, use HA Upgrade with Downtime instead.

To upgrade PrivX Servers, perform the following on each PrivX Server:

  1. Save the following upgrade script to a file named upgrade_first_stage.sh on the PrivX Server:

    #!/bin/bash
    
    UPGRADE_STYLE=no_downtime yum install "$@"

    Ensure that the script can be run as root (replace path/to/upgrade_first_stage.sh with the path where you saved the script):

    sudo chown root:root path/to/upgrade_first_stage.sh
    sudo chmod u+x path/to/upgrade_first_stage.sh
  2. If you are installing from RPM (instead of PrivX repository), copy the RPM package to the PrivX Server.

  3. Run the previously-created upgrade script to install the new packages.

    • If upgrading from PrivX repository (replace path/to/upgrade_first_stage.sh with the path to the upgrade script created earlier, replace xx.x with the version you are upgrading to):

      sudo path/to/upgrade_first_stage.sh PrivX-xx.x
    • If upgrading from RPM (replace path/to/upgrade_first_stage.sh with the path to the upgrade script created earlier, replace path/to/PrivX.rpm with the path where you saved the RPM package):

      sudo path/to/upgrade_first_stage.sh path/to/PrivX.rpm

    This upgrades the current PrivX Server. Repeat these steps on the remaining PrivX Servers.

    Once upgrade on the first PrivX Server is complete, your PrivX deployment will be in maintenance mode: users can log into PrivX and connect to hosts. Other PrivX features are disabled until ZDU is finalized.

    Upgrade disconnects all ongoing connections handled by the current PrivX Server; users may reconnect immediately via other PrivX Servers while ZDU is in progress.

After all PrivX Servers are upgraded, finalize upgrade by running the following on any PrivX Server:

sudo /opt/privx/scripts/upgrade_second_stage.sh

If performing ZDU from version 32, the upgrade_second_stage.sh script won't be available until after you have run upgrade_first_stage.sh.

Allow the command some time to finish.

Once the command finishes, ZDU is complete. Your PrivX deployment should be upgraded, and all PrivX features should be available again.
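
For example, you can confirm the installed package version on each server and check the instance-status endpoint described earlier (the hostname is a placeholder):

# Installed PrivX package version on this server
rpm -q PrivX

# Instance status through the load balancer
curl -sk https://privx.example.com/monitor-service/api/v1/instance/status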

HA Upgrade with Downtime

  1. Choose a PrivX server on which to perform the upgrade. This can be any PrivX server in the deployment. In these instructions, we call this the primary node; the other PrivX servers are called secondary nodes.

  2. On all secondary nodes, stop the PrivX service with:

    sudo systemctl stop privx

    This prevents secondary nodes from writing conflicting data to the database while the primary node is being upgraded.

  3. On the primary node, upgrade PrivX:

    1. Upgrade PrivX from the repository:

      sudo yum install PrivX

      Or from RPM package:

      sudo yum install PrivX-*.x86_64.rpm

      The command upgrades the PrivX database, and the PrivX software on the primary node. Allow the command some time to finish.

    2. Verify that the node works before proceeding.

On upgrade, postinstall is run automatically. This also automatically restarts PrivX services on the node.

  4. Upgrade the PrivX software on all secondary nodes:

    • If your PrivX servers use individual installation directories (default), upgrade and configure the nodes with sudo yum install PrivX or sudo yum install PrivX-*.x86_64.rpm.

    • If your PrivX servers use a shared NFS-mounted installation directory, you only need to configure the nodes by running sudo /opt/privx/scripts/postinstall.sh on them.

    Secondary nodes can be upgraded in parallel.
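
    For example, with individual installation directories and passwordless sudo on the nodes, the secondary nodes could be upgraded in parallel over SSH (hostnames are placeholders):

      # Upgrade all secondary nodes in parallel (placeholder hostnames)
      for node in privx2.example.com privx3.example.com; do
          ssh "$node" 'sudo yum install -y PrivX' &
      done
      wait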
