High-Availability Deployment

PrivX high-availability deployment can be done in on-prem or cloud environments. For AWS-based deployments, see the CDK-based installation scripts at https://github.com/SSHcom/privx-on-aws

Load balancer

For high-availability installations, it is recommended to use a load balancer in front of the PrivX server instances. The recommended setup is an HTTPS load balancer that terminates TLS and forwards traffic to the PrivX instances; this way only the load balancer requires an official TLS certificate, which simplifies the installation. The PrivX web UI and the additional PrivX components (Extender, Carrier, Web Proxy) require sticky sessions/affinity cookies.

Officially tested cloud provider load balancers include AWS, Azure and Google load balancers.
PrivX components have been preconfigured to work with AWS, Google, Azure and Nginx. Other load balancers might require additional settings.

For on-prem installations, most HTTPS load balancers (Nginx, HAProxy, NetScaler, etc.) will work, as long as they support sticky session cookies. For the Extender, Carrier, and Web Proxy components, the round-robin routing algorithm must be used instead of least-connections. If you do not need these PrivX components, any load-balancing algorithm is acceptable.

For native SSH Bastion and native RDP client traffic, a TCP load balancer or DNS load balancing is required. It is also possible to use two load balancers for the same servers: an HTTPS load balancer for web UI and PrivX component traffic, and a TLS/TCP network load balancer for native clients.
If optional client-certificate authentication is used for the web UI, a TLS-passthrough load balancer must be used for that traffic as well.
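
As an illustration, a TCP passthrough for native clients could be configured with the Nginx stream module roughly as sketched below. The backend hostnames and the SSH Bastion/RDP proxy ports (2222 and 3389) are assumptions; substitute the ports configured in your environment.

    # Sketch only: TCP passthrough for native SSH/RDP clients (nginx.conf, top level)
    stream {
        upstream privx_ssh_bastion {
            server privx1.example.com:2222;   # assumed SSH Bastion port
            server privx2.example.com:2222;
        }
        upstream privx_rdp_proxy {
            server privx1.example.com:3389;   # assumed native RDP proxy port
            server privx2.example.com:3389;
        }
        server {
            listen 2222;
            proxy_pass privx_ssh_bastion;
        }
        server {
            listen 3389;
            proxy_pass privx_rdp_proxy;
        }
    }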

For health checks and autoscaling, it is recommended to use any microservice status endpoint to check the availability of your PrivX instance, for example https://<privx_lb_endpoint>/auth/api/v1/status
Monitoring the root web page is not recommended, since it only reflects the status of the Nginx server, not of PrivX itself.
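
For example, a health check (or a quick manual verification) can poll the Auth microservice status endpoint; assuming the endpoint returns HTTP 200 with a JSON status document when the instance is healthy, a check could look like this:

    # -k allows self-signed certificates when polling an individual instance directly
    curl -sk https://<privx_lb_endpoint>/auth/api/v1/status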

Example Nginx load balancer configuration
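
The following is a minimal sketch of such a configuration, not the official PrivX example: it assumes open-source Nginx, TLS termination at the load balancer, and two PrivX instances; hostnames and certificate paths are placeholders. Source-address affinity (ip_hash) is used here because cookie-based stickiness requires NGINX Plus or a third-party module.

    upstream privx_nodes {
        ip_hash;                              # source-address session affinity
        server privx1.example.com:443;
        server privx2.example.com:443;
    }

    server {
        listen 443 ssl;
        server_name privx.example.com;

        # TLS is terminated here, so only the load balancer needs an official certificate
        ssl_certificate     /etc/nginx/ssl/privx-lb.crt;
        ssl_certificate_key /etc/nginx/ssl/privx-lb.key;

        location / {
            proxy_pass https://privx_nodes;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;

            # Allow long-lived web UI and HTML5 (Carrier/Web Proxy) sessions
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 3600s;
        }
    }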

Load balancing PrivX components

For load balancing PrivX components, both Extenders and web components (Carrier + Web Proxy) can be grouped using the Routing Prefix setting in the PrivX UI. For example, setting the same Routing Prefix for two or more Extenders groups them logically, allowing PrivX to load balance between them. The load-balancing algorithm (least-connections or round-robin) can be changed in the PrivX configuration.
Note that this feature still requires a separate configuration file for each component; sharing configuration files or credentials between components is not currently supported.

Standard HA installation

(Diagram: standard high-availability deployment architecture.)

Components

  1. A load balancer directs traffic to a specific PrivX application server. The load balancer keeps track of the PrivX application server statuses and removes non-working servers from the pool. The load balancing can be based on either source address or a sticky session; configuring sticky session/session affinity cookies on the load balancer is recommended. Sticky load balancer cookies with the round-robin routing algorithm are required if the PrivX Extender, PrivX Carrier, or PrivX Web Proxy components are used and there is more than one PrivX node.

  2. A PrivX application server consists of an Nginx reverse proxy and a number of PrivX microservices. Nginx reverse proxy also serves the PrivX HTML5 UI static resources for the requesting clients. The PrivX microservices offer REST APIs over HTTPS. The PrivX application servers store all persistent data in a database - once a PrivX application server has been configured, it is just a matter of taking a snapshot of the server and deploying a new instance of the snapshot to add application nodes.

  3. The PrivX microservices use Redis in-memory cache to sync state between themselves - Redis is used only to trigger updates which are done via REST calls. To ensure high-availability, Redis should be configured in clustered mode. PrivX will require write access to Redis master node, so it should not be connected directly to replica nodes.

  4. The PrivX microservices persist data to PostgreSQL. PostgreSQL should also be highly available, configured in master/slave replication mode or as a cluster.

  5. PrivX also optionally supports various Hardware Security Modules (HSMs) for storing keys in a separate HSM.

  6. The PostgreSQL database should be configured in clustered mode. If the database master dies, the PrivX application servers can be pointed to the replica through a configuration change or a DNS change (DNS failover).

  7. The trail storage (by default /var/privx/audit) should be mounted on NFS or a secure NAS device and shared between the PrivX instances. (Optional: it is also possible to use shared storage for the /opt/privx folder before installing PrivX, to make future updates easier.)
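
As a sketch, the shared trail storage could be mounted on each PrivX node with an /etc/fstab entry such as the following; the NFS server address and export path are placeholders.

    # /etc/fstab on each PrivX node - example values only
    nfs.example.com:/export/privx-audit  /var/privx/audit  nfs  rw,hard,noatime  0 0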

Installation

📘

Note

These high-availability (HA) installation instructions apply to non-cloud environments. For cloud HA deployments, please see Deploying PrivX to Amazon Web Services.

PrivX node 1

  1. Install PostgreSQL master and slave nodes and configure a replication model which best suits your environment (vendor instructions at https://www.postgresql.org/docs/11/different-replication-solutions.html)

  2. Install Redis and configure a Redis cluster (vendor instructions at https://redis.io/topics/cluster-tutorial); a command-line sketch follows after these steps.

  3. Install PrivX on node 1 by running yum install PrivX-*.x86_64.rpm, or install it from the PrivX repository with yum install PrivX

  4. Run /opt/privx/scripts/postinstall.sh and instruct PrivX to connect to an external database and Redis configured above

  5. Verify that the installation works by connecting to node 1 via browser

Node 1 is ready.
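
As an illustration of step 2 above, once Redis is running on the cluster nodes (with cluster-enabled yes in each redis.conf), a minimal cluster with three masters and one replica each might be created as sketched below; hostnames and ports are placeholders.

    # Sketch only: create a six-node Redis cluster, one replica per master
    redis-cli --cluster create \
        redis1.example.com:6379 redis2.example.com:6379 redis3.example.com:6379 \
        redis4.example.com:6379 redis5.example.com:6379 redis6.example.com:6379 \
        --cluster-replicas 1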

PrivX node 2+

If using physical servers

  1. On node 1, take a backup of the first PrivX node by running /opt/privx/scripts/backup.sh

  2. Transfer the backup directory from /var/backups/privx/hostname_yyyy-mm-dd-hhmm on node 1 to node 2

  3. On node 2, disable automatic execution of postinstall.sh by setting environment variable: export SKIP_POSTINSTALL=1

  4. Install PrivX on node 2 by running yum install PrivX-*.x86_64.rpm, or install it from the PrivX repository with yum install PrivX

  5. On node 2, run /opt/privx/scripts/restore.sh /path/to/backup/directory/from/node/one/hostname_yyyy-mm-dd-hhmm

  6. On node 2, run /opt/privx/scripts/postinstall.sh

Node 2 is ready.

Repeat the above steps on each subsequent node; the same sequence is consolidated in the sketch below.
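
A consolidated shell sketch of the node 2+ steps; hostnames and the backup directory name are placeholders, and the transfer method (scp here) is only an example.

    # On node 1: create a backup and copy it to node 2
    /opt/privx/scripts/backup.sh
    scp -r /var/backups/privx/hostname_yyyy-mm-dd-hhmm node2.example.com:/tmp/

    # On node 2: install PrivX without running postinstall, then restore and configure
    export SKIP_POSTINSTALL=1
    yum install PrivX-*.x86_64.rpm            # or: yum install PrivX
    /opt/privx/scripts/restore.sh /tmp/hostname_yyyy-mm-dd-hhmm
    /opt/privx/scripts/postinstall.sh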

If using virtual machines

After installing PrivX to the first node, you can clone the node and use the cloned instance to launch additional PrivX nodes.

Upgrade

📘

Note

These high-availability (HA) upgrade instructions apply to non-cloud environments, but can in some cases also be used for upgrading a cloud HA setup.

PrivX node 2+

Stop the PrivX service: service privx stop

PrivX node 1

  1. Upgrade node 1 by running yum install PrivX-*.x86_64.rpm, or upgrade from the PrivX repository with yum install PrivX
  2. Verify that the node works

📘

Note

On upgrade, postinstall.sh is run automatically

PrivX node 2+

If not using a shared /opt/privx folder (default):

  1. Take a backup of PrivX node 1 by running /opt/privx/scripts/backup.sh
  2. Transfer the latest backup directory from /var/backups/privx/hostname_yyyy-mm-dd-hhmm on node 1 to node 2
  3. On node 2, disable automatic execution of postinstall.sh by setting environment variable: export SKIP_POSTINSTALL=1
  4. Upgrade node 2 by running yum install PrivX-*.x86_64.rpm, or upgrade from the PrivX repository with yum install PrivX
  5. On node 2, run /opt/privx/scripts/restore.sh /path/to/backup/directory/from/node/one/hostname_yyyy-mm-dd-hhmm
  6. On node 2, run /opt/privx/scripts/postinstall.sh

If using shared NFS-mounted /opt/privx (optional):

  1. After upgrading node 1, run /opt/privx/scripts/postinstall.sh

Repeat the steps for all subsequent nodes, after which PrivX is upgraded and ready to be configured and used.

Backup & restore

PrivX automatically creates daily full backups (certificates and configuration files) and stores them in /var/backups/privx. Ideally, this directory should be mounted on a secure NAS solution or periodically backed up to a secure location.
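
For example, the backup directory could be synchronized nightly to a remote location with a cron job; the destination host and path are placeholders.

    # /etc/cron.d/privx-backup-sync - example only
    30 2 * * * root rsync -a /var/backups/privx/ backup.example.com:/srv/privx-backups/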

📘

Note

PrivX does not automatically back up Postgres databases hosted on an external database server. Backing up the external Postgres database is outside the scope of this document. You should always take a backup of your database data before upgrading the PrivX version.
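
As a sketch, the external database could be dumped before an upgrade with standard PostgreSQL tooling; the host, user, and database name (privx) below are assumptions.

    # Example only: dump the PrivX database in custom format
    pg_dump -h db.example.com -U privx -Fc -f privx_$(date +%F).dump privx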

To restore a backed-up set of data to PrivX, follow these steps:

  1. Transfer the backup directory from /var/backups/privx/hostname_yyyy-mm-dd-hhmm to PrivX node 1
  2. On node 1, disable automatic execution of postinstall.sh by setting environment variable: export SKIP_POSTINSTALL=1
  3. Install PrivX on node 1 by running yum install PrivX-<version>.x86_64.rpm, or install it from the PrivX repository with yum install PrivX-<version>.x86_64

🚧

Important

PrivX version to be installed here must match the PrivX version used for the backups you're trying to restore.

  4. On node 1, run /opt/privx/scripts/restore.sh /path/to/backup/directory/hostname_yyyy-mm-dd-hhmm
  5. On node 1, run /opt/privx/scripts/postinstall.sh
  6. Ensure that the PrivX node is functional

Repeat the steps for other nodes.