Deploying to Kubernetes
This article describes setting up and maintaining PrivX on Kubernetes.
Setup on Kubernetes
It is always recommended to follow the PrivX Kubernetes Git repository for up-to-date documentation.
Create a Kubernetes cluster for PrivX. The cluster version must be 1.19 or later.
Create the privx namespace:
kubectl create namespace privx
Set up storage for PrivX data. This involves creating a persistent volume (PV) and a persistent-volume claim (PVC).
- For PrivX-configuration files: Create a PV and PVC with accessModes set to ReadWriteMany. The PVC must be named privx-claim, and located in the privx namespace.
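As an illustration, the PVC could look like the following sketch. The storage class and capacity are assumptions to adapt to your environment; you also need a matching PV, or a storage class that provisions one dynamically.

```yaml
# Hypothetical PVC sketch; storageClassName and size are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: privx-claim        # required name, per the instructions above
  namespace: privx
spec:
  accessModes:
    - ReadWriteMany        # required access mode
  storageClassName: nfs-client   # assumption: any RWX-capable storage class
  resources:
    requests:
      storage: 10Gi        # assumption: adjust to your needs
```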
Set up the PrivX-TLS and Authorizer secrets.
privx-tls: TLS-certificate data, required later for the ingress load balancer. The type of the secret must be kubernetes.io/tls, and the secret must be in the privx namespace. The required secret contents are:
- tls.crt: The TLS certificate.
- tls.key: The corresponding private key.
- ca.crt (optional): The certificate of the CA that issued the TLS certificate.
You may generate the privx-tls secret similarly to the following:
kubectl create secret tls privx-tls --cert=<Path to cert file> --key=<Path to key file> -n privx
privx-ca-secret: The Authorizer certificate, used to issue certificates for certificate-based authentication. The secret type must be Opaque, and the secret must be in the privx namespace. The required secret contents are:
- ca.crt: The CA certificate (may have a long validity period).
You may generate the privx-ca-secret secret similarly to the following:
kubectl create secret generic privx-ca-secret --from-file=ca.crt=<Path to ca cert file> -n privx
Ensure that the secrets and their contents are named exactly as described.
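Equivalently, the secrets can be declared as manifests. This sketch shows only the required names, types, and keys; the certificate data is omitted:

```yaml
# Declarative equivalents of the kubectl commands above.
apiVersion: v1
kind: Secret
metadata:
  name: privx-tls
  namespace: privx
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
---
apiVersion: v1
kind: Secret
metadata:
  name: privx-ca-secret
  namespace: privx
type: Opaque
data:
  ca.crt: <base64-encoded CA certificate>
```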
Install the nginx-ingress controller, used for load balancing:
For Kubernetes version 1.19:
helm repo add bitnami-full-index https://raw.githubusercontent.com/bitnami/charts/archive-full-index/bitnami
helm install \
  -n ingress --create-namespace \
  -f values-overrides/ingress.yaml \
  --version 7.6.6 \
  ingress bitnami-full-index/nginx-ingress-controller
For Kubernetes > 1.19:
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install \
  -n ingress --create-namespace \
  -f values-overrides/ingress.yaml \
  --version 9.3.0 \
  ingress bitnami/nginx-ingress-controller
Up-to-date instructions on new releases and any breaking changes can be found in the original repository.
You will then need to set up the nginx-ingress load balancer with an external IP, and add a DNS record pointing to that IP. For provider-specific setup instructions, see the nginx-ingress Installation Guide.
The values-overrides/ingress.yaml file overrides some of the default values of the Ingress-controller Helm chart. The file contains extra configurations and headers that are essential for PrivX to work correctly.
(Optional) We recommend installing Reloader to automatically update certificates upon renewals:
helm repo add stakater https://stakater.github.io/stakater-charts
helm repo update
helm install secrets-watcher stakater/reloader
Without Reloader, the trust has to be updated manually after certificate renewals.
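For reference, Reloader restarts workloads that carry its documented reload annotations when a referenced secret or ConfigMap changes. Whether the PrivX chart already sets such annotations depends on the chart version; this is an illustrative sketch:

```yaml
# Example Deployment metadata using Reloader's documented annotation:
# the workload is restarted whenever the named secret changes.
metadata:
  annotations:
    secret.reloader.stakater.com/reload: "privx-tls"
```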
Install PrivX
Duplicate a file like values-overrides/privx.yaml for your environment, and change ingress.common.hosts[*].host to match the DNS domain(s) created earlier.
By default, PrivX containers run under uid and gid 5111. To run containers under a custom uid/gid, customize the following values in charts/privx/values.yaml:
- podSecurityContext.fsGroup
- podSecurityContext.runAsUser
- securityContext.runAsUser
The uid/gid cannot be changed after initial setup.
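As a sketch, the relevant section of charts/privx/values.yaml with a custom uid/gid might look like the following; 1234 is an arbitrary example value:

```yaml
# Custom uid/gid sketch; 1234 is an example (the default is 5111).
podSecurityContext:
  fsGroup: 1234
  runAsUser: 1234
securityContext:
  runAsUser: 1234
```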
PrivX uses a PostgreSQL database for relational transactions and internal notifications. A database must therefore be available, either inside or outside the cluster.
Once the database is present, change the following values in values-overrides/privx.yaml so that PrivX can reach and interact with the database:
- db.address (database address)
- db.port (database port)
- db.name (database name)
- db.sslmode (database SSL mode; optional, default value require)
- db.admin.name (database admin username)
- db.admin.password (database admin password)
- db.user.name (database PrivX user)
- db.user.password (database PrivX-user password)
All database settings are mandatory except:
- db.sslmode
- db.admin.name and db.admin.password (mandatory if the PrivX database has not been pre-created)
Replace the placeholder value of the license in ms.licensemanager.licenseCode.prod.value.
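Assuming the dotted setting names above map directly to YAML nesting, the relevant part of values-overrides/privx.yaml might look like this; all values are placeholders:

```yaml
# Sketch of the database and license settings; every value is a placeholder.
db:
  address: postgres.example.com   # placeholder database address
  port: 5432
  name: privx
  sslmode: require                # optional; default is require
  admin:
    name: postgres                # placeholder admin username
    password: <admin-password>
  user:
    name: privx                   # placeholder PrivX database user
    password: <user-password>
ms:
  licensemanager:
    licenseCode:
      prod:
        value: <your-license-code>
```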
PRIVX_RELEASE_NAME=privx
VALUE_OVERRIDES=values-overrides/privx.yaml
helm install \
-f $VALUE_OVERRIDES \
-n privx \
$PRIVX_RELEASE_NAME charts/privx/
PrivX is inaccessible unless the hosts[*].host settings match the DNS domain(s).
After successful installation, PrivX should be accessible from the browser using the domain(s).
The helm chart creates and installs the following:
- ConfigMaps that store PrivX-internal settings (only done when installing the Helm chart for the first time; upgrades have no effect).
- The installer job, which creates all the resources necessary for PrivX to run (for example, volume mounts for directories like /opt/privx/, database creation, and so on). (Only done when installing the Helm chart for the first time; upgrades have no effect.)
- Deployments and pods for all microservices.
- Kubernetes services for each microservice.
- Ingress resources for accessing pods and services from outside the cluster.
Upgrade on Kubernetes
Determine the current Helm revision of your PrivX release:
helm history --namespace privx privx
The last entry in the output is the current revision, usually the one with STATUS=deployed.
Create a volume claim named privx-backup-claim in the privx namespace for PrivX backups.
Stop and back up PrivX with the following (replace <new_privx_version> with the version number you are upgrading to):
helm upgrade --history-max 0 \
  -f values-overrides/privx.yaml \
  -f charts/privx/migrations/<new_privx_version>/stage1.yaml \
  --wait privx -n privx charts/privx/
This creates the backup in the volume claimed by privx-backup-claim. The backup directory is named according to the following format:
privx-backup-PPPPP_YYYY-MM-DD-hhmm_<old_privx_version>
At this point, back up the PrivX database according to your database vendor's instructions.
After the database and PrivX are successfully backed up, upgrade PrivX with the following (replace <new_privx_version> with the major version number you are upgrading to):
helm upgrade --history-max 0 \
  -f values-overrides/privx.yaml \
  -f charts/privx/migrations/<new_privx_version>/stage2.yaml \
  privx -n privx charts/privx/
This completes the upgrade by updating PrivX configurations and starting new versions of the PrivX microservices.
Rolling Back Failed Upgrade on Kubernetes
If the upgrade fails at any point, follow these instructions to return to the original revision of PrivX:
Stop PrivX:
helm upgrade --history-max 0 \
  -f values-overrides/privx.yaml \
  --set shutdown=true \
  --wait privx -n privx charts/privx/
Restore the database from backup.
In restore.yaml, set BACKUP_DIR to the name of your backup directory (named similarly to privx-backup-PPPPP_YYYY-MM-DD-hhmm_<old_privx_version>). Make sure the correct backup directory is used. The backup-directory name can also be copied from the logs of the backup job by running the following command:
kubectl logs -n privx <name-of-the-backup-pod>
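The exact structure of restore.yaml depends on the chart version; as a hypothetical sketch, the setting might simply be a top-level key:

```yaml
# Hypothetical sketch; the actual key layout in restore.yaml may differ.
# The value is the backup-directory name copied from the backup-job logs.
BACKUP_DIR: privx-backup-PPPPP_YYYY-MM-DD-hhmm_<old_privx_version>
```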
Restore PrivX settings using the following command (replace <new_privx_version> with the major version number of PrivX that you tried upgrading to):
helm upgrade --history-max 0 \
  -f values-overrides/privx.yaml \
  -f charts/privx/migrations/<new_privx_version>/restore.yaml \
  privx -n privx charts/privx/
Roll back to the original PrivX revision (replace <revision-number> with the revision obtained in the first upgrade step):
helm rollback --namespace privx privx <revision-number>
Your PrivX deployment should now be restored to the pre-upgrade version.