
High Availability for Privacera Portal on AWS with Kubernetes

This topic shows how to configure High Availability (HA) for Privacera Portal on AWS with Kubernetes. In a normal working environment, core Privacera services such as Solr, MariaDB, Dataserver, Zookeeper, and Ranger connect to the Portal service. Configuring HA for Privacera Portal ensures that the Portal service is always up and running.

Note

Portal HA is supported only in a Kubernetes environment.

A high-availability Kubernetes cluster is created with multiple pods in a typical master-slave setup, each pod running a Portal service. If one pod goes down, another pod takes over, ensuring Portal service continuity.

Zookeeper determines which pod/node becomes the master. In a three-pod setup, Zookeeper automatically elects one pod as the master node and the remaining pods as slaves.
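
Once the cluster is up, you can see the election result by asking each Zookeeper pod for its role. This is a minimal sketch: the pod names (zookeeper-0 through zookeeper-2, typical StatefulSet naming), the privacera namespace, and zkServer.sh being on the container's PATH are all assumptions; substitute the names from your deployment.

# One pod reports "Mode: leader"; the others report "Mode: follower".
# Pod names, namespace, and zkServer.sh location are assumed values.
for i in 0 1 2; do
  kubectl exec -n privacera zookeeper-$i -- zkServer.sh status
done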

Prerequisites

Ensure the following prerequisites are met:

Procedure
  1. SSH to an instance as USER.

  2. Edit the cluster size (replicas) of Zookeeper and Solr.

    cd ~/privacera/privacera-manager
    cp config/sample-vars/vars.kubernetes.yml config/custom-vars/
    vi config/custom-vars/vars.kubernetes.yml          
  3. Change the value of both properties from 1 to 3. (A scripted alternative follows the snippet.)

    ZOOKEEPER_CLUSTER_SIZE: 3
    SOLR_K8S_CLUSTER_SIZE: 3
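
    If you prefer to script the edit instead of opening vi, a sed one-liner makes the same change. This is only a convenience sketch against the property names shown above:

    cd ~/privacera/privacera-manager
    # Set both cluster sizes to 3 in place.
    sed -i 's/^ZOOKEEPER_CLUSTER_SIZE:.*/ZOOKEEPER_CLUSTER_SIZE: 3/' config/custom-vars/vars.kubernetes.yml
    sed -i 's/^SOLR_K8S_CLUSTER_SIZE:.*/SOLR_K8S_CLUSTER_SIZE: 3/' config/custom-vars/vars.kubernetes.yml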
  4. Run the following commands.

    cd ~/privacera/privacera-manager
    cp config/sample-vars/vars.portal.kubernetes.ha.yml config/custom-vars/
    vi config/custom-vars/vars.portal.kubernetes.ha.yml          
  5. Edit the following properties, or keep the defaults.

    PRIVACERA_PORTAL_K8S_HA_ENABLE: "true"
    PORTAL_K8S_REPLICAS: "3"

    Property: PRIVACERA_PORTAL_K8S_HA_ENABLE
    Description: Activates HA mode for the Portal service.
    Example: true

    Property: PORTAL_K8S_REPLICAS
    Description: Number of nodes/pods to create. Enter an odd number: Zookeeper, which manages the nodes/pods, needs a strict majority (quorum) to elect a master node, and an odd count such as 3 lets the cluster tolerate the loss of one node.
    Note: A minimum of 3 nodes is required in HA mode. Setting the value to 1 turns HA mode off.
    Example: 3

  6. In HA mode, Privacera Portal is accessed through a browser, so sticky sessions are required; these are provided by an AWS load balancer (ALB) ingress. Run the following commands to enable it. An illustrative sketch of such an ingress follows the commands.

    cd ~/privacera/privacera-manager
    cp config/sample-vars/vars.aws.alb.ingress.yml config/custom-vars/          
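
    The exact ingress that Privacera Manager generates is not shown here. As a rough sketch, sticky sessions on an ALB are typically enabled through AWS Load Balancer Controller annotations like the following; all names and values below are illustrative assumptions, not the generated resource:

    # Illustrative sketch only -- not the resource Privacera Manager generates.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: portal-ingress                  # assumed name
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internal
        # Sticky sessions keep a browser pinned to one Portal pod:
        alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=3600
    spec:
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: portal            # internal Portal service
                    port:
                      number: 6868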
  7. Run the following commands.

    cd ~/privacera/privacera-manager
    ./privacera-manager.sh update          

    Because PORTAL_K8S_REPLICAS is set to 3, the update creates 3 pods/nodes of the Portal service.

At the end of the update, the service URLs are provided as shown below. The external Portal URL is an ingress URL that can be used in a browser to access Privacera Portal.

SOLR:
INTERNAL - http://solr-service:8983
EXTERNAL - http://internal-affid84410d554245a0b778a7ceb8b4e-1751579676.us-east-1.elb.amazonaws.com:8983

PORTAL:
INTERNAL - http://portal:6868
EXTERNAL - http://internal-8cd5de6d-kdev3-portalingre-c932-502762960.us-east-1.elb.amazonaws.com:6868

RANGER:
INTERNAL - http://ranger:6086
EXTERNAL - http://internal-a881ca4ceb4294457a597f4df29e7e6d-175787883.us-east-1.elb.amazonaws.com:6080

DATASERVER:
INTERNAL - http://dataserver:8181
EXTERNAL - http://internal-aae93a7fc6bd64cd68b38bdc0ecb2b6f-655870724.us-east-1.elb.amazonaws.com:8181
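
To confirm the replica count, you can list the Portal pods. The namespace and label selector below are assumptions; substitute the ones used by your deployment.

# Expect three Portal pods in Running state (namespace/label are assumed values).
kubectl get pods -n privacera -l app=portal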

Add Solr replicas

After the Portal service is up and running, run the following commands to update the Solr replication on the other nodes:

cd ~/privacera/privacera-manager
cd output/solr/
./update_solr_replication.sh --add_replica
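
To verify that the replicas were added, you can query Solr's Collections API with the CLUSTERSTATUS action (a standard Solr API). The URL placeholder below is a substitution: use the external Solr URL printed at the end of the update.

# Lists collections, shards, and their replicas.
curl -s "http://<solr-external-url>:8983/solr/admin/collections?action=CLUSTERSTATUS"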

Set replicas for other Privacera services

To set the replicas for other services, such as Ranger, Dataserver, and AuditServer, add the following in the config/custom-vars/vars.kubernetes.yml file. A combined example follows the list.

  • For Ranger

    RANGER_K8S_REPLICAS: "3"
  • For Dataserver

    DATASERVER_K8S_CLUSTER_SIZE: "3"
  • For AuditServer

    AUDITSERVER_K8S_REPLICAS: "3"
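
Taken together, a custom-vars file for a fully replicated deployment might look like the sketch below. The property names come from this page, and the values are the 3-replica example used throughout; adjust them to your environment.

# config/custom-vars/vars.kubernetes.yml -- illustrative combined example
ZOOKEEPER_CLUSTER_SIZE: 3
SOLR_K8S_CLUSTER_SIZE: 3
RANGER_K8S_REPLICAS: "3"
DATASERVER_K8S_CLUSTER_SIZE: "3"
AUDITSERVER_K8S_REPLICAS: "3"

After editing, re-run the Privacera Manager update (as in the procedure above) for the changes to take effect:

cd ~/privacera/privacera-manager
./privacera-manager.sh update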