
FAQ and Troubleshooting#

Privacera Manager#

Unable to Connect to Docker#

Problem: You are unable to connect to Docker. This can have several different causes.

Solution: Check the following:

  1. Make sure the user account running Docker commands is part of the docker group. Test it by running the following Linux command and confirming that docker appears in the groups list:

    id
    Output: uid=1000(ec2-user) gid=1000(ec2-user) groups=1000(ec2-user),4(adm),10(wheel),190(systemd-journal),991(docker)

  2. Make sure that you have added the OS user to the docker group. See Adding OS user to docker group. A typical command is shown after this list.

  3. If steps 1 and 2 don’t solve the issue, exit the shell and log back in.
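
If the user is not yet in the docker group, a typical way to add it (this assumes a sudo-capable account; the change takes effect only after you start a new login shell):

    sudo usermod -aG docker $USER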

Terminate Installation#

Problem: Privacera Manager is either not responding or taking too long to complete the installation process.

Cause: Either poor connectivity to Docker Hub or an SSL-related issue.

Solution: In the terminal, press CTRL+C (or a similar interrupt key sequence) while ./privacera-manager.sh update is running. Privacera Manager will stop running and either roll back the installation or warn about an incomplete installation.

Ansible Kubernetes Module does not load#

Problem: During installation on an EKS cluster, Ansible displays the following exception.

Exception

An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ModuleNotFoundError: No module named 'kubernetes'
fatal: [privacera1]: FAILED! => changed=false
  error: No module named 'kubernetes'
  msg: Failed to import the required Python library (openshift) on ip-10-211-24-82.ec2.internal's Python /usr/bin/python3.

Solution: Restart the installation by running the Privacera Manager update.
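
For example, assuming the default installation path used elsewhere in this guide, the update can be restarted with:

    cd ~/privacera/privacera-manager/
    ./privacera-manager.sh update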

Common Errors/Warnings in YAML Config Files#

When you run the yaml_check command, it analyzes the YAML files and displays any errors/warnings, if found. For more information on the command, see Verify YAML Config Files.

The following table lists the error/warning messages that will be displayed when you run the check.

| Error/Warning Message | Description | Solution |
| --- | --- | --- |
| warning too many blank lines (1 > 0) (empty-lines) | There are empty lines in the YAML file. | Review the file config/custom-vars/vars.xxx.yml and remove the empty lines. |
| error too many spaces before colon (colons) | Extra space(s) found before the colon (:) at line X. | Review the file config/custom-vars/vars.xxx.yml and remove the space before the colon. |
| error string value is not quoted with any quotes (quoted-strings) | A variable value at line X is not quoted. | Review the file config/custom-vars/vars.xxx.yml and add the quotes. |
| error syntax error: expected <block end>, but found '{' (syntax) | Syntax errors were found in the YAML file. | Review the variables in the file config/custom-vars/vars.xxx.yml; it could be a missing quote (') or bracket (}). |
| warning too few spaces before comment (comments) | A space is missing between the variable and the comment. | The comment should start after a single space or on the next line. |
| error duplication of key "AWS_REGION" in mapping (key-duplicates) | Variable 'X' has been used twice in the file. | Review the file and remove one of the duplicate variables. |
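
For example, to locate a duplicate key reported by the check (AWS_REGION here; the path assumes the default installation directory used elsewhere in this guide), you can search the custom vars files:

    grep -n "AWS_REGION" ~/privacera/privacera-manager/config/custom-vars/*.yml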

Delete Old Unused Privacera Docker Images#

Every time you upgrade Privacera, it pulls a Docker image for the new version. Unused images can take up unnecessary disk space; you can free that space by deleting all the old unused images.

Problem: You're trying to pull a new image, but you get the following error:

Error

2d473b07cdd5: Pull complete 
2253a1066f45: Extracting 1.461GB/1.461GB
failed to register layer: Error processing tar file(exit status 1): write /aws/dist/botocore/data/s3/2006-03-01/service-2.json: no space left on device

Solution

  1. List all the images available on the disk.

    docker images
    
  2. Remove all those images not associated with a container.

    docker image prune -a
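
You can also check how much disk space Docker images occupy, and how much of it is reclaimable, before and after pruning:

    docker system df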
    

Unable to Debug Error for an Ansible Task#

Problem: The Privacera installation/update fails due to the following exception, and you are unable to view or debug the Ansible error.

Exception

fatal: [privacera]: FAILED! => censored: 'the output has been hidden due to the fact that ''no_log: true'' was specified for this result'

Solution

  1. Create a no_log.yml file.

    vi ~/privacera/privacera-manager/config/custom-vars/no_log.yml
    
  2. Add the GLOBAL_NO_LOG property.

    GLOBAL_NO_LOG: "false"
    
  3. Run the update.

    cd ~/privacera/privacera-manager/
    ./privacera-manager.sh update
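
After you have finished debugging, you may want to hide the sensitive output again. A minimal cleanup sketch, assuming the no_log.yml file created in step 1:

    cd ~/privacera/privacera-manager/
    rm config/custom-vars/no_log.yml
    ./privacera-manager.sh update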
    

Portal Service#

Remove the WhiteLabel Error Page error#

Problem: Privacera Portal cannot be accessed, and a WhiteLabel Error Page message is displayed.

Solution: To address this problem, you need to add the following properties:

  • SAML_MAX_AUTH_AGE_SEC
  • SAML_RESPONSE_SKEW_SEC
  • SAML_FORCE_AUTHN

To add these properties, perform the following steps:

  1. Run the following command.

    cd privacera/privacera-manager
    cp config/sample-vars/vars.portal.yml config/custom-vars
    vi config/custom-vars/vars.portal.yml
    
  2. Add the following properties with their values.

    SAML_MAX_AUTH_AGE_SEC: "7889400"
    SAML_RESPONSE_SKEW_SEC: "600"
    SAML_FORCE_AUTHN: "true"
    
  3. Run the update.

    cd ~/privacera/privacera-manager
    ./privacera-manager.sh update
    

Unable to Start the Portal Service#

Problem: The Portal service is unable to start because it cannot start its Tomcat server, and the following log is generated:

liquibase.exception.LockException: Could not acquire change log lock. Currently locked by portal-f957f5997-jnb7v (100.90.9.218) 

Solution:

  1. Scale down Ranger and Portal (see the command sketch after this list).

  2. Connect to your PostgreSQL database, for example privacera_db.

  3. Run the following command.

    UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=null, LOCKEDBY=null where ID=1;
    
  4. Close the database connection.

  5. Scale up Ranger.

  6. Scale up Portal.
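
A minimal command sketch for a Kubernetes deployment; the deployment names (ranger, portal), the ${NAMESPACE} variable, and the database connection details are assumptions and will differ in your environment:

    # Scale down Ranger and Portal (deployment names are assumptions)
    kubectl scale deployment ranger --replicas=0 -n ${NAMESPACE}
    kubectl scale deployment portal --replicas=0 -n ${NAMESPACE}

    # Connect to the Postgres database (host and user are placeholders) and clear the lock
    psql -h <DB_HOST> -U <DB_USER> -d privacera_db \
      -c "UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=null, LOCKEDBY=null where ID=1;"

    # Scale the services back up
    kubectl scale deployment ranger --replicas=1 -n ${NAMESPACE}
    kubectl scale deployment portal --replicas=1 -n ${NAMESPACE}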

Database Lockup in Docker#

Problem: Privacera services are not starting.

Cause: The database used by Privacera services could be locked up. This could happen due to an improper or abrupt shutdown of the Privacera Manager host machine.

Solution:

  1. SSH to the machine where Privacera is installed.

  2. SSH to the database container shell.

    cd privacera/docker
    ./privacera_services shell mariadb
    
  3. Run the following command. It will prompt for a password. This will give you access to the MySQL database.

    mysql -p
    
  4. List all the databases.

    show databases;
    
  5. From the list, select the privacera_db database.

    use privacera_db;
    
  6. Query the DATABASECHANGELOGLOCK table. You will see that the value is 1 or greater under the LOCKED column.

  7. Remove the database lock.

    update DATABASECHANGELOGLOCK set locked=0, lockgranted=null, lockedby=null where id=1;
    commit;
    
  8. Exit the MySQL shell.

    exit;
    
  9. Exit Docker container.

    exit
    
  10. Restart Privacera services.

    ./privacera_services restart
    

Grafana Service#

Unable to See Metrics on Grafana Dashboard#

Problem: You're unable to see metrics on the Grafana dashboard. When you check the logs, the following exception is displayed.

Exception

[console] Error creating stats.timers.view.graphite.errors.POST.median: [Errno 28] No space left on device

[console] Unhandled Error

Solution

Note

The solution steps are applicable to Grafana deployed in a Kubernetes environment.

Increase the persistent volume claim (PVC) storage size. Do the following:

  1. Open vars.grafana.yml.

    cd ~/privacera/privacera-manager/
    cp config/sample-vars/vars.grafana.yml config/custom-vars/
    vi config/custom-vars/vars.grafana.yml
    
  2. Add the two properties GRAFANA_K8S_PVC_STORAGE_SIZE_MB and GRAPHITE_K8S_PVC_STORAGE_SIZE_MB. The property values are in megabytes (MB).

    GRAFANA_K8S_PVC_STORAGE_SIZE_MB: "5000"
    GRAPHITE_K8S_PVC_STORAGE_SIZE_MB: "5000"
    
  3. Run the update.

    cd ~/privacera/privacera-manager/
    ./privacera-manager.sh update
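
After the update completes, you can verify the new PVC sizes (assuming ${NAMESPACE} is your Kubernetes namespace):

    kubectl get pvc -n ${NAMESPACE}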
    

Audit Server#

Unable to view the audits#

Problem: You have configured Audit Server to receive the audits, but they are not visible.

Solution: Enable the application logs of Audit Server and debug the problem.

To debug the application logs of Audit Server, follow the steps below for your deployment type.

For a Docker deployment:

  1. SSH to the instance as USER.

  2. Run the following command.

    cd ~/privacera/docker/
    vi privacera/auditserver/conf/log4j.properties
    
  3. At line 7, change INFO to DEBUG.

    log4j.category.com.privacera=DEBUG,logfile
    
  4. If you want to enable debugging outside the Privacera package, change line 4 from WARN to DEBUG.

    log4j.rootLogger=DEBUG,logfile
    
  5. Save the file.

  6. Restart Audit Server.

    ./privacera_services restart auditserver
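
After the restart, you can tail the Audit Server logs to confirm that DEBUG messages appear; this reuses the logs subcommand shown in the Audit Fluentd section below with the auditserver service name from the restart command above:

    cd ~/privacera/docker/
    ./privacera_services logs auditserver -f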
    
For a Kubernetes deployment:

  1. SSH to the instance as USER.

  2. Run the following command. Replace ${NAMESPACE} with your Kubernetes namespace.

    kubectl edit cm auditserver-cm-conf -n ${NAMESPACE}
    
  3. At line 47, edit the following property and change it to DEBUG mode.

    log4j.category.com.privacera=DEBUG,logfile
    
  4. At line 44, enable DEBUG at root level.

    log4j.rootLogger=DEBUG,logfile
    
  5. Save the file.

  6. Restart Audit Server.

    kubectl rollout restart statefulset auditserver -n ${NAMESPACE}
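
Similarly, on Kubernetes you can tail the pod logs to confirm DEBUG output; the pod name auditserver-0 is an assumption based on the statefulset name above:

    kubectl logs auditserver-0 -f -n ${NAMESPACE}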
    

Audit Fluentd#

Unable to view the audits#

Problem: You have configured Audit Fluentd to receive the audits, but they are not visible.

Solution: Enable the application logs of Audit Fluentd and debug the problem.

To view the application logs of Audit Fluentd, do the following:

  1. SSH to the instance as User.

  2. For a Docker deployment, run the following commands:

    cd ~/privacera/docker
    ./privacera_services logs audit-fluentd -f

  3. For a Kubernetes deployment, replace $YOUR_NAMESPACE with your Kubernetes namespace and run:

    kubectl logs audit-fluentd-0 -f -n $YOUR_NAMESPACE
    

Last update: August 24, 2021