Configure AWS RDS PostgreSQL instance for access audits
You can configure your AWS account to allow Privacera to access your RDS PostgreSQL instance audit logs through Amazon CloudWatch Logs. To enable this functionality, you must make the following changes in your account:
Update the AWS RDS parameter group for the database
Create an AWS SQS queue
Specify an AWS Lambda function
Create an IAM role for an EC2 instance
Update the AWS RDS parameter group for the database
To expose access audit logs, you must update the configuration for the data source.
Procedure
To create a role for audits, run the following SQL query as a user with administrative credentials for your data source:
CREATE ROLE rds_pgaudit;
Create a new parameter group for your database and specify the following values:
Parameter group family: Select a database family, either aurora-postgresql or postgres.
Type: Select DB Parameter Group.
Group name: Specify a group name for the parameter group.
Description: Specify a description for the parameter group.
Edit the parameter group that you created in the previous step and set the following values:
pgaudit.log: Specify all, overwriting any existing value.
shared_preload_libraries: Specify pg_stat_statements,pgaudit.
pgaudit.role: Specify rds_pgaudit.
Associate the parameter group that you created with your database. Modify the configuration for the database instance and make the following changes:
DB parameter group: Specify the parameter group you created in this procedure.
PostgreSQL log: Ensure this option is set to enable logging to Amazon CloudWatch Logs.
When prompted, choose the option to immediately apply the changes you made in the previous step.
Restart the database instance.
Verification
To verify that your database instance logs are available, complete the following steps:
From the Amazon RDS console, view the logs for your database instance.
From the CloudWatch console, complete the following steps:
Find the /aws/rds/cluster/* log group that corresponds to your database instance.
Click the log group name to confirm that a log stream exists for the database instance, and then click a log stream name to confirm that log messages are present.
Create an AWS SQS queue
To create an SQS queue used by an AWS Lambda function that you will create later, complete the following steps.
From the AWS console, create a new Amazon SQS queue with the default settings. Use the following format when specifying a value for the Name field:
privacera-postgres-<RDS_CLUSTER_NAME>-audits
where:
RDS_CLUSTER_NAME: Specifies the name of your RDS cluster.
After the queue is created, save the URL of the queue for later use.
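If it helps to see the naming rule as code, here is a minimal sketch; queueName is a hypothetical helper for illustration, not part of any Privacera or AWS API:

```javascript
// Build the SQS queue name for a given RDS cluster, following the
// privacera-postgres-<RDS_CLUSTER_NAME>-audits format described above.
// queueName is a hypothetical helper, shown only to make the rule concrete.
function queueName(rdsClusterName) {
  return 'privacera-postgres-' + rdsClusterName + '-audits';
}

console.log(queueName('cluster1')); // privacera-postgres-cluster1-audits
```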
Specify an AWS Lambda function
To create an AWS Lambda function to interact with the SQS queue, complete the following steps. In addition to creating the function, you must create a new IAM policy and associate a new IAM role with the function. You need to know your AWS account ID and AWS region to complete this procedure.
From the IAM console, create a new IAM policy and input the following JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "logs:CreateLogGroup",
      "Resource": "arn:aws:logs:<REGION>:<ACCOUNT_ID>:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "arn:aws:logs:<REGION>:<ACCOUNT_ID>:log-group:/aws/lambda/<LAMBDA_FUNCTION_NAME>:*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:<REGION>:<ACCOUNT_ID>:<SQS_QUEUE_NAME>"
    }
  ]
}
where:
REGION: Specify your AWS region.
ACCOUNT_ID: Specify your AWS account ID.
LAMBDA_FUNCTION_NAME: Specify the name of the AWS Lambda function, which you will create later. For example: privacera-postgres-cluster1-audits
SQS_QUEUE_NAME: Specify the name of the AWS SQS queue.
Specify a name for the IAM policy, such as privacera-postgres-audits-lambda-execution-policy, and then create the policy.
From the IAM console, create a new IAM role and choose Lambda as the use case.
Search for the IAM policy that you just created, such as privacera-postgres-audits-lambda-execution-policy, and select it.
Specify a role name for the IAM role, such as privacera-postgres-audits-lambda-execution-role, and then create the role.
From the AWS Lambda console, create a new function and specify the following fields:
Function name: Specify a name for the function, such as privacera-postgres-cluster1-audits.
Runtime: Select Node.js 12.x from the list.
Permissions: Select Use an existing role and choose the role created earlier in this procedure, such as privacera-postgres-audits-lambda-execution-role.
Add a trigger to the function you created in the previous step and select CloudWatch Logs from the list, and then specify the following values:
Log group: Select the log group path for your Amazon RDS database instance, such as /aws/rds/cluster/database-1/postgresql.
Filter name: Specify auditTrigger.
In the Lambda source code editor, provide the following JavaScript code in the index.js file, which is open by default in the editor:

var zlib = require('zlib');

// CloudWatch Logs encoding
var encoding = process.env.ENCODING || 'utf-8'; // default is utf-8
var awsRegion = process.env.REGION || 'us-east-1';
var sqsQueueURL = process.env.SQS_QUEUE_URL;
var ignoreDatabase = process.env.IGNORE_DATABASE;
var ignoreUsers = process.env.IGNORE_USERS;
var ignoreDatabaseArray = ignoreDatabase.split(',');
var ignoreUsersArray = ignoreUsers.split(',');

// Import the AWS SDK
const AWS = require('aws-sdk');

// Configure the region
AWS.config.update({region: awsRegion});

exports.handler = function (event, context, callback) {
    var zippedInput = Buffer.from(event.awslogs.data, 'base64');
    zlib.gunzip(zippedInput, function (e, buffer) {
        if (e) {
            return callback(e);
        }
        var awslogsData = JSON.parse(buffer.toString(encoding));

        // Create an SQS service object
        const sqs = new AWS.SQS({apiVersion: '2012-11-05'});
        console.log(awslogsData);

        if (awslogsData.messageType === 'DATA_MESSAGE') {
            awslogsData.logEvents.forEach(function (log) {
                console.log(log.message);

                // Check whether the message matches an ignored database or user
                var sendToSQS = true;
                for (var i = 0; i < ignoreDatabaseArray.length; i++) {
                    if (log.message.toLowerCase().indexOf('@' + ignoreDatabaseArray[i]) !== -1) {
                        sendToSQS = false;
                        break;
                    }
                }
                if (sendToSQS) {
                    for (var i = 0; i < ignoreUsersArray.length; i++) {
                        if (log.message.toLowerCase().indexOf(ignoreUsersArray[i] + '@') !== -1) {
                            sendToSQS = false;
                            break;
                        }
                    }
                }
                if (sendToSQS) {
                    let sqsOrderData = {
                        MessageBody: JSON.stringify(log),
                        MessageDeduplicationId: log.id,
                        MessageGroupId: 'Audits',
                        QueueUrl: sqsQueueURL
                    };
                    // Send the log event to the SQS queue
                    let sendSqsMessage = sqs.sendMessage(sqsOrderData).promise();
                    sendSqsMessage.then((data) => {
                        console.log('Sent to SQS');
                    }).catch((err) => {
                        console.log('Error in sending to SQS = ' + err);
                    });
                }
            });
        }
    });
};
For the Lambda function, edit the configuration and create the following environment variables:
REGION: Specify your AWS region.
SQS_QUEUE_URL: Specify your AWS SQS queue URL.
IGNORE_DATABASE: Specify privacera_db.
IGNORE_USERS: Specify your database administrative user, such as privacera.
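The two ignore variables drive a plain substring match in the function above: pgaudit log lines carry a user@database marker, so any line mentioning an ignored database ("@name") or an ignored user ("name@") is skipped. A standalone sketch of that filter, with made-up sample log lines:

```javascript
// Standalone version of the Lambda handler's ignore filter. The two
// arguments mirror the IGNORE_DATABASE and IGNORE_USERS environment
// variables, which are comma-separated lists.
function shouldSendToSQS(message, ignoreDatabase, ignoreUsers) {
  var text = message.toLowerCase();
  var dbs = ignoreDatabase.split(',');
  var users = ignoreUsers.split(',');
  for (var i = 0; i < dbs.length; i++) {
    if (text.indexOf('@' + dbs[i]) !== -1) return false; // ignored database
  }
  for (var j = 0; j < users.length; j++) {
    if (text.indexOf(users[j] + '@') !== -1) return false; // ignored user
  }
  return true;
}

// Made-up pgaudit-style lines for illustration:
console.log(shouldSendToSQS('alice@sales_db:LOG: AUDIT: ...', 'privacera_db', 'privacera'));     // true
console.log(shouldSendToSQS('privacera@privacera_db:LOG: ...', 'privacera_db', 'privacera'));    // false
```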
Create an IAM role for an EC2 instance
To create an IAM role for the AWS EC2 instance where you installed Privacera so that Privacera can read the AWS SQS queue, complete the following steps:
From the IAM console, create a new IAM policy and input the following JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:DeleteMessage",
        "sqs:GetQueueUrl",
        "sqs:ListDeadLetterSourceQueues",
        "sqs:ReceiveMessage",
        "sqs:GetQueueAttributes"
      ],
      "Resource": "<SQS_QUEUE_ARN>"
    },
    {
      "Effect": "Allow",
      "Action": "sqs:ListQueues",
      "Resource": "*"
    }
  ]
}
where:
SQS_QUEUE_ARN: Specifies the ARN of the AWS SQS queue you created earlier.
Specify a name for the IAM policy, such as postgres-audits-sqs-read-policy, and then create the policy.
Attach the IAM policy to the AWS EC2 instance where you installed Privacera.
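Each message that lands on the queue carries, as its body, the JSON-serialized log event produced by the Lambda function's JSON.stringify(log). A minimal sketch of turning such a body back into the log event (the sample body is made up; Privacera's actual consumer is not shown here):

```javascript
// A consumer that has received an SQS message (for example via
// sqs.receiveMessage in aws-sdk, not shown) recovers the CloudWatch
// log event by parsing the message body. This sample body mirrors
// what the Lambda function sends; the values are made up.
var body = JSON.stringify({
  id: '36742357983',            // hypothetical CloudWatch log event id
  timestamp: 1650000000000,
  message: 'AUDIT: session,1,1,READ,SELECT,,,"select * from customers"'
});

var logEvent = JSON.parse(body);
console.log(logEvent.message);                          // the pgaudit audit record
console.log(new Date(logEvent.timestamp).toISOString()); // event time as ISO-8601
```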