AWS CLI#

Enable AWS CLI#

  1. In the Privacera Portal, click LaunchPad from the left menu.

  2. Under the AWS Services section, click AWS CLI to open the AWS CLI dialog. This dialog lets you download an AWS CLI setup script specific to your installation, along with a set of usage instructions.

  3. In AWS CLI, under Configure Script, click Download Script to save the script to your local machine. If you will run the AWS CLI on another system, such as a jump server, copy the script to that host.

    1. Alternatively, use 'wget' to pull this script down to your execution platform, as shown below. Substitute your installation's Privacera Platform host domain name or IPv4 address for "<PRIVACERA_PORTAL_HOST>".

      wget http://<PRIVACERA_PORTAL_HOST>:6868/api/cam/download/script -O privacera_aws.sh
      # For HTTPS, use the --no-check-certificate option (uncomment the line below):
      # wget --no-check-certificate https://<PRIVACERA_PORTAL_HOST>:6868/api/cam/download/script -O privacera_aws.sh
      
    2. Copy the downloaded script to your home directory.

      cp privacera_aws.sh ~/
      cd ~/
      
    3. Set this file to be executable: 

      chmod a+x ~/privacera_aws.sh
      
  4. Under the AWS CLI Generate Token section, generate a platform token.

    Note

    All the commands should be run with a space between the dot (.) and the script name (~/privacera_aws.sh), so that the script is sourced into your current shell.

    1. Run the following command:

      . ~/privacera_aws.sh --config-token
      
    2. Select Never Expired to generate a token that does not expire, then click Generate.

  5. Enable either the proxy or the endpoint by running one of the two commands shown below.

    . ~/privacera_aws.sh --enable-proxy
    

    or:

    . ~/privacera_aws.sh --enable-endpoint
    
  6. Under the Check Status section, run the command below.

    . ~/privacera_aws.sh --status
    
  7. To disable both the proxy and the endpoint, under the AWS Access section, run the commands shown below.

    . ~/privacera_aws.sh --disable-proxy
    . ~/privacera_aws.sh --disable-endpoint
    

AWS CLI Examples#

Get Databases

aws glue get-databases --region ca-central-1
aws glue get-databases --region us-west-2

Get Catalog Import Status

aws glue get-catalog-import-status --region us-west-2

Create Database

aws glue create-database --cli-input-json '{"DatabaseInput":{"CreateTableDefaultPermissions": [{"Permissions": ["ALL"],"Principal": {"DataLakePrincipalIdentifier": "IAM_ALLOWED_PRINCIPALS"}}],"Name":"qa_test","LocationUri": "s3://daffodil-us-west-2/privacera/hive_warehouse/qa_test.db"}}' --region us-west-2 --output json

Create Table

aws glue create-table --database-name qa_test --table-input file://tb1.json --region us-west-2

Note

Create the tb1.json file in the directory where the create-table command will be executed. Sample JSON file:

{
    "Name": "tb1",
    "Retention": 0,
    "StorageDescriptor": {
        "Columns": [
            {
                "Name": "CC",
                "Type": "string"
            },
            {
                "Name": "FST_NM",
                "Type": "string"
            },
            {
                "Name": "LST_NM",
                "Type": "string"
            },
            {
                "Name": "SOC_SEC_NBR",
                "Type": "string"
            }
        ],
        "Location": "s3://daffodil-us-west-2/data/sample_parquet/",
        "InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
        "OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat",
        "Compressed": false,
        "NumberOfBuckets": 0,
        "SerdeInfo": {
            "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe",
            "Parameters": {
                "serialization.format": "1"
            }
        },
        "SortColumns": [],
        "StoredAsSubDirectories": false
    },
    "TableType": "EXTERNAL_TABLE",
    "Parameters": {
        "classification": "parquet"
    }
}
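Before running create-table, the JSON can be validated locally so that quoting mistakes surface before the AWS call. This is a minimal sketch (no AWS call): the heredoc writes an abbreviated version of the sample file above for illustration; use your real tb1.json in practice.

```shell
# Abbreviated stand-in for the sample tb1.json shown above.
cat > tb1.json <<'EOF'
{
    "Name": "tb1",
    "StorageDescriptor": {
        "Columns": [
            {"Name": "CC", "Type": "string"}
        ],
        "Location": "s3://daffodil-us-west-2/data/sample_parquet/"
    },
    "TableType": "EXTERNAL_TABLE"
}
EOF
# json.tool exits non-zero on malformed JSON, so this catches errors early.
python3 -m json.tool tb1.json > /dev/null && echo "tb1.json is valid JSON"
```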

Delete Table

aws glue delete-table --database-name qa_db --name test --region us-west-2

aws glue delete-table --database-name qa_db --name test --region us-east-1

aws glue delete-table --database-name qa_db --name test --region ca-central-1

aws glue delete-table --database-name qa_db --name test --region ap-south-1

Delete Database

aws glue delete-database --name qa_test --region us-west-2

aws glue delete-database --name qa_test --region us-east-1

aws glue delete-database --name qa_test --region ap-south-1

aws glue delete-database --name qa_test --region ca-central-1

AWS Kinesis - CLI Examples#

Create Stream:

aws kinesis create-stream --stream-name SalesDataStream --shard-count 1 --region us-west-2

Put Record:

aws kinesis put-records --stream-name SalesDataStream --records Data=name,PartitionKey=partitionkey1 Data=sales_amount,PartitionKey=partitionkey2 --region us-west-2

Read Record:

aws kinesis list-shards --stream-name SalesDataStream --region us-west-2


# Copy the shard ID from the output of the command above.
aws kinesis get-shard-iterator --stream-name SalesDataStream --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --region us-west-2

# Copy the shard iterator from the output of the command above.

aws kinesis get-records --shard-iterator AAAAAAAAAAG13t9nwsYft2p0IDF8qJOVh/Dc69RXm5v+QEqK4AW0CUlu7YmFChiV5YtyMzqFvourqhgHdANPxa7rjduAiIOUUwgaBNjJuc67SYeqZQLMgLosfQBiF6BeRQ+WNzRkssCZJx7j3/W53kpH70GJZym+Qf73bvepFWpmflYCAlRuFUjpJ/soWUmO+2Q/R1rJCdFuyl3YvGYJYmBnuzzfDoR6cnPLI0sjycI3lDJnlzrC+A==

# The Data field in the output is base64-encoded. Copy it and decode it with the command below.

echo <data> | base64 --decode
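The decode step can be tried locally without any AWS call. This sketch (the "sales_amount" payload is just a sample value) round-trips a string through the same base64 encoding that get-records applies to the Data field:

```shell
# Encode a sample payload the way it appears in get-records output,
# then decode it back as in the step above.
encoded=$(printf '%s' "sales_amount" | base64)
echo "$encoded"                     # c2FsZXNfYW1vdW50
echo "$encoded" | base64 --decode   # sales_amount
```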

Kinesis Firehose

Create Delivery Stream:

aws firehose create-delivery-stream --delivery-stream-name SalesDeliveryStream --delivery-stream-type DirectPut --extended-s3-destination-configuration "BucketARN=arn:aws:s3:::daffodil-data,RoleARN=arn:aws:iam::857494200836:role/privacera_user_role" --region us-west-2

Put Record:

aws firehose put-record --delivery-stream-name SalesDeliveryStream --record="{\"Data\":\"Sales_amount\"}" --region us-west-2

Describe Delivery Stream:

aws firehose describe-delivery-stream --delivery-stream-name SalesDeliveryStream --region us-west-2

AWS DynamoDB CLI Examples#

create-table

aws dynamodb create-table \
  --attribute-definitions AttributeName=id,AttributeType=N AttributeName=country,AttributeType=S \
  --table-name SalesData --key-schema AttributeName=id,KeyType=HASH AttributeName=country,KeyType=RANGE \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1 --region us-west-2 \
  --output json

put-item

aws dynamodb put-item --table-name SalesData \
  --item '{"id": {"N": "3"},"country": {"S": "UK"},"region": {"S": "EUl"},"city": {"S": "Rogerville"},"name": {"S": "Nigel"},"sales_amount": {"S": "87567.74"}}' \
  --region us-west-2
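Quoting the inline --item JSON in the shell is error-prone. An alternative sketch, using the AWS CLI's standard file:// parameter syntax, keeps the item in a file and validates it locally before sending (the final put-item call is commented out since it needs a live table):

```shell
# Write the item in DynamoDB's attribute-value encoding to a file,
# so the JSON needs no shell escaping.
cat > item.json <<'EOF'
{
  "id": {"N": "3"},
  "country": {"S": "UK"},
  "region": {"S": "EUl"},
  "city": {"S": "Rogerville"},
  "name": {"S": "Nigel"},
  "sales_amount": {"S": "87567.74"}
}
EOF
# Validate locally; json.tool exits non-zero on malformed JSON.
python3 -m json.tool item.json > /dev/null && echo "item.json is valid JSON"
# aws dynamodb put-item --table-name SalesData --item file://item.json --region us-west-2
```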

scan

aws dynamodb scan --table-name SalesData --region us-west-2
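Scan results come back in the same attribute-value encoding, wrapped in an object with Items, Count, and ScannedCount fields. This local sketch (no AWS call; scan.json is a hand-written sample mimicking that shape) shows how to unwrap the type tags to get plain values:

```shell
# Sample file shaped like real `aws dynamodb scan` output.
cat > scan.json <<'EOF'
{
  "Items": [
    {"id": {"N": "3"}, "country": {"S": "UK"}, "sales_amount": {"S": "87567.74"}}
  ],
  "Count": 1,
  "ScannedCount": 1
}
EOF
# Unwrap the type tags ({"S": ...}, {"N": ...}) into plain values.
python3 - <<'PY'
import json
scan = json.load(open("scan.json"))
for item in scan["Items"]:
    print(item["country"]["S"], item["sales_amount"]["S"])
PY
```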

Last update: August 24, 2021