Automating with AWS Lambda

Run Commander in the AWS Cloud

About

Commander is a powerful tool that can solve many issues and provide valuable information about your Keeper Security environment. In addition to using Commander on a local desktop or server, Commander can run in a cloud environment such as AWS to perform scheduled or on-demand tasks.

In this example, we will demonstrate how to use Commander with AWS Lambda to run user and usage reports on a scheduled basis.

Prerequisites

  • Commander

  • A Keeper user account with a Master Password login method (SSO login and MFA will not work without human interaction)

  • Access to AWS with permissions to create Lambda functions and associated layers, along with access to AWS CloudShell

Steps

Create a Lambda Layer With AWS CloudShell

Setup

Commander needs to be packaged on a machine that matches what it will run on in AWS Lambda. To do this, we can create the Commander package in CloudShell.

AWS CloudShell comes with Python 3 pre-installed, but to be sure you are using a version supported by Commander (3.6 - 3.11) to build your Lambda Layer, check the version of the installed Python interpreter in your CloudShell console.

$ python3 --version
Python 3.9.16

If the installed Python version is not supported by Commander, install one of the supported versions listed above before proceeding.
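As a quick sanity check, the supported range can be encoded in a few lines of Python (the range below simply reflects the 3.6 - 3.11 versions listed above):

```python
import sys

# Commander supports building its layer with Python 3.6 through 3.11
SUPPORTED = [(3, minor) for minor in range(6, 12)]  # (3, 6) .. (3, 11)

def is_supported(version_info):
    """True if the given interpreter version (e.g. sys.version_info) is supported."""
    return tuple(version_info[:2]) in SUPPORTED

# In CloudShell, is_supported(sys.version_info) should be True before you package.
```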

Building the Layer Content

For the next part of the process, we provide a convenient shell script for you to run from within your CloudShell environment, which will create the zip file that contains the keepercommander package needed for our Lambda Layer.

This script is intended to simplify and streamline much of the layer-content packaging process, both by encapsulating the various command calls that are standard for the process, and by abstracting away some build-process quirks specific to the keepercommander package and its dependencies. As such, we highly recommend using this approach over a more generic one, as it is far less error-prone.

View Script
package_layer_content.sh
#!/usr/bin/env bash

#   To create a `keepercommander` dependency layer for your AWS Lambda function:
#   1. Upload this script to any folder in your CloudShell environment.
#   2. (Optional) Upload your project's `requirements.txt` file to the same folder.
#   3. In that folder, run
#             source ./package_layer_content.sh
#   4. There should now be a file named `commander-layer.zip` that can be uploaded
#     to your S3 bucket, where it can then be used to create a new Lambda layer

MAX_LIB_SIZE=262144000
LAYER_FILENAME='commander-layer.zip'
LAYER_PATH=$(pwd)/$LAYER_FILENAME
LIB_DIR='python'
VENV='commander-venv'
OTHER_DEPS='requirements.txt'

# Clean up previous artifacts
test -f $LAYER_FILENAME && rm $LAYER_FILENAME
test -d $LIB_DIR && rm -rf $LIB_DIR
test -d $VENV && rm -rf $VENV

# Create package folder to zip
mkdir $LIB_DIR

# Create and run virtual environment
python3 -m venv $VENV
source ./$VENV/bin/activate

# Install dependencies and package
pip install cryptography --platform manylinux2014_x86_64 --only-binary=:all: -t $LIB_DIR
pip install keepercommander -t $LIB_DIR

if test -f $OTHER_DEPS; then
  pip install -r $OTHER_DEPS -t $LIB_DIR
fi

deactivate

# Check uncompressed library size
LIB_SIZE=$(du -sb $LIB_DIR | cut -f 1)
LIB_SIZE_MB=$(du -sm $LIB_DIR | cut -f 1)

if [ "$LIB_SIZE" -ge $MAX_LIB_SIZE ]; then
  echo "*****************************************************************************************************************"
  echo 'Operation was aborted'
  echo "The resulting layer has too many dependencies and its size ($LIB_SIZE_MB MB) exceeds the maximum allowed (~262 MB)."
  echo 'Try breaking up your dependencies into smaller groups and package them as separate layers.'
  echo "*****************************************************************************************************************"
else
  zip -r $LAYER_FILENAME $LIB_DIR
  echo "***************************************************************************"
  echo "***************************************************************************"
  echo 'Lambda layer file has been created'
  printf 'To download, copy the following file path:\n%s\n' "$LAYER_PATH"
  echo 'and click on "Actions" in the upper-right corner of your CloudShell console'
  echo "***************************************************************************"
fi

# Clean-up
rm -rf $LIB_DIR
rm -rf $VENV
Bash script for packaging keepercommander Lambda Layer content

To use the script provided above, perform the following steps after downloading the file:

  1. Upload the script to any folder (preferably an empty one) in your CloudShell environment

    • (Optional) If your project has a requirements.txt file containing a list of its dependencies, you can upload it to the same folder to include those dependencies in the resulting layer in addition to the keepercommander package.

  2. In that same folder, run the following command in the terminal:

source ./package_layer_content.sh

  3. You should now have a zip file (commander-layer.zip) in your current folder, which represents the content of your Lambda Layer.

There is a size limit on packaged Lambda layer content (even when it is stored in S3). If including additional dependencies via the requirements.txt file pushes the total content size past this limit, the resulting zip file would be unusable. The script therefore detects this scenario, outputs no layer content, and shows an explanatory message instead.

A relatively simple solution is to break up your dependencies into smaller groups to be packaged into corresponding separate layers. You can remove some (or all) dependencies from requirements.txt and run the script again. Any dependencies excluded from the resulting package can then be packaged separately into another layer using the standard packaging process.
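For reference, the size check the script performs with `du -sb` can be sketched in Python, which is handy if you want to pre-check a dependency group before packaging it (the 262144000-byte limit matches the script's MAX_LIB_SIZE):

```python
import os

# Uncompressed-size ceiling used by the packaging script (~250 MB)
MAX_LIB_SIZE = 262144000

def dir_size_bytes(path):
    """Total size of all files under path, mirroring `du -sb` in the script."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def fits_in_one_layer(path, limit=MAX_LIB_SIZE):
    """True if the package folder is small enough to ship as a single layer."""
    return dir_size_bytes(path) < limit
```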

Creating / Updating a Layer from the Content Zip File

Because the resulting zip file is going to be bigger than 50MB (the maximum allowed to be uploaded directly to a Lambda Layer), we'll have to first upload it to an AWS S3 Bucket, and then link the resulting S3 item to our Lambda Layer.

There are multiple ways to complete the remaining steps just mentioned, and if you prefer a GUI-based route, using the AWS Console is a perfectly valid option at this point. But since we're already in our CloudShell environment, using its built-in AWS CLI command-line tool seems like the simplest and most direct way forward, so that is the method we'll show here.

  1. First, we need to upload our zip file to AWS S3. If you didn't previously create an S3 bucket for this task, you can do so by running the following command in CloudShell:

$ aws s3 mb s3://<bucket-name>

where <bucket-name> is required to be a globally unique name.

  2. Upload the newly-packaged zip file from CloudShell to your S3 bucket:

$ aws s3 cp ./commander-layer.zip 's3://<bucket-name>'

  3. Publish the Lambda layer with the uploaded content:

$ aws lambda publish-layer-version --layer-name <layer-name> \
--description <layer-description> \
--content "S3Bucket=<bucket-name>,S3Key=commander-layer.zip" \
--compatible-runtimes python<your-version>
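If you'd rather script this last step in Python, the same call can be made through boto3's publish_layer_version API. Below is a sketch with placeholder names, assuming the zip has already been uploaded to S3:

```python
def layer_publish_args(layer_name, bucket, key, runtime):
    """Arguments for the Lambda PublishLayerVersion API, matching the CLI call above."""
    return {
        'LayerName': layer_name,
        'Description': 'keepercommander dependency layer',
        'Content': {'S3Bucket': bucket, 'S3Key': key},
        'CompatibleRuntimes': [runtime],
    }

def publish_layer(layer_name, bucket, key, runtime='python3.9'):
    """Publish the layer from the zip already uploaded to S3."""
    import boto3  # imported lazily so layer_publish_args() stays testable offline
    client = boto3.client('lambda')
    return client.publish_layer_version(**layer_publish_args(layer_name, bucket, key, runtime))
```

For example, `publish_layer('commander-layer', '<bucket-name>', 'commander-layer.zip', 'python3.9')` publishes a new layer version and returns its ARN in the response.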

Create a Lambda

In AWS Lambda, use the Lambda editor to write a Python function.

The lambda_handler function is invoked by Lambda each time the function is triggered.

See a complete example of a Commander Lambda function below:

#  _  __
# | |/ /___ ___ _ __  ___ _ _ ®
# | ' </ -_) -_) '_ \/ -_) '_|
# |_|\_\___\___| .__/\___|_|
#              |_|
#
# Keeper Commander
# Copyright 2024 Keeper Security Inc.
# Contact: [email protected]

import os

# Without mounted volumes, Lambda can only write to /tmp. 
# The following environment variables are needed to make sure the Python import cache 
# is written to /tmp and not the user folder.
os.environ['HOME'] = '/tmp'
os.environ['TMPDIR'] = '/tmp'
os.environ['TEMP'] = '/tmp'

from keepercommander import api
from keepercommander.__main__ import get_params_from_config

# By default, Keeper Commander will attempt to create a .keeper directory 
# in the user folder to store the JSON configuration.
# In this case we will create a .keeper directory in /tmp 
# to store the JSON configuration (using get_params_from_config()).
keeper_tmp = '/tmp/.keeper'
os.makedirs(keeper_tmp, exist_ok=True)

# ------------------------------------------------------
# Keeper initialization function
# ------------------------------------------------------
def get_params():
    # Change the default JSON configuration location to /tmp
    params = get_params_from_config(keeper_tmp + '/config.json') 

    # Set username and password for Keeper Commander login
    #params.config = {'sso_master_password': True} # Force Master-Password login for SSO users
    #params.server = os.environ.get('KEEPER_SERVER') # https://keepersecurity.com
    params.user = os.environ.get('KEEPER_USER')
    params.password = os.environ.get('KEEPER_PASSWORD')
    return params

# ------------------------------------------------------
# Keeper JSON report function
# ------------------------------------------------------
def get_keeper_report(params, kwargs):
    from keepercommander.commands.aram import AuditReportCommand
    from json import loads
    
    report_class = AuditReportCommand()
    report = report_class.execute(params, **kwargs)
    return loads(report)
    
# ------------------------------------------------------
# Keeper CLI function
# ------------------------------------------------------
def run_keeper_cli(params, command):
    from keepercommander import cli
    
    cli.do_command(params, command)
    # No return statement as this function runs the CLI command 
    # without returning anything in Python
    
# ------------------------------------------------------
# Lambda handler
# ------------------------------------------------------
def lambda_handler(event, context):
    # Initialize Keeper Commander params
    params = get_params()

    # Keeper login and sync
    api.login(params)
    api.sync_down(params)
    # Enterprise sync (for enterprise commands)
    api.query_enterprise(params)

    run_keeper_cli(
        params, 
        'device-approve -a'
    )
    
    run_keeper_cli(
        params, 
        'action-report --target locked --apply-action delete --dry-run'
    )

    return get_keeper_report(
        params,
        {
            'report_type':'raw', 
            'format':'json',
            'limit':100,
            'event_type':['login']
        }
    )

The program is made up of several parts:

The lambda_handler() function

This function is called when the Lambda is triggered; all other functions should be called from it.

The get_params() function

Using your username and password from environment variables (and a server, if you are not on the US data center), this function initializes the Keeper login configuration. The configuration will materialize as a config.json file when api.login() is called. If your Lambda doesn't have mounted storage, you must ensure this file is written to /tmp, which is achieved here by passing a /tmp path to get_params_from_config().

Commander Functions

Once logged in and synced, you can call functions and instantiate classes from the Commander SDK. The program above includes two such functions: get_keeper_report(), which returns a JSON report (equivalent to the audit-report command), and run_keeper_cli(), which simply executes CLI commands without returning data to Python.
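Since get_keeper_report() returns parsed JSON, its output can be post-processed like any Python data. As a sketch (the exact event fields depend on the report type you request; 'username' here is illustrative), tallying login events per user might look like:

```python
from collections import Counter

def count_events(events, field='username'):
    """Tally audit events by a field; available fields depend on the report type."""
    return Counter(e.get(field, 'unknown') for e in events)
```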

Configure Lambda

Set Timeout

In the general configuration section of the Lambda configuration, it is recommended to increase the timeout value. Some Commander operations take time, and if your script runs longer than the configured timeout, Lambda will terminate it before it finishes.

You can set this to any number you are comfortable with. A value of 300 seconds (5 minutes) should be more than enough time for most processes.
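If you prefer to set the timeout programmatically, boto3's update_function_configuration call covers it; the function name below is a placeholder:

```python
def timeout_args(function_name, timeout_seconds=300):
    """Arguments for the Lambda UpdateFunctionConfiguration API."""
    # Lambda allows 1-900 seconds; 300 is a comfortable ceiling for most Commander runs
    return {'FunctionName': function_name, 'Timeout': timeout_seconds}

def set_timeout(function_name, timeout_seconds=300):
    import boto3  # imported lazily so timeout_args() stays testable offline
    boto3.client('lambda').update_function_configuration(
        **timeout_args(function_name, timeout_seconds))
```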

Select Layer

In the Lambda editor, select the layer that you created above to include the Commander package in your Lambda build.

Create a Run Schedule

Create an Amazon EventBridge trigger for the Lambda and set it to fire at a cadence of your choice (for example, once per day or once every 30 days).

AWS can also be configured to trigger Lambda from a number of other sources including email and SMS triggers. See Amazon's documentation on invoking Lambda for more options.
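As a sketch of creating the schedule programmatically with boto3 (EventBridge is the service behind scheduled Lambda triggers; all names below are placeholders):

```python
def schedule_rule_args(rule_name, schedule='rate(1 day)'):
    """Arguments for the EventBridge PutRule API.
    Schedule expressions like 'rate(30 days)' or 'cron(0 6 * * ? *)' also work."""
    return {'Name': rule_name, 'ScheduleExpression': schedule, 'State': 'ENABLED'}

def schedule_lambda(rule_name, lambda_arn, schedule='rate(1 day)'):
    import boto3  # imported lazily so schedule_rule_args() stays testable offline
    events = boto3.client('events')
    events.put_rule(**schedule_rule_args(rule_name, schedule))
    events.put_targets(Rule=rule_name,
                       Targets=[{'Id': 'commander-lambda', 'Arn': lambda_arn}])
```

Note that the Lambda function must also grant EventBridge permission to invoke it (the lambda add_permission call), which is omitted from this sketch; the console wizard handles that step for you.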

Next Steps

We encourage you to experiment with other Commander functionality, Lambda invocation methods, and other AWS services (such as SNS for utilizing various methods for push notifications -- including SMS messages) to bring automated value to your Keeper processes.

For some examples of using the Commander SDK code, see the example scripts in the Commander GitHub repo.

To learn more about Commander's various methods, see the Command Reference section.

