Configuration settings and features of Automator
The settings in this document control the features and security of the Automator service.
automator_debug
Env Variable: AUTOMATOR_DEBUG
Description: A convenient way to turn debug logging on or off in Automator.
automator_config_key
Env Variable: AUTOMATOR_CONFIG_KEY
Default: Empty
Description: A Base64-url-encoded 256-bit AES key, normally supplied only as an environment variable (since v3.1.0). This setting is required to load the encrypted configuration from the Keeper cloud when there is no shared /usr/mybin/config file storage between container instances.
automator_host
Env Variable: AUTOMATOR_HOST
Default: localhost
Description: The hostname or IP address where the Automator service listens locally. If SSL is enabled (see the ssl_mode parameter), the automator_host value must match the SSL certificate subject name. If the subject name does not match, set disable_sni_check to true.
If the service is running on a machine with multiple network IPs, this setting binds the Automator service to the specified IP.
automator_port
Env Variable: AUTOMATOR_PORT
Default: 8089
Description: The port where the Automator listens. If running in Docker, use the default 8089.
disable_sni_check
Env Variable: DISABLE_SNI_CHECK
Default: false
Description: If SSL is in use, disables the SNI check against the certificate subject name.
email_domains
Env Variable: EMAIL_DOMAINS
Default: null
Description: A comma-separated list of user email domains for which Automator will approve devices or teams. Example: "example.com, test.com, mydomain.com". This setting also requires filter_by_email_domains to be enabled.
filter_by_email_domains
Env Variable: FILTER_BY_EMAIL_DOMAINS
Description: If true, Keeper will consult the email_domains list. If false, the email_domains list will be ignored.
enabled
Env Variable: N/A
Default: false
Description: This determines if Automator is enabled or disabled.
enable_rate_limits
Env Variable: ENABLE_RATE_LIMITS
Default: false
Description: If true, Automator will rate limit incoming calls per the following schedule:
approve_device: 100 calls/minute, with bursts to 200
approve_teams_for_user: 100 calls/minute, with bursts to 200
full_reset: 4 per minute, with bursts to 6
health: 4 per minute
initialize: 4 per minute, with bursts to 6
setup: 4 per minute, with bursts to 6
status: 5 per minute
ip_allow and ip_deny
Env Variable: IP_ALLOW and IP_DENY
Default: ""
Description: These filters restrict which users are eligible for automatic approval. Users accepted by the IP restriction filter still need to be approved in the usual way by Automator; users denied by the IP restriction filter will not be automatically approved.
If ip_allow is empty, all IP addresses are allowed except those listed in ip_deny. If ip_allow is set, devices at IP addresses outside the allowed ranges are not approved by Automator. Each value is a comma-separated list of single IP addresses or IP ranges. The ip_allow list is checked first, then the ip_deny list.
Example 1: ip_allow=
ip_deny=
Example 2:
ip_allow=10.10.1.1-10.10.1.255, 172.58.31.3, 175.200.1.10-175.200.1.20
ip_deny=10.10.1.25
name
Env Variable: N/A
Default: Automator-1
Description: The name of the Automator. It must be unique within an Enterprise. An Automator can be referenced by its name or by its ID.
persist_state
Env Variable: N/A
Default: true
Description: If true, the Automator state will be preserved across shutdowns. Leave this on.
skill
Env Variable: N/A
Default: device_approval
Description: "device_approval" enables the device approval skill; "team_for_user_approval" enables team approvals. An Automator can have multiple skills. "device_approval" is the default.
ssl_certificate
Env Variable: SSL_CERTIFICATE
Default: null
Description: A Base64-encoded string containing the contents of the PFX file used for the SSL certificate. For example, on UNIX, base64 -i my-certificate.pfx will produce the required value.
Using this environment variable overrides the ssl_certificate_filename setting.
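For illustration, a minimal sketch of exporting the variable directly from the PFX file (my-certificate.pfx is a placeholder for your own certificate):

```shell
# Encode the PFX contents and export them for the Automator container.
# "my-certificate.pfx" is a placeholder filename.
export SSL_CERTIFICATE=$(base64 -i my-certificate.pfx)
```

Note that on GNU/Linux systems, `base64 -w 0 my-certificate.pfx` avoids line wrapping in the output, which would otherwise break the environment variable value.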
ssl_certificate_file_password
Env Variable: SSL_CERTIFICATE_PASSWORD
Default: ""
Description: The password on the SSL certificate file. If used, the key password must be empty or identical to it; the underlying library does not allow the file and key passwords to differ.
ssl_certificate_key_password
Env Variable: SSL_CERTIFICATE_KEY_PASSWORD
Default: ""
Description: The password on the private key inside the SSL file. This should be empty or the same as the file password.
ssl_mode
Env Variable: SSL_MODE
Default: certificate
Description: The method of communication for the Automator service. This can be: certificate, self_signed, or none. If none, the Automator server uses HTTP instead of HTTPS. This may be acceptable when Automator is hosted behind a load balancer that terminates SSL traffic.
url
Env Variable: N/A
Default: ""
Description: The URL where the Automator can be contacted.
Keeper Push is a method of SSO device approval using existing devices
Users can approve their additional devices by using a previously approved device. For example, if you are already logged into your Web Vault on your computer and you log into your phone app for the first time, a device approval prompt appears in your Web Vault with the mobile device's information, which you can approve or deny. Device approvals perform an encryption key exchange that allows a user on a new device to decrypt their vault.
Keeper Push is a method of approval that the user handles for themselves. Selecting "Keeper Push" will send a notification to the user's approved devices. For mobile and desktop apps, the push will show as a pop-up, and the user can simply accept the device approval.
Here's an example of Keeper Push approval using mobile as the approver device:
(1) Select Keeper Push
(2) User waits for the push approval to appear on the device where they are already logged in.
(3) The user must already be logged into a different, previously approved device in order to receive the notification.
Ingress configuration for Keeper Automator
Keeper Automator can be deployed in many different ways: on premises, in the cloud, or serverless.
In your firewall inbound traffic rules, set one of the following rulesets:
For US Data Center Customers:
Inbound TCP port 443 from 54.208.20.102/32
Inbound TCP port 443 from 34.203.159.189/32
For US / GovCloud Data Center Customers:
Inbound TCP port 443 from 18.252.135.74/32
Inbound TCP port 443 from 18.253.212.59/32
For EU / Dublin Data Center Customers:
Inbound TCP port 443 from 52.210.163.45/32
Inbound TCP port 443 from 54.246.185.95/32
For AU / Sydney Data Center Customers:
Inbound TCP port 443 from 3.106.40.41/32
Inbound TCP port 443 from 54.206.208.132/32
For CA / Canada Data Center Customers:
Inbound TCP port 443 from 35.182.216.11/32
Inbound TCP port 443 from 15.223.136.134/32
For JP / Tokyo Data Center Customers:
Inbound TCP port 443 from 54.150.11.204/32
Inbound TCP port 443 from 52.68.53.105/32
In addition, you may want to allow traffic from your office network (for the purpose of testing and health checks).
Make sure to create a rule for each IP address listed based on your Keeper geographic data center region.
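As an illustration, if your Automator host sits behind an AWS security group, the US data-center ruleset above could be applied like this (the security group ID is a placeholder for your own):

```shell
# Placeholder security group ID; substitute your own.
SG_ID=sg-0123456789abcdef0

# Allow inbound TCP 443 from the Keeper US data-center IPs.
for CIDR in 54.208.20.102/32 34.203.159.189/32; do
  aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp --port 443 --cidr "$CIDR"
done
```

Use the IP pairs for your own data center region in place of the US addresses, and add a rule for your office network IP if you want to run health checks from there.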
SSO Cloud device approval system
Device Approvals are a required component of the SSO Connect Cloud platform. Approvals can be performed by users, admins, or automatically using the Keeper Automator service.
For customers who authenticate with Keeper SSO Connect Cloud, device approval performs a key transfer, in which the user's encrypted data key is delivered to the device, which is then decrypted locally using their elliptic curve private key.
Keeper SSO Connect Cloud provides Zero-Knowledge encryption while retaining a seamless login experience with any SAML 2.0 identity provider.
When a user attempts to login on a device that has never been used prior, an Elliptic Curve private/public key pair is generated on the new device. After the user authenticates successfully from their identity provider, a key exchange must take place in order for the user to decrypt the vault on their new device. We call this "Device Approval".
A browser running in Guest, Private or Incognito mode identifies itself to Keeper as a new device each time it is launched, and therefore requires a new device approval.
To preserve Zero Knowledge and ensure that Keeper's servers do not have access to any encryption keys, we developed a push-based approval system that can be performed by the user or the designated Administrator. Keeper also allows customers to host a service which performs the device approvals and key exchange automatically, without any user interaction.
Device approval methods include the following:
Keeper Push (using push notifications) to existing user devices
Admin Approval via the Keeper Admin Console
Automatic approval via Keeper Automator service (preferred)
Semi-automated Admin Approval via Commander CLI
Deploy Keeper Automator to Azure Container Instances using the Azure App Gateway Service
This guide provides step-by-step instructions to publish Keeper Automator in a secure VNet with Azure Application Gateway. This method is more advanced than the Azure Container App configuration. If you don't require the use of Azure App Gateway or encrypted SAML requests, it would be best to use the Azure Container App method.
For this method, make sure you already have your SSL Certificate. If not, please follow the steps in the Custom SSL Certificate page.
(1) Open the Azure Cloud Shell
Login to portal.azure.com and click on the Cloud Shell icon.
(2) Create a resource group in your preferred region
If the resource group in Azure does not exist yet, create it. The example here uses the eastus region, but make sure to use your own region.
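Assuming an illustrative resource group name of automator-rg, the Cloud Shell command looks like:

```shell
# Create the resource group; "automator-rg" is a placeholder name,
# and eastus should be replaced with your region.
az group create --name automator-rg --location eastus
```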
If the storage account does not exist yet, create it, making sure to use the correct region (eastus) and the resource group created above. Note: the name you choose (to replace keeperautomatorstorage) must be globally unique within Azure.
If the file share does not exist yet, create it.
List the current shares:
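A sketch of these storage steps, using the same placeholder names (keeperautomatorstorage, automator-rg):

```shell
# Create the storage account; the name must be globally unique within Azure.
az storage account create \
  --name keeperautomatorstorage \
  --resource-group automator-rg \
  --location eastus \
  --sku Standard_LRS

# Create the file share used to hold the Automator configuration files.
az storage share create --name automator --account-name keeperautomatorstorage

# List the current shares to confirm.
az storage share list --account-name keeperautomatorstorage --output table
```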
(5) Create a Virtual Network (VNet) and one Subnet for the container
(6) Update the Virtual Network with the Service Endpoints
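Steps 5 and 6 can be sketched as follows (VNet name, subnet name and address prefixes are illustrative):

```shell
# Create a VNet with one subnet for the container.
az network vnet create \
  --name automator-vnet \
  --resource-group automator-rg \
  --address-prefix 10.0.0.0/16 \
  --subnet-name automator_subnet \
  --subnet-prefix 10.0.1.0/24

# Add the Microsoft.Storage service endpoint so the container
# subnet can reach the file share created earlier.
az network vnet subnet update \
  --resource-group automator-rg \
  --vnet-name automator-vnet \
  --name automator_subnet \
  --service-endpoints Microsoft.Storage
```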
To find a storage key for the account, use the command below. Replace the name of the storage account with your specific name.
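A sketch of the lookup, with placeholder account and resource group names:

```shell
# Print the first storage key ("key1") for the account.
az storage account keys list \
  --account-name keeperautomatorstorage \
  --resource-group automator-rg \
  --query "[0].value" --output tsv
```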
Copy the key1 value which will look like this:
(8) Retrieve Subnet ID
Run the below command to find the Subnet ID:
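For example, assuming the placeholder VNet and subnet names from the earlier steps:

```shell
# Print the full resource ID of the container subnet.
az network vnet subnet show \
  --resource-group automator-rg \
  --vnet-name automator-vnet \
  --name automator_subnet \
  --query id --output tsv
```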
Copy the full subnet ID path that ends with _subnet. It will look like this:
In your local filesystem, create a folder such as automator.
In that folder, create a file called automator.yml with your favorite editor that has the below contents.
Note there are several places where the string value needs to be changed based on your configuration in the prior steps.
subnet ID needs to match the full path of the ID retrieved from step 8
storageAccountName needs to match the value from Step 3
storageAccountKey needs to match the value from Step 7
(10) Upload the SSL Certificate and SSL Password Files
From the Azure interface, navigate to the Resource Group > Storage Account > File Share > into the Automator file share created. From here, upload the automator.yml file, SSL certificate file and SSL certificate password file.
Make sure your files are named automator.yml, ssl-certificate.pfx and ssl-certificate-password.txt
(11) Copy the 3 files to your local CLI workspace
(12) Create the Container Instance
Create the container using the configuration in automator.yml.
Obtain the Internal IP of the container in the response.
For later, set a variable of this IP, for example:
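A sketch of this step, assuming the container defined in automator.yml is named automator:

```shell
# Create the container instance from the YAML definition.
az container create --resource-group automator-rg --file automator.yml

# Look up the container's private IP and keep it in a variable for later steps.
ACI_IP=$(az container show \
  --resource-group automator-rg \
  --name automator \
  --query ipAddress.ip --output tsv)
echo "$ACI_IP"
```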
(13) Create Application Gateway Subnet
(14) Create an Application Gateway
Ensure that the SSL certificate password is replaced in the XXXXXX section.
(15) Locate the Public IP
In the Azure portal interface, navigate to the Resource Group > App Gateway and make note of the public IP address.
(16) Route DNS
Ensure that the DNS for your Automator service (e.g. automator.company.com) is pointed to the Application Gateway public IP address located in Step 15.
The DNS name must match the SSL certificate subject name or else requests will fail.
(17) Create a Health Probe
A health probe will inform the App Gateway that the Automator service is running. From the Azure portal interface, open the Automator App Gateway and then click on "Health probes" from the left menu.
Now create a new Health Probe with the settings as seen in the below screenshot. Make sure to replace the Host with the FQDN set up in Step 16.
Click on "Test" and then add the probe. The test will succeed if the container IP is properly addressed to the host name.
(18) Configure the Web Application Firewall
From the Azure portal interface, open the Automator App Gateway and then click on "Web application firewall" on the left side. Enable the WAF V2 and configure the screen exactly as seen below.
Click on the "Rules" tab then select the Rule set to "OWASP 3.2" and then click on "Enabled" and "Save". This is a critical step.
The final step is to configure Automator using Keeper Commander.
(19) Install Keeper Commander
At this point, the service is running but it is not able to communicate with Keeper yet.
On your workstation, server or any computer, install the Keeper Commander CLI. This is just used for initial setup. The installation instructions including binary installers are here:
Installing Keeper Commander
After Commander is opened, login using the login command. In order to set up Automator, you must login as a Keeper Administrator, or an Admin with the ability to manage the SSO node.
(20) Initialize with Commander
Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create
The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.
The output of the command will display the Automator settings, including metadata from the identity provider.
Note that the "URL" is not populated yet. Edit the URL with the FQDN you selected.
Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:
Initialize the Automator with the new configuration
Enable the service
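Pulled together, the sequence from these steps can be sketched as follows inside the Commander session. The Automator name, node name and URL are illustrative, and exact flags may vary by Commander version, so check each command's help output:

```shell
# Inside the keeper shell session (illustrative values throughout):
automator create "My Automator" --node="Azure Cloud"
automator edit "My Automator" --url="https://automator.company.com"
automator setup "My Automator"    # exchange keys with the Automator
automator init "My Automator"     # initialize with the new configuration
automator enable "My Automator"   # enable the service
```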
At this point, the configuration is complete.
For automated health checks, you can use the below URL:
https://<server>/health
Example curl command:
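A minimal sketch, where automator.company.com is a placeholder for your Automator FQDN:

```shell
# Query the Automator health endpoint; a healthy service responds to this path.
curl -s https://automator.company.com/health
```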
Note this URL will not open in a web browser.
When activating Keeper Automator with AD FS as the identity provider, users will not be able to login until you update the Keeper certificate using the instructions below:
Login to the Keeper Admin Console
Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.
Click on "Export SP Cert".
In the AD FS Management Console select the Keeper Cloud SSO Relying Party Trust properties.
On the "Encryption" tab, replace the old certificate with this new cert.
On the "Signature" tab, Add/Replace the new SP certificate with this new cert.
That's it, your Automator service should now be running.
In the Azure Portal in the "Container Instances" system, you can see the container running. You can also connect to the container (using /bin/sh) and view running logs.
Based on this configuration, it is possible that restarting the container will assign a new IP address from the /24 subnet. To quickly locate the new IP and update the Application Gateway backend pool with the correct IP, the below script can be run from the Azure CLI.
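A sketch of such a script, with placeholder resource names (automator-rg, automator, automator-gateway; appGatewayBackendPool is the default pool name, which may differ in your deployment):

```shell
# Find the container's current private IP after a restart.
NEW_IP=$(az container show \
  --resource-group automator-rg \
  --name automator \
  --query ipAddress.ip --output tsv)

# Point the App Gateway backend pool at the new IP.
az network application-gateway address-pool update \
  --resource-group automator-rg \
  --gateway-name automator-gateway \
  --name appGatewayBackendPool \
  --servers "$NEW_IP"
```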
Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.
The easiest way to test is to open an incognito mode window or guest mode window, go to the Keeper Web Vault, and login with SSO Cloud. If you are not prompted for device approval, the Automator is functioning properly.
Running Keeper Automator with the AWS ECS (Fargate) service
This example demonstrates how to launch the Keeper Automator service in Amazon ECS in the simplest way, with as few dependencies as possible.
Requirements:
A managed SSL certificate via the AWS Certificate Manager
Generate a 256-bit AES key in URL-encoded format using one of the methods below, depending on your operating system:
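One possible method on Linux or macOS with OpenSSL (a sketch; any generator of 32 random bytes in URL-safe Base64 works):

```shell
# Generate 32 random bytes, Base64-encode them, then convert to the
# URL-safe alphabet (+ becomes -, / becomes _) and strip the padding.
openssl rand -base64 32 | tr '+/' '-_' | tr -d '='
```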
Save this value for the environment variables set in the task definition.
If your VPC does not exist, a basic VPC with multiple subnets, a route table and internet gateway must be set up. In this example, there are 3 subnets in the VPC with an internet gateway as seen in the resource map below:
If you would like to capture logs (recommended), go to CloudWatch > Create log group
Name the log group "automator-logs".
Go to IAM > Create role
Select AWS service
Then search for Elastic Container Service and select it.
Select "Elastic Container Service Task" and click Next
Add the "AmazonECSTaskExecution" policy to the role and click Next
Assign the name "ECSTaskWritetoLogs" and then create the role.
Make note of the ARN for this Role, as it will be used in the next steps.
In this example, it is arn:aws:iam::373699066757:role/ECSTaskWritetoLogs
Go to EC2 > Security Groups and click on "Create security group"
Depending on which region your Keeper tenant is hosted in, you need to create inbound rules that allow HTTPS port 443 from the Keeper cloud. The list of IPs for each tenant location is below. In the example below, this is the US data center.
We also recommend adding your workstation's external IP address for testing and troubleshooting.
Assign a name like "MyAutomatorService" and then click "Create".
Remember to add your own IP which you can find from this URL:
After saving the security group, edit the inbound rules again. This time, make the following additions:
Add HTTP port 8089 with the Source set to the security group. This allows traffic from the ALB to the container inside the network and for processing health checks.
Navigate to the Amazon Elastic Container Service.
Select "Create cluster" and assign the cluster name and VPC. In this example we are using the default "AWS Fargate (serverless)" infrastructure option.
The Default namespace can be called "automator"
The "Infrastructure" is set to AWS Fargate (serverless)
Click Create
In your favorite text editor, copy the below JSON task definition file and save it.
Important: Make the following changes to the JSON file:
Change the XXX (REPLACE THIS) XXX on line 24 to the secret key created in Step 1 above.
Replace lines 37-39 with the name and location of the log group from Step 3
Change the XXX on line 44 for the role ID as specific to your AWS role (from Step 4, e.g. 373699066757)
Next, go to Elastic Container Service > Task definitions > Create Task from JSON
Remove the existing JSON and copy-paste the modified JSON file into the box, then click Create.
This task definition can be modified according to your instance CPU/Memory requirements.
In order for an application load balancer in AWS to serve requests for Automator, the SSL certificate must be managed by the AWS Certificate Manager. You can either import or create a certificate that is managed by AWS.
From AWS console, open the "Certificate Manager"
Click on Request
Request a public certificate and click Next
Enter the domain name for the automator service, e.g. automator.lurey.com
Select your preferred validation method and key algorithm
Click Request
Click on the certificate request from the list of certs
If you use Route53 to manage the domain, you can click on the certificate and then select "Create Records in Route53" to instantly validate the domain and create the certificate. If you use a different DNS provider, you need to create the CNAME record as indicated on the screen.
Once the CNAME record is created, the domain will validate within a few minutes.
This certificate will be referenced in Step 11 when creating the application load balancer.
Go to EC2 > Target Groups and click Create target group
Select "IP Addresses" as the target type
Enter the Target group name of "automatortargetgroup" or whatever you prefer
Select HTTP Protocol with Port 8089
Select the VPC which contains the ECS cluster
Select HTTP1
Under Health checks, select the Health check protocol "HTTP"
Type /health as the Health check path
Expand the "Advanced health check settings"
Select Override and then enter port 8089
Click Next
Don't select any targets yet, just click Create target group
Go to EC2 > Load balancers > Create load balancer
Select Application Load Balancer > Create
Assign a name such as "automatoralb" or whatever you prefer
Scheme is "Internet-facing"
IP address type: IPv4
In the Network Mapping section, select the VPC and the subnets which will host the ECS service.
In the Security groups, select "MyAutomatorService" as created in Step 4.
In the Listeners and routing section, select HTTPS port 443 and in the target group select the Target group as created in the prior step (automatortargetgroup).
In the Secure listener settings, select the SSL certificate "from ACM" that was uploaded to the AWS Certificate Manager in Step 9.
Click Create load balancer
Go to Elastic Container Service > Task definitions > Select the task created in Step 8.
From this Task definition, click on Deploy > Create Service
Select Existing cluster of "automator"
Assign Service name of "automatorservice" or whatever name you prefer
For the number of Desired tasks, set this to 1 for now. After configuration is complete, you can increase to the number of tasks you would like to have running.
Under Networking, select the VPC, subnets and replace the existing security group with the ECS security group created in Step 4. In this case, it is called "MyAutomatorService".
For Public IP, turn this ON.
Under Load balancing, select Load balancer type "Application Load Balancer"
Use an existing load balancer and select "automatoralb" created in Step 11.
Use an existing listener, and select the 443:HTTPS listener
Use an existing target group, and select the Target Group from Step 10
Set the Health check path to "/health"
Set the Health check protocol to "HTTP"
Click Create
After a few minutes, the service should start up.
Assuming that the DNS name is hosted and managed by Route53:
Go to Route53 > Create or Edit record
Create an A-record
Set as "Alias"
Route traffic to "Alias to Application and Classic Load Balancer"
Select AWS Region
Select the "automatoralb" Application Load Balancer
Select "Simple Routing"
Select "Save"
The next step is to configure Automator using Keeper Commander while only having one task running.
At this point, the service is running but it is not able to communicate with Keeper yet.
On your workstation, server or any computer, install the Keeper Commander CLI. This is just used for initial setup. The installation instructions including binary installers are here:
Installing Keeper Commander
After Commander is installed, you can type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator, or an Admin with the ability to manage the SSO node.
Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create
The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.
The output of the command will display the Automator settings, including metadata from the identity provider.
Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).
Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:
Initialize the Automator with the new configuration
Enable the service
At this point, the configuration is complete.
For automated health checks, you can use the below URL:
https://<server>/health
Example curl command:
In this example setup, the load balancer will be forwarding /health checks to the target instances over HTTP port 8089.
Now that Keeper Automator is deployed with a single task running, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.
The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.
Assuming that the approval worked, you can now increase the number of tasks running.
The Keeper Automator service can run quite well on a single container, since it processes a low number of requests. However, if you would like multiple containers running, follow the steps below:
Click on the "automatorservice" from the ECS services screen
Click on "Update service"
Select the checkbox on "Force new deployment"
Ensure that the latest task revision is selected
Set the Desired Tasks to the number of containers you would like running
Click "Update"
After a few minutes, the new containers will deploy. Wait until all containers are active.
Launch Keeper Commander (or it might still be open)
For every container, the automator setup, automator init and automator enable commands must be executed.
For example, if there are 3 containers running:
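Since each call is routed by the load balancer to whichever container answers, the trio of commands is repeated once per container so that every instance receives its keys. A sketch, with an illustrative Automator name:

```shell
# Inside the keeper shell session: run this block once per container
# (three times for 3 containers).
automator setup "My Automator"
automator init "My Automator"
automator enable "My Automator"
```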
The Automator logs can be searched and monitored in the "Logs" tab of the ECS service, or in CloudWatch.
Admins can approve end-user SSO Cloud devices
Selecting "Admin Approval" will send the request to the Keeper Administrator with the "Approve Devices" permission. The Admin can perform the approval through the Admin Console "Approval Queue" screen or by being logged into the Admin Console at the time of the request.
(1) User selects "Admin Approval"
(2) User waits for approval or comes back later
(3) Administrator logs into the Admin Console and visits the Approval Queue
(4) Admin reviews the information and approves the device
Select the device to approve and then click "Approve". If the user is waiting, they will be instantly permitted to login. Otherwise, the user can login at a later time without any approval (as long as they don't clear out their web browser cache or reset the app).
A special role permission called "Approve Devices" provides a Keeper Administrator the ability to approve a device.
(1) Go to Roles within the root node or the SSO node
(2) Select the gear icon to control the Admin Permissions for the selected role.
(3) Assign "Approve Devices" permission
Now, any user added to this role is able to login to the Admin Console to perform device approvals.
As with any administrative permission, ensure least privilege.
Running Keeper Automator with the AWS ECS (Fargate) service and Keeper Secrets Manager for secret storage
This example demonstrates how to launch the Keeper Automator service in Amazon ECS with Fargate, while also demonstrating the use of Keeper Secrets Manager for retrieving the secret configuration for the published Docker container.
Since this deployment requires the use of Keeper Secrets Manager, this section reviews the steps needed to set up your Keeper vault and the SSL certificate for publishing the Automator service.
Create an SSL Certificate as described from this page
When this step is completed, you will have two files: ssl-certificate.pfx and ssl-certificate-password.txt
Create a Shared Folder in your vault. This folder will not be shared to anyone except the secrets manager application.
Create a record in the Shared Folder, and make note of the Record UID. Upload the SSL certificate and SSL certificate password files to a Keeper record in the shared folder.
Upload a new file called keeper.properties which contains the following content:
The notable line here is disable_sni_check=true, which is necessary when running the Automator service behind a managed load balancer.
Your shared folder and record should look something like this:
Create a Keeper Secrets Manager ("KSM") application in your vault. If you are not familiar with secrets manager, follow this guide. The name of this application is "Automator" but the name does not matter.
Edit the Shared Folder and add the Automator application to this folder.
Open the secrets manager application, click on "Devices" tab and click "Add Device". Select a base64 configuration. Download and save this configuration for use in the ECS task definition.
If your VPC does not exist, a basic VPC with multiple subnets, a route table and internet gateway must be set up. In this example, there are 3 subnets in the VPC with an internet gateway as seen in the resource map below:
Go to CloudWatch > Create log group
Name the log group "automator-logs".
Go to IAM > Create role
Select AWS service
Then search for Elastic Container Service and select it.
Select "Elastic Container Service Task" and click Next
Add the "AmazonECSTaskExecution" policy to the role and click Next
Assign the name "ECSTaskWritetoLogs" and then create the role.
Make note of the ARN for this Role, as it will be used in the next steps.
In this example, it is arn:aws:iam::373699066757:role/ECSTaskWritetoLogs
Go to EC2 > Security Groups and click on "Create security group"
Depending on which region your Keeper tenant is hosted in, you need to create inbound rules that allow HTTPS port 443. The list of IPs for each tenant location is on this page. In the example below, this is the US data center.
We also recommend adding your workstation's external IP address for testing and troubleshooting.
Assign a name like "MyAutomatorService" and then click "Create".
After saving the security group, edit the inbound rules again. This time, add HTTPS port 443 and select the security group in the drop-down. This will allow the load balancer to monitor health status and distribute traffic.
We'll create another security group that controls NFS access to EFS from the cluster.
Go to EC2 > Security Groups and click on "Create security group"
Set a name such as "MyAutomatorEFS".
Select Type of "NFS" and then select Custom and then the security group that was created in the prior step for the ECS cluster. Click "Create security group".
Note the security group ID for the next step. In this case, it is sgr-089fea5e4738f3898
At present, the Automator service needs access to two different folders. In this example setup, we are creating one volume to store the SSL certificate and SSL passphrase files. The second volume stores the property file for the Automator service. These 3 files are in your Keeper record.
Go to AWS > Elastic File System and click Create file system
Call it "automator_config" and click Create
Again, go to Elastic File System and click Create file system. Call this one automator_settings and click Create.
Note the File system IDs displayed. These IDs (e.g. fs-xxx) will be used in the ECS task definition.
After a minute, the 2 filesystems will be available. Click on each one and then select the "Network" tab then click on "Manage".
Change the security group for each subnet to the one created in the above step (e.g. "MyAutomatorEFS") and click Save. Make this network change to both filesystems that were created.
Navigate to the Amazon Elastic Container Service.
Select "Create cluster" and assign the cluster name and VPC. In this example we are using the default "AWS Fargate (serverless)" infrastructure option.
The Default namespace can be called "automator"
The "Infrastructure" is set to AWS Fargate (serverless)
Click Create
In your favorite text editor, copy the below JSON task definition file and save it.
Make the following changes to the JSON file:
Change the XXXCONFIGXXX value to a base64 config from Keeper Secrets Manager created in the beginning of this guide
Change the 3 instances of "XXXXX" to the Record UID containing the SSL certificate, SSL certificate password and settings file in your vault shared folder which KSM is accessing.
Change the two File system IDs (fs-XXX) to yours from the above steps
Change the XXX in the role ID to the value specific to your AWS role
Change the eu-west-1 value in two spots to the region of your ECS service
Next, go to Elastic Container Service > Task definitions > Create Task from JSON
Remove the existing JSON and copy-paste the contents of the JSON file above into the box, then click Create.
This task definition can be modified according to your instance CPU/Memory requirements.
In order for an application load balancer in AWS to serve requests for Automator, the SSL certificate must be managed by the AWS Certificate Manager.
Go to AWS Certificate Manager and Click on Import
On your workstation, convert the SSL certificate (.pfx) file into a PEM-encoded certificate body, a PEM-encoded private key, and a PEM-encoded certificate chain.
Since you already have the .pfx file, this can be done using openssl. Download the ssl-certificate.pfx file and the certificate password to your workstation, then run the commands below:
Generate the PEM-encoded certificate body
Generate the PEM-encoded private key
Generate the PEM-encoded certificate chain
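The three extraction steps above can be sketched with openssl as follows (the output filenames and the placeholder password are assumptions; substitute your actual .pfx password):

```shell
# 1. PEM-encoded certificate body (the server certificate itself)
openssl pkcs12 -in ssl-certificate.pfx -clcerts -nokeys \
  -out certificate-body.pem -passin pass:YOUR_PFX_PASSWORD

# 2. PEM-encoded private key (written unencrypted; protect this file)
openssl pkcs12 -in ssl-certificate.pfx -nocerts -nodes \
  -out private-key.pem -passin pass:YOUR_PFX_PASSWORD

# 3. PEM-encoded certificate chain (intermediate/CA certificates only)
openssl pkcs12 -in ssl-certificate.pfx -cacerts -nokeys \
  -out certificate-chain.pem -passin pass:YOUR_PFX_PASSWORD
```

The ACM import screen expects these three values in order: certificate body, private key, then certificate chain.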
Copy the contents of the 3 files into the screen, e.g.
Go to EC2 > Target Groups and click Create target group
Select "IP Addresses" as the target type
Enter the Target group name of "automatortargetgroup" or whatever you prefer
Select HTTPS Protocol with Port 443
Select the VPC which contains the ECS cluster
Select HTTP1
Under Health checks, select the Health check protocol "HTTPS"
Enter /health as the Health check path
Click Next
Don't select any targets yet, just click Create target group
Go to EC2 > Load balancers > Create load balancer
Select Application Load Balancer > Create
Assign a name such as "automatoralb" or whatever you prefer
Scheme is "Internet-facing"
IP address type: IPv4
In the Network Mapping section, select the VPC and the subnets which will host the ECS service.
In the Security groups, select "MyAutomatorService" as created in Step 4.
In the Listeners and routing section, select HTTPS port 443 and in the target group select the Target group as created in the prior step (automatortargetgroup).
In the Secure listener settings, select the SSL certificate "from ACM" that was uploaded to the AWS Certificate Manager in Step 9.
Click Create load balancer
Go to Elastic Container Service > Task definitions > Select the task created in Step 8.
From this Task definition, click on Deploy > Create Service
Select Existing cluster of "automator"
Assign Service name of "automatorservice" or whatever name you prefer
Important: For the number of Desired tasks, set this to 1 for now. After configuration, you can increase it to the number of tasks you would like to have running.
Under Networking, select the VPC, subnets and replace the existing security group with the ECS security group created in Step 4. In this case, it is called "MyAutomatorService".
For Public IP, turn this ON.
Under Load balancing, select Load balancer type "Application Load Balancer"
Use an existing load balancer and select "automatoralb" created in Step 11.
Use an existing listener, and select the 443:HTTPS listener
Use an existing target group, and select the Target Group from Step 10
Click Create
After a few minutes, the service should start up.
Assuming that the DNS name is hosted and managed by Route53:
Go to Route53 > Create or Edit record
Create an A-record
Set as "Alias"
Route traffic to "Alias to Application and Classic Load Balancer"
Select AWS Region
Select the "automatoralb" Application Load Balancer
Select "Simple Routing"
Select "Save"
The next step is to configure Automator using Keeper Commander while only having one task running.
At this point, the service is running but it is not able to communicate with Keeper yet.
On your workstation, server or any computer, install the Keeper Commander CLI. This is just used for initial setup. The installation instructions including binary installers are here:
Installing Keeper Commander
After Commander is installed, you can type keeper shell to open the session, then log in using the login command. In order to set up Automator, you must log in as a Keeper Administrator, or an Admin with the ability to manage the SSO node.
Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create
The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.
The output of the command will display the Automator settings, including metadata from the identity provider.
Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).
Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:
Initialize the Automator with the new configuration
Enable the service
At this point, the configuration is complete.
For automated health checks, you can use the below URL: https://<server>/health
Example curl command:
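A minimal example (the hostname is a placeholder; use the DNS name pointing at your load balancer):

```shell
# A healthy Automator instance responds on /health with HTTP 200.
SERVER="automator.example.com"   # placeholder hostname
curl -sS --fail "https://${SERVER}/health" || echo "Automator not reachable"
```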
In this example setup, the load balancer will be sending health checks to the target instances.
Now that Keeper Automator is deployed with a single task running, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.
The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.
Assuming that the approval worked, you can now increase the number of tasks running.
From the ECS screen, open the automator service and click "Update Service". Then set the number of tasks that you would like to have running.
The Automator logs can be searched and monitored in the "Logs" tab of the ECS service, or in CloudWatch.
Automatic device approval service for SSO Connect Cloud environments
The Keeper Automator is an optional service which performs instant device approvals, team approvals and team user assignments upon a successful login from the SSO identity provider.
Once Automator is running, users can seamlessly access Keeper on a new (not previously approved) device after a successful authentication with your identity provider, without any further approval steps.
If the Automator service is not set up, users and admins can still perform device approvals through Push Approval methods.
Keeper Automator is a lightweight service that can be deployed in your cloud or on-prem environment, in many different ways.
Keeper SSO Connect provides seamless authentication into the Keeper vault using your identity provider. Normally a user must approve their new device, or an Admin can approve a new device for a user. The Automator service is entirely optional, created for Admins who want to remove any friction associated with device approvals.
To preserve Zero Knowledge and automate the transfer of the Encrypted Data Key (EDK) to the user's device, a service must be run which is operated by the Enterprise (instead of hosted by Keeper). The service can be run several different ways, either in the cloud or self-hosted.
An in-depth explanation of SSO Connect encryption model is documented here.
Using the Automator service creates a frictionless experience for users, however it requires that you have fully secured your identity provider.
Please refer to our Recommended Security Settings guide for securing your Keeper environment.
Depending on your environment, select from one of the following installation methods. The Azure Container App, Azure App Services, AWS Elastic Container Service and Google Cloud with GCP Cloud Run are the best choices if you use one of these cloud services.
Running the Keeper Automator service on the Google Cloud platform with Cloud Run
This guide provides step-by-step instructions to run the Keeper Automator service on Google Cloud, specifically using the GCP Cloud Run service. The Automator is also protected by the Google Cloud Armor service in order to restrict access to Keeper's infrastructure IPs.
From the Google Cloud console (https://console.cloud.google.com) create a new project.
Then click "Select Project" on this new project.
For this documentation, we'll use the Google Cloud Shell from the web interface. Click to activate the Cloud Shell or install this on your local machine.
Note the Project ID, which in this case is keeper-automator-439714. This Project ID will be used in subsequent commands.
If you haven't done so, you must link a valid Billing account to the project. This is performed in the Google Cloud user interface from the Billing menu.
From the Cloud Shell, generate a 256-bit AES key in URL-encoded format:
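One way to generate such a key (a sketch; any source of 32 cryptographically random bytes works) is with openssl, translating the standard base64 alphabet to the URL-safe one:

```shell
# 32 random bytes -> base64, then swap '+' and '/' for the
# URL-safe characters '-' and '_'
openssl rand -base64 32 | tr '+/' '-_'
```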
Example key: 6C45ibUhoYqkTD4XNFqoZoZmslvklwyjQO4ZqLdUECs=
Save the resulting Key in Keeper. This will be used as an environment variable when deploying the container. This key ensures that ephemeral containers will be configured at startup.
You need to select a region for the service to run. The available region codes can be found by using the following command:
For this example, we will use us-east1
Run the two commands below, replacing "us-east1" with your preferred value from Step 5
Create a file called cloudbuild.yaml that contains the following content, ensuring to replace the string "us-east1" with your preferred location from Step 5. Leave all other content the same.
Replace us-east1 with your preferred location from Step 5
Upload this file through the Cloud Shell user interface, or create the text file in the cloud shell.
From the Cloud Shell, execute the following:
Then execute the build:
This will sync the latest Automator container to your Google Artifact Registry.
The following command will deploy the Keeper Automator service to Google Cloud Run from your Artifact Registry. This service is limited to internal access and load balancers only.
Note the following:
[PROJECT_ID] needs to be replaced by your Project ID as found in Step 2
XXX is replaced with the configuration key that you created in Step 3 above
AUTOMATOR_PORT tells the container to listen on port 8080
SSL_MODE allows the SSL connection to terminate with the load balancer
DISABLE_SNI_CHECK allows the request to complete behind the load balancer
The minimum number of instances is 1, which is acceptable in most environments.
If min/max is not set, the service will drop to zero instances and start up on the first request
The Keeper system is going to communicate with your Automator service through a publicly routable DNS name. In this example, I'm using gcpautomator.lurey.com. In order to set this up, you need to first create a managed SSL certificate. The command for this is below.
Replace gcpautomator.lurey.com with your desired name
The next command links the Cloud Run service to a Google Cloud Load Balancer.
Replace us-east1 with the region of your Cloud Run service from Step 5.
This creates a backend service that links to the Cloud Run service:
This attaches the NEG to the backend service.
Replace us-east1 with the desired location specified in Step 5
Get the IP address and note for later:
The IP address must be mapped to a valid DNS.
In your DNS provider, set up an A-record pointing to the IP.
This step is important. Ensure that the desired domain name is pointing to the IP address provided. This step must be performed in your DNS provider directly.
Create a global forwarding rule to direct incoming requests to the target proxy:
The Keeper Automator service should be restricted to only the necessary IPs as discussed on the Ingress Requirements page.
Let's create a Cloud Armor security policy to restrict access to certain IP addresses.
In this step, we will allow the IPs of Keeper's US Data Center as found on this page. Additional rules can be created as you see fit.
We recommend adding your external IP to this list, so that you can test the Automator service
We will also add a default "deny" rule to restrict other traffic:
Finally, attach the Cloud Armor security policy to the backend service
At this point, the Automator service should be running and the service should be exposed only to the Keeper infrastructure.
The next step is to finish the configuration with the Keeper Commander utility.
Keeper Commander is required to perform the final step of Automator configuration. It can be run from anywhere; it does not need to be installed on the server.
On your workstation or server, install Keeper Commander CLI. The installation instructions including binary installers are here:
https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup
After Commander is installed, launch Keeper Commander, or from an existing terminal you can type keeper shell to open the session, then log in using the login command. In order to set up Automator, you must log in as a Keeper Administrator, or an Admin with the ability to manage the SSO node.
Create the Automator using a series of commands, starting with automator create
The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.
The output of the command will display the Automator settings, including metadata from the identity provider.
Note that the "URL" is not populated yet. This will be populated with the automator URL.
Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).
NOTE: Replace gcpautomator.lurey.com with the domain name you created in Step 15
Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:
Initialize the Automator with the new configuration
Enable the service
Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.
The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.
To update the container in Google when there is a new version available from Keeper, run the following commands:
If you need assistance, please email commander@keepersecurity.com or open a support ticket.
Keeper Automator sample implementation using standalone Java service
This guide provides step-by-step instructions to run Keeper Automator as a standalone Java service on any Linux instance.
Make sure you already have your SSL Certificate! If not, please follow the steps in the Create SSL Certificate page.
In preparation for the service, ensure that at least Java 17 is installed. On a standard Amazon AWS Linux 2 instance, the Java 17 SDK can be installed using the below command:
To check which version is running, type:
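On Amazon Linux 2, the commands might look like the following (the Corretto package name is an assumption; verify it with yum search corretto):

```shell
# Install the Amazon Corretto 17 JDK (package name may vary by distro)
sudo yum install -y java-17-amazon-corretto-devel

# Confirm the installed version
java -version
```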
From the Automator instance, download and unzip the Keeper Automator service:
If it does not already exist, create a "config" folder in the extracted location.
Upload the .pfx file created in the Create Certificate page to the Automator's config/ folder and make sure the file is named ssl-certificate.pfx.
For example, using scp:
If your ssl-certificate.pfx file is protected by a passphrase, you also need to create a file called ssl-certificate-password.txt and place it in the config/ folder:
For example:
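For instance (the passphrase value is a placeholder), from the directory where Automator was extracted:

```shell
# Create the config folder if needed and write the passphrase.
# printf '%s' avoids appending a trailing newline to the file.
mkdir -p config
printf '%s' 'MyPfxPassphrase' > config/ssl-certificate-password.txt
```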
From the Automator instance, start the service using java -jar. In the example below, it is run in the background using nohup.
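A sketch of the startup command (the jar filename is an assumption; use the actual jar name from the unzipped bundle):

```shell
# Run the Automator service in the background so it survives logout;
# stdout and stderr are captured in automator.out
nohup java -jar keeper-automator.jar > automator.out 2>&1 &
```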
On the Windows command line or PowerShell, the command must be executed exactly as shown below:
Confirm the service is running through a web browser (note that port 443 must be open from whatever device you are testing). In this case, the URL is: https://<server>/health
This URL can also be used for automated health checks.
Example:
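A minimal health check from the command line (the hostname is a placeholder; use your Automator's DNS name):

```shell
SERVER="automator.company.com"   # placeholder hostname
curl -sS --fail "https://${SERVER}/health" || echo "Automator not reachable"
```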
Now that the service is running, you need to integrate the Automator into your environment using Keeper Commander.
Keeper Commander is required to perform the final step of Automator configuration. It can be run from anywhere; it does not need to be installed on the server.
On your workstation or server, install Keeper Commander CLI. The installation instructions including binary installers are here:
https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup
After Commander is installed, you can type keeper shell to open the session, then log in using the login command. In order to set up Automator, you must log in as a Keeper Administrator, or an Admin with the ability to manage the SSO node.
Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create
The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.
The output of the command will display the Automator settings, including metadata from the identity provider.
Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).
Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:
Next, send other IdP metadata to the Automator:
Enable the Automator service
At this point, the configuration is complete.
When activating Keeper Automator with AD FS as the identity provider, users will not be able to login until you update the Keeper certificate using the instructions below:
Login to the Keeper Admin Console
Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.
Click on "Export SP Cert".
In the AD FS Management Console select the Keeper Cloud SSO Relying Party Trust properties.
On the "Encryption" tab, replace the old certificate with this new cert.
On the "Signature" tab, add or replace the SP certificate with this new cert.
We recommend restricting network access to the service. Please see the Ingress Requirements section for a list of IP addresses to allow.
Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.
The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.
When you stop/start the Keeper Automator service, or if you restart the server, you may need to use Keeper Commander to re-initialize the service endpoint.
Please check the Keeper Automator logs; they usually describe the issue. On Linux, the logs are located in the install directory.
When you reconfigure the Keeper Automator service, you may need to use Keeper Commander to re-initialize the service endpoint. (Keeper Commander documentation is linked here).
The commands required on Keeper Commander to re-initialize your Automator instance are below:
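Based on the commands used during initial setup, the re-initialization sequence inside keeper shell would look like the following (the automator name placeholder must be replaced with your own):

```
automator setup <automator name>
automator init <automator name>
automator enable <automator name>
```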
Simple Deployment with Azure Container App
This guide provides step-by-step instructions to publish Keeper Automator to the Azure Container App service. This provides a simple and straightforward way to host the Automator service in the cloud.
For environments such as Azure Government, GCC High and DoD, use the Azure App Services method, since the Azure Container App service may not be available in those regions.
Open a command line interface and generate a 256-bit AES key in URL-encoded format using one of the methods below, depending on your operating system:
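On Linux or macOS, one option (a sketch) is openssl; Windows users can run the same command from Git Bash or WSL:

```shell
# 256-bit key, base64-encoded with URL-safe characters
openssl rand -base64 32 | tr '+/' '-_'
```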
Save the resulting value produced by this command for Step (3).
From Azure, create a new Container App.
Select or create a new Resource Group
Set the Container App Name to "keeperautomator" or whatever you prefer
Select "Container Image" as the Deployment Source
Select the region where you would like the service hosted
Create a new Apps Environment or select an existing environment
Click Next : Container >
In the "Container" step, make the following selections:
Uncheck the "Use quickstart image"
Select "Docker Hub or other registries"
Select "Public"
Select Registry login server as docker.io
Set the Image and tag as keeper/automator:latest
Skip to "Container resource allocation"
For CPU and Memory, 0.5 CPU cores and 1Gi memory is sufficient, but this can be updated based on your volume of new device logins.
Create an environment variable called AUTOMATOR_CONFIG_KEY with the value from Step 1 above of the setup guide.
Create an environment variable called AUTOMATOR_PORT with the value of 8089
Create an environment variable called SSL_MODE with the value of none
Click "Next : Ingress >"
On the Ingress setup screen, select the following:
Enable Ingress
Ingress traffic: Accepting traffic from anywhere (we'll modify this in a later step)
Ingress type: HTTP
Target port: 8089
Click "Review + Create" and then click "Create"
After a few minutes, the container app will be created and automatically start up.
Clicking on "Go to Resource" will take you to the container environment.
To restrict communications to the Keeper Automator service, click on the "Ingress" link on the left side of the screen under the "Settings" section
Click on "Ingress"
Select "Allow traffic from IPs configured below, deny all other traffic"
Click "Add" to add two of Keeper's IPs and any of your IPs required for testing the service.
Click Save
If you want to be able to run a health check, then consider adding your own IP address. Find your IP address at https://checkip.amazonaws.com
In order to prevent Azure from downscaling to zero instances, it's important to set the minimum number of instances to 1.
Navigate to the "Containers" section under the "Application"
Click on the "Edit and deploy" section at the top and then navigate to the Scale section. Set the Min and Max replica to "1"
Next, click on the "Container" tab
Click on the container name link, in this case "keeperautomator" at the bottom
Navigate to Health Probes and enter the following under each section:
Under "Liveness probes":
Enable liveness probes
Transport: HTTP
Path: /health
Port: 8089
Initial delay seconds: 5
Period seconds: 30
Under "Startup probes":
Enable startup probes
Transport: HTTP
Path: /health
Port: 8089
Initial delay seconds: 5
Period seconds: 30
Under "Volume Mounts":
Select "Create new volume"
Add volume type automatordata
Add Mount Path as /usr/mybin/config
Finish the configuration
Click on Save
Then click on Create to build the new configuration
After a few minutes, the new containers should start up
From the Overview section of the Container App, on the right side is the "Application URL" that was assigned. Copy this and use this Application URL in the next step.
For example, https://craigautomator1.xyx-1234.azurecontainerapps.io
Keeper Commander is required to perform the final step of Automator configuration. It can be run from anywhere; it does not need to be installed on the server.
On your workstation or server, install Keeper Commander CLI. The installation instructions including binary installers are here:
https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup
After Commander is installed, launch Keeper Commander, or from an existing terminal you can type keeper shell to open the session, then log in using the login command. In order to set up Automator, you must log in as a Keeper Administrator, or an Admin with the ability to manage the SSO node.
Create the Automator using a series of commands, starting with automator create
The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.
The output of the command will display the Automator settings, including metadata from the identity provider.
Note that the "URL" is not populated yet. This is the Application URL from Step 8.
Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).
Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:
Initialize the Automator with the new configuration
Enable the service
At this point, the configuration is complete.
For external health checks, you can use the below URL: https://<server>/health
Example curl command:
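A minimal example (the hostname is a placeholder; use the Application URL from Step 8):

```shell
SERVER="craigautomator1.xyx-1234.azurecontainerapps.io"   # placeholder
curl -sS --fail "https://${SERVER}/health" || echo "Automator not reachable"
```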
Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.
The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.
Azure Container Apps have many advanced capabilities that are beyond the scope of this documentation. A few of the capabilities are provided below.
If you would like to have multiple containers running the Keeper Automator service:
Click on "Scale and replicas"
Click "Edit and deploy"
Click on the "Scale" tab
Select the min and max number of containers. The minimum should be at least 1.
Click Create
After a minute, the new version will deploy
Run automator setup xxx multiple times (once for each container)
Run automator init xxx multiple times (once for each container)
The Keeper Automator logs can be viewed and monitored using the "Console" or "Log stream" section.
For example, to tail the log file of a running Automator service:
Click on Console
Select "/bin/sh"
Click Connect
At the prompt, type: tail -f logs/keeper-automator.log
Environment variables can be passed into the Container to turn on/off features of the runtime environment. The variables with their description can be found at the Advanced Settings page.
Keeper Automator sample implementation on a Windows server
The instructions on this page are for customers who would like to run the Automator service on a Windows server without Docker.
Make sure you already have your SSL Certificate! If not, please follow the steps in the Create SSL Certificate page.
On the Automator instance, download, unzip and run the Keeper Automator installer:
https://keepersecurity.com/automator/keeper-automator-windows.zip
In the setup screens, check the "Java" box to ensure that the Java runtime is embedded in the installation. Currently it ships with the Java 17 runtime, and this is updated as new versions are released.
This will install Keeper Automator into:
C:\Program Files\Keeper Security\Keeper Automator\
The configuration and settings will be set up in:
C:\ProgramData\Keeper Automator\
In the C:\ProgramData\Keeper Automator\ folder please create a folder called "config".
Place the ssl-certificate.pfx file (from the Custom SSL Certificate page) into the Automator configuration settings folder at C:\ProgramData\Keeper Automator\Config
If your ssl-certificate.pfx file is protected by a passphrase, you also need to create a file called ssl-certificate-password.txt in the folder C:\ProgramData\Keeper Automator\Config
From the Services screen, select Keeper Automator and restart the service.
Confirm the service is running through a web browser (note that port 443 must be open from whatever device you are testing). In this case, the URL is: https://automator.company.com/api/rest/status
For automated health checks, you can also use the below URL:
https://automator.company.com/health
If you are deploying on Windows running Defender Firewall, most likely you will need to open port 443 (or whatever port you specified) on Windows Defender Firewall. Follow these steps:
Open the Start menu, type Windows Defender Firewall, and select it from the list of results. Select Advanced settings in the side navigation menu, then select Inbound Rules. To open a port, select New Rule and complete the instructions.
Here are a couple of screenshots:
Now that the service is running, you can integrate the Automator into your Keeper environment using Keeper Commander.
(5) Install Keeper Commander
On your workstation or server, install Keeper Commander CLI. The installation instructions including binary installers are here:
https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup
(6) Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create, and name the automator whatever you want.
The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.
The output of the command will display the Automator settings, including metadata from the identity provider.
Note that the "URL" is not populated yet. So let's do that next.
Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).
Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:
If an error is generated on this step, please stop and start the Windows service, and ensure that the port is available.
Next, initialize the Automator with the new configuration with the command below:
Lastly, enable the Automator service with the following command:
At this point, the configuration is complete.
When activating Keeper Automator with AD FS as the identity provider, users will not be able to login until you update the Keeper certificate using the instructions below:
Login to the Keeper Admin Console
Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.
Click on "Export SP Cert".
In the AD FS Management Console select the Keeper Cloud SSO Relying Party Trust properties.
On the "Encryption" tab, replace the old certificate with this new cert.
On the "Signature" tab, Add/Replace the new SP certificate with this new cert.
Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.
The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.
When you reconfigure the Keeper Automator service, you'll need to use Keeper Commander to re-initialize the service endpoint.
Please check the Keeper Automator logs. This usually describes the issue. On Windows, they can be found in C:\ProgramData\Keeper Automator\logs\
When you reinstall the Keeper Automator service, you'll need to use Keeper Commander to re-initialize the service endpoint. (Keeper Commander documentation is linked here).
The commands required on Keeper Commander to re-initialize your Automator instance:
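Mirroring the commands used during initial setup, the re-initialization sequence inside keeper shell would look like the following (the automator name placeholder must be replaced with your own):

```
automator setup <automator name>
automator init <automator name>
automator enable <automator name>
```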
Installation of Keeper Automator as a Kubernetes service
This guide provides step-by-step instructions to publish Keeper Automator as a Kubernetes service.
Make sure you already have your SSL Certificate! If not, please follow the steps in the Create SSL Certificate page.
Installation and deployment of Kubernetes is not the intent of this guide; however, a very basic single-node environment using two EC2 instances (Master and Worker) without any platform dependencies is documented here for demonstration purposes. Skip to Step 2 if you already have your Kubernetes environment running.
Kubernetes requires a container runtime, and we will use Docker.
These packages need to be installed on both master and worker nodes. The example here is using AWS Amazon Linux 2 instance types.
On the machine you want to use as the master node, run:
The --pod-network-cidr argument is required for certain network providers. Substitute the IP range you want your pods to have.
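A sketch of the init command (the CIDR shown is flannel's default pod network; substitute your own range if you use a different network provider):

```shell
# Bootstrap the control plane; must be run as root on the master node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
```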
After kubeadm init completes, it will print a command that you can use to join worker nodes to the master. Make a note of the response and initialization code for the next step.
Set up the local kubeconfig:
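The standard commands printed by kubeadm for this step are:

```shell
# Copy the admin kubeconfig into the current user's home directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```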
You need to install a Pod network before the cluster will be functional. For simplicity, you can use flannel:
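The flannel manifest can be applied directly from the project's releases (URL current as of this writing; check the flannel project for the latest):

```shell
# Install the flannel CNI plugin across the cluster
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
```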
On each machine you want to add as a worker node, run the command below with the initialization code.
Note that port 6443 must be open between the worker and master node in your security group.
After the worker has been joined, the Kubernetes cluster should be up and running. You can check the status of your nodes by running kubectl get nodes on the master.
The SSL certificate for the Keeper Automator is provided to the Kubernetes service as a secret. To store the SSL certificate and SSL certificate password (created from the SSL Certificate guide), run the below command:
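Assuming both files are in the current directory, the command would look like the following (the secret name and key names are assumptions; they must match the names referenced in your deployment manifest):

```shell
# Store the certificate and its password as a single Kubernetes secret
kubectl create secret generic automator-secret \
  --from-file=ssl-certificate.pfx=./ssl-certificate.pfx \
  --from-file=ssl-certificate-password.txt=./ssl-certificate-password.txt
```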
Below is a manifest file that can be saved as automator-deployment.yaml. This file contains configurations for both a Deployment resource and a Service resource.
The deployment resource runs the Keeper Automator docker container
The SSL certificate and certificate password files are referenced as a mounted secret
The secrets are copied over to the pod in an initialization container
The Automator service listens on port 30000 and routes traffic to port 443 on the container.
In this step, we deploy only a single container (replicas: 1) so that we can configure it; we will increase the number of replicas in the last step.
The service should start up within 30 seconds.
Confirm the service is running through a web browser (note that port 30000 must be opened from whatever device you are testing). In this case, the URL is: https://automator2.lurey.com:30000/api/rest/status
For automated health checks, you can also use the below URL:
https://<server>/health
Example:
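For example, using curl (the hostname and port are from the example above):

```shell
curl https://automator2.lurey.com:30000/health
```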
Now that the service with a single pod is running, you need to integrate the Automator into your environment using Keeper Commander.
Keeper Commander is required to configure the pod to perform automator functions. This can be run from anywhere.
On your workstation, install Keeper Commander CLI. The installation instructions including binary installers are here:
https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup
After Commander is installed, you can type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator, or an Admin with the ability to manage the SSO node.
Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create
The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.
The output of the command will display the Automator settings, including metadata from the identity provider.
Note that the "URL" is not populated yet. So let's do that next.
Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).
Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:
Next, send other IdP metadata to the Automator:
Enable the Automator service
At this point, the configuration is complete.
We recommend limiting network access to the service from Keeper's servers and your own workstation. Please see the Ingress Requirements section for a list of Keeper IP addresses to allow.
To ensure that the Automator service is working properly with a single pod, follow the below steps:
Open a web browser in an incognito window
Login to the Keeper web vault using an SSO user account
Ensure that no device approvals are required after successful SSO login
At this point, we are running a single pod configuration. Now that the first pod is set up with the Automator service and configured with the Keeper cloud, we can increase the number of pods.
Update the "replicas" statement in the YAML file with the number of pods you would like to run. For example:
Then apply the change:
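For example, after changing replicas: 1 to replicas: 3 in automator-deployment.yaml:

```shell
kubectl apply -f automator-deployment.yaml
kubectl get pods    # verify the new pods reach the Running state
```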
With more than one pod running, the containers will be load balanced in a round-robin type of setup. The Automator pods will automatically and securely load their configuration settings from the Keeper cloud upon the first request for approval.
The log files running the Automator service can be monitored for errors. To get a list of pods:
Connect via terminal to the Automator container using the below command:
The log files are located in the logs/ folder. Instead of connecting to the terminal, you can also just tail the logfile of the container from this command:
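Sketches of those kubectl commands (the pod name is a placeholder taken from your own kubectl get pods output):

```shell
kubectl get pods                          # list the Automator pods
kubectl exec -it <pod-name> -- /bin/sh    # open a shell inside the container
kubectl logs -f <pod-name>                # tail the container log without connecting
```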
Deployment with Azure App Services
This guide provides step-by-step instructions to instantiate Keeper Automator as a Web App within Azure App Services. For environments such as GCC High and DoD, this service is available for hosting the Automator.
Open a command line interface and generate a 256-bit AES key in URL-encoded format using one of the methods below, depending on your operating system:
Save the resulting value produced by this command for Step (6).
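On Linux or macOS, one way to produce such a key is with openssl, translating the standard base64 alphabet to the URL-safe one:

```shell
# Generate 32 random bytes (256 bits), base64-encode, then swap +/ for -_
AUTOMATOR_CONFIG_KEY=$(openssl rand -base64 32 | tr '+/' '-_')
echo "$AUTOMATOR_CONFIG_KEY"
```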
From the Azure portal, create a new Web App by selecting App Services in the search bar and then selecting Create + Web App
Select or create a new Resource Group
Set the Instance Name
Set Publish to "Docker Container"
Set Operating System to "Linux"
Select the region where you would like the service hosted
Select your Linux Plan or create a new one. The pricing plan should be at least Premium V3 P0V3, but the right size will also depend on the end user environment.
Proceed to the Docker section.
In the "Docker" step, make the following selections:
Options: "Single Container"
Image Service: "Docker Hub"
Access Type: "Public"
Image and tag: keeper/automator:latest
Proceed to the Monitoring section.
Select "Enable Application Insights": Yes
Select or create a new Application Insights workspace
Proceed to the Review + create section.
Click "Review + Create" and then click "Create"
After a few minutes, the web app will be created and automatically start up.
Clicking on "Go to Resource" will take you to the container environment.
Make note of the Default domain value. This will be needed to set up and initialize the Automator service.
Go to the Configuration section and select "New application setting"
Depending on the portal version, these settings may instead be located under Environment variables.
Create the following application settings with their respective values:
AUTOMATOR_CONFIG_KEY -> "value from Step 1 above of the setup guide"
AUTOMATOR_PORT -> 8089
SSL_MODE -> none
WEBSITES_PORT -> 8089
Click Apply
Select Diagnostic settings and then select "+ Add diagnostic setting"
Give the diagnostic setting a name.
Select "App Service Console logs"
Select "App Service Application logs"
Select "Send to Log Analytics workspace"
Select or setup a new Log Analytics workspace
Select Logs from the main menu. Click the "X" to close the Queries window.
To see the Docker deployment and startup logs: AppServicePlatformLogs
To see the application error logs: AppServiceConsoleLogs
Select App Service Logs from the main menu under the Monitoring section. Then select File System under Application logging and set a retention period according to your preference.
Click Save
Select Log Stream from the main menu under the Overview section to verify the Automator service is connected and logging correctly
Select Health check from the main menu under the Monitoring section. Then Enable the health check function and set the Path value to "/health". Click Save to save the configuration, and Save again to confirm changes.
In the Networking section you can set up simple access rules or configure Azure Front Door.
Select Networking from the main menu and click on "Enabled with no access restrictions"
Under Access Restrictions, select "Enabled from select virtual networks and IP addresses" and "Allow" unmatched rule action. Click +Add to add inbound access rules.
Under Add Rule, add the inbound firewall rules. You should restrict traffic to the Keeper published IP addresses marked as "Connection verification only" for your respective region per the page below.
Click Add Rule
Click Save to save the configurations
Keeper Commander is required to perform the final step of Automator configuration. It can be run from anywhere; it does not need to be installed on the server.
Create the Automator using a series of commands, starting with automator create
The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.
The output of the command will display the Automator settings, including metadata from the identity provider.
Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team
, team_for_user
and device
).
Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:
Initialize the Automator with the new configuration
Enable the service
At this point, the configuration is complete.
For external health checks, you can use the below URL:
https://<server>/health
Example curl command:
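For instance, against the Web App's default domain (the app name is a placeholder):

```shell
curl https://<app-name>.azurewebsites.net/health
```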
Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.
The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.
How to deploy Keeper Automator in a simple docker environment
This guide provides step-by-step instructions to publish Keeper Automator on any Linux instance that can run Docker.
(1) Install Docker
If you don't have Docker installed, set it up per the instructions on your platform. For example, if you use the yum package installer:
Start the Docker service if it's not running
Then configure the service to start automatically
To allow non-root users to run Docker (and if this meets your security requirements), run this command:
(2) Pull the Automator image
Use the docker pull command to get the latest Keeper Automator image.
(3) Start the service
Use the command below to start the service. The example below listens on port 443.
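One possible invocation, assuming the container's default internal port of 8089 and a container named automator (both the container name and the published port are assumptions to adjust for your environment):

```shell
docker run -d --name automator --restart unless-stopped \
    -p 443:8089 keeper/automator:latest
```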
(4) Update the certificate
Inside the docker container, create a "config" folder.
If your .pfx file is protected by a passphrase, you also need to create a file called ssl-certificate-password.txt and place it into the docker container:
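A sketch of those steps, assuming the container is named automator; the /usr/mybin/config path matches the configuration location referenced in the Automator settings:

```shell
docker exec automator mkdir -p /usr/mybin/config
docker cp ssl-certificate.pfx automator:/usr/mybin/config/
docker cp ssl-certificate-password.txt automator:/usr/mybin/config/
```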
(5) Restart the container with the SSL cert
Now that the certificate is installed, restart the Docker container:
(6) Install Keeper Commander
At this point, the service is running but it is not able to communicate with Keeper yet.
(7) Initialize with Commander
Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create
The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.
The output of the command will display the Automator settings, including metadata from the identity provider.
Note that the "URL" is not populated yet. This is the public URL which the Keeper backend will communicate with. For example, automator.mycompany.com.
Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).
Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:
Initialize the Automator with the new configuration
Enable the service
At this point, the configuration is complete.
For automated health checks, you can use the below URL:
https://<server>/health
Example:
When activating Keeper Automator with AD FS as the identity provider, users will not be able to login until you update the Keeper certificate using the instructions below:
Login to the Keeper Admin Console
Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.
Click on "Export SP Cert".
In the AD FS Management Console select the Keeper Cloud SSO Relying Party Trust properties.
On the "Encryption" tab, replace the old certificate with this new cert.
On the "Signature" tab, Add/Replace the new SP certificate with this new cert.
Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.
The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.
When you stop/start the Keeper Automator service, Docker automatically retains its state.
Please check the Keeper Automator logs; they usually describe the issue. In the Docker environment, you can tail the log file using this command:
You can also connect to the container and check the log file using the below command:
Setting up multiple tenants in a single Automator instance
Keeper Automator supports a multi-tenant configuration, so that a single deployment can perform automations for multiple Keeper Enterprise environments.
For MSP environments, a single Keeper Automator instance can be used to run multiple Managed Companies.
For Enterprise customers, a single instance can process approvals for any number of identity providers.
Once the server is running, you can use it for multiple SSO nodes, even in different enterprises.
The steps for activating one Automator instance for multiple Managed Companies are below:
(1) Login to Commander as the MSP Admin
(2) Switch context to the Managed Company
Find the MC you want to set up, select the ID and then type:
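A hedged example, assuming the Managed Company ID is 1234 (the command names follow Commander's MSP management commands; check your Commander version's help output for the exact names):

```shell
msp-info             # list Managed Companies and their IDs
switch-to-mc 1234    # enter the Managed Company context
```

To return to the MSP context afterward, the corresponding command is switch-to-msp.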
(3) Create an Automator instance
Use the common Automator URL in the "edit" step
For example:
(4) Switch back to MSP
Switch back to the MSP Admin context
For each Managed Company, repeat the above 4 steps.
The steps for activating one Automator instance for multiple Nodes in the same Enterprise tenant are below:
(1) Login to Commander as Admin
(2) Create the Automator Instance
For each Node, use the same "edit" URL. For example:
Then, simply set up another instance with the same URL endpoint:
Note that they have different names and IDs and are assigned to different nodes but they use the same URL.
Repeat step (2) for every node to set up multiple tenants on the same Automator instance.
How to configure Keeper Automator with a custom SSL certificate
Keeper Automator encrypts the communication between the Keeper backend and the Automator service running in the customer's environment.
If a custom certificate is not used, Keeper Automator will generate a self-signed certificate by default.
If SAML is configured to encrypt the request (not just signing), a custom SSL certificate is required.
Keeper Automator requires a valid signed SSL certificate that has been signed by a public certificate authority. The process of generating an SSL certificate varies depending on the provider, but the general flow is documented here.
Follow these steps to create the two certificate files needed for Automator to run, which must be named ssl-certificate.pfx and ssl-certificate-password.txt.
(1) Using the openssl command prompt, generate a private key
(2) Generate a CSR, making sure to use the hostname which you plan to use for Automator. In this case, we will be using automator.lurey.com. The important item here is that the Common Name matches the domain exactly.
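A sketch of steps 1 and 2 with openssl (the key size, file names, and subject fields are examples; the CN must be your real Automator hostname):

```shell
# Step 1: generate a 2048-bit RSA private key
openssl genrsa -out automator.key 2048

# Step 2: create the CSR non-interactively; the CN must exactly match the FQDN
openssl req -new -key automator.key -out automator.csr \
    -subj "/C=US/O=Example Company/CN=automator.lurey.com"
```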
(3) Purchase an SSL certificate (or obtain a free 90-day cert) and submit the CSR to your SSL certificate provider.
Ensure that the SSL certificate created for your Automator instance is only used for this purpose. Do not use a wildcard certificate that is shared with other services.
Choose a URL and create a certificate for a domain that is specific for Automator, e.g. automator.company.com.
The SSL certificate provider will deliver a zip file containing the signed certificate (.crt file) and an intermediate CA certificate. The bundle may have either a .crt or .ca-bundle file extension. Unzip this file into the same location as the .key file you created earlier.
(4) After the certificate has been issued, it needs to be converted using OpenSSL to .pfx format, including the full certificate chain (root, intermediate, and issued certificate).
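Based on the file names described below, the conversion command takes this shape:

```shell
openssl pkcs12 -export -out ssl-certificate.pfx \
    -inkey automator.key \
    -in automator.yourcompany.com.crt \
    -certfile automator.yourcompany.com.ca-bundle
```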
Set your export password when prompted. Then create a new text file called ssl-certificate-password.txt and put the export password into that file and save it.
automator.key is the private key generated in step 1.
automator.yourcompany.com.crt is the signed certificate delivered in step 3.
automator.yourcompany.com.ca-bundle is the CA bundle.
ssl-certificate.pfx is the output file used by Automator, encrypted with a password.
ssl-certificate-password.txt contains the password used to encrypt the .pfx file.
We recommend saving all 5 files in your Keeper vault.
Ensure that your .pfx file contains your issued cert AND the full certificate chain from your provider. If you don't provide a full certificate chain, the communication will fail and Automator will be unable to connect to your URL.
To check the .pfx, use openssl:
openssl pkcs12 -in ssl-certificate.pfx -info
If the .pfx is correct, you will see 3 certificates.
If you only see one certificate, or if you see four or five certificates, the .pfx is incorrect and you need to repeat the process.
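A self-contained way to practice the check: build a throwaway self-signed .pfx (self-signed here purely to demonstrate the counting technique; your real .pfx must come from a public CA and contain the full chain) and count the certificates inside it:

```shell
# Create a throwaway self-signed key and cert
openssl req -x509 -newkey rsa:2048 -keyout demo.key -out demo.crt \
    -days 1 -nodes -subj "/CN=demo.example.com"
# Package them into a password-protected .pfx
openssl pkcs12 -export -out demo.pfx -inkey demo.key -in demo.crt -passout pass:demo

# Count the certificates inside the .pfx
openssl pkcs12 -in demo.pfx -nokeys -passin pass:demo | grep -c 'BEGIN CERTIFICATE'
# prints 1 for this single self-signed cert; a correct Automator .pfx prints 3
```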
(5) Save ssl-certificate.pfx and ssl-certificate-password.txt for the deployment steps later in this guide.
Please also ensure that you have backed up the files in your Keeper vault so that you can refer to these later when updating the service or re-keying the certificate.
Keeper Automator requires a valid signed SSL certificate that has been signed by a public certificate authority. We do not support self-signed certificates. The process of generating an SSL certificate varies depending on the provider, but the general flow is documented here.
Download and install OpenSSL. For convenience, a third party (slproweb.com) provides a popular binary installer, linked below:
During install, the default options can be selected. In the install process, you may be asked to also install a Microsoft Visual Studio extension. Go ahead and follow the instructions to install this extension before completing the OpenSSL setup.
Run the OpenSSL Command Prompt
In your Start Menu there will be an OpenSSL folder. Click on the Win32 OpenSSL Command Prompt.
On an annual basis, you will need to renew your SSL certificate. Most certificate providers will generate a new cert for you. After certificate renewal, replace the .pfx certificate file in your Automator instance and then restart the service. Refer to the documentation for your specific Automator install method for the exact process of updating the file and restarting the service.
If you are using Keeper Automator with AD FS as the identity provider, users will not be able to login until you update the Keeper certificate using the instructions below:
Login to the Keeper Admin Console
Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.
Click on "Export SP Cert".
In the AD FS Management Console select the Keeper Cloud SSO Relying Party Trust properties.
On the "Encryption" tab, replace the old certificate with this new cert.
On the "Signature" tab, Add/Replace the new SP certificate with this new cert.
After certificate renewal, it is sometimes necessary to publish a new SP certificate in your identity provider by following the steps below:
Login to the Keeper Admin Console
Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.
Click on "Export SP Cert" and save the certificate file.
Click on "Export Metadata" and save the metadata file, which also contains the certificate.
Login to your Identity Provider portal and view the SSO configuration for Keeper.
Upload Keeper's SP certificate file (or metadata, if required) following their instructions to update the Service Provider certificate and Save.
The reason for this is that the Automator service essentially becomes the service provider; the SSL certificate generated by the customer is used in the signing process.
If you are updating the SSL certificate in an environment that utilizes application gateways or a load balancer with a custom domain that terminates SSL, you need to also update the certificate on that device.
For Azure deployments using an App Gateway, the .pfx certificate must also be updated in the https listener for the gateway. Go to your Azure > Resource groups > App Gateway > Listeners and upload the new certificate.
Common issues and troubleshooting for your Automator service
There are several reasons why Keeper Commander is unable to communicate with your Automator service:
Ensure that the Automator service is open to Keeper's IP addresses. The list of IPs that must be open is found on the Ingress Requirements page. We recommend also adding your own IP address so that you can troubleshoot the connection.
If you are using a custom SSL certificate, ensure that it is loaded; the Automator log files will indicate whether the certificate was loaded at service restart. If the IP address is open to you, you can run a health check on the command line using curl, for example:
curl https://automator.mycompany.com/health
Check that the subject name of the certificate matches the FQDN.
Check that your SSL certificate includes the intermediate CA certificate chain; this is the most common cause of problems. Keeper will refuse to connect to Automator if the intermediate certificate chain is missing. You can check this using openssl, like the following:
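One common way to inspect the chain the server actually presents is openssl s_client (the hostname is an example); the "Certificate chain" section of its output lists each certificate served:

```shell
openssl s_client -connect automator.mycompany.com:443 -showcerts </dev/null
```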
This command will clearly show the number of certificates in the chain. If there is only a single cert, you did not load the full chain. To resolve this, see Step 4 of the step-by-step instructions page.
This may occur if the health-check request URI does not match the SSL certificate domain. To allow the health check to complete, disable SNI checks on the service by setting disable_sni_check=true in the Automator configuration or by passing the environment variable DISABLE_SNI_CHECK with the value "true".
Installation of Keeper Automator using the Docker Compose method
This guide provides step-by-step instructions to publish Keeper Automator on any Linux instance that can run Docker.
Make sure you already have your SSL Certificate! If not, please follow the steps in the Create SSL Certificate page.
Data is preserved between container updates
Future updates are simple to install and maintain
Instructions for installing Automator using the Docker Compose method are below.
Instructions for installing Docker and Docker Compose vary by platform. Please refer to the official documentation below:
For Amazon Linux 2 instances, a good tutorial on docker-compose installation is here:
Note: The new version of Docker Compose is run using the command docker compose, while the older version uses a dash: docker-compose.
After installing, you may still need to start the Docker service, if it's not running.
Then configure the service to start automatically
To allow non-root users to run Docker (and if this meets your security requirements), run this command:
Save the snippet below as the file docker-compose.yml on your server, in the location where you will be executing docker compose commands.
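A minimal sketch of such a file, written via a heredoc; the image name is the one pulled from Docker Hub earlier, the volume path matches Automator's /usr/mybin/config configuration location, and the host port mapping (443 to the default internal port 8089) is an assumption you may need to adjust:

```shell
cat > docker-compose.yml <<'EOF'
services:
  automator:
    image: keeper/automator:latest
    container_name: automator
    restart: unless-stopped
    ports:
      - "443:8089"
    volumes:
      - ./config:/usr/mybin/config
EOF
```

Start the service from the same directory with docker compose up -d.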
At this point, the service is running but it is not able to communicate with Keeper yet.
(7) Initialize with Commander
Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create
The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.
The output of the command will display the Automator settings, including metadata from the identity provider.
Note that the "URL" is not populated yet. Edit the URL with the FQDN you selected.
Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).
Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:
Initialize the Automator with the new configuration
Enable the service
At this point, the configuration is complete.
For automated health checks, you can use the below URL:
https://<server>/health
Example:
The Automator logs can be monitored by using the Docker Compose command:
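For example, from the directory containing docker-compose.yml:

```shell
docker compose logs -f
```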
When activating Keeper Automator with AD FS as the identity provider, users will not be able to login until you update the Keeper certificate using the instructions below:
Login to the Keeper Admin Console
Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.
Click on "Export SP Cert".
In the AD FS Management Console select the Keeper Cloud SSO Relying Party Trust properties.
On the "Encryption" tab, replace the old certificate with this new cert.
On the "Signature" tab, Add/Replace the new SP certificate with this new cert.
Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.
The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.
Commander Approvals
Keeper Commander, our CLI and SDK platform, is capable of performing Admin Device Approvals automatically, without having to login to the Admin Console. Admin approvals can be configured on any computer that is able to run Keeper Commander (Mac, PC or Linux).
This method does not require inbound connections from the Keeper cloud, so it could be preferred for environments where ingress ports cannot be opened. This method uses a polling mechanism (outbound connections only).
Please see the Installation Instructions here:
You can install the binary versions for Mac/PC/Linux or use pip3.
Enter the Commander CLI using the "keeper shell" command. Or if you installed the Commander binary, just run it from your computer.
Use the "login" command to login as the Keeper Admin with the permission to approve devices. Commander supports SSO, Master Password and 2FA. For the purpose of automation, we recommend creating a dedicated Keeper Admin service account that is specifically used for device approvals. This ensures that any changes made to the user account (such as policy enforcements) don't break the Commander process.
Type "device-approve" to list all devices:
To manually approve a specific device, use this command:
To approve all devices that come from IPs that are recognized as successfully logged in for the user previously, use this command:
To approve all devices regardless of IP address, use this command:
To deny a specific device request, use the "deny" command:
To deny all approvals, remove the Device ID parameter:
To reload the latest device approvals without having to exit the shell, use the "reload" command:
Commander supports an automation mode that runs approvals every X seconds. To set this up, modify the config.json file that is auto-created. This file is located in the OS user's folder under the .keeper folder, for example C:\Users\Administrator\.keeper\config.json on Windows or /home/user/.keeper/config.json on Mac/Linux.
Leave the existing data in the file and add the following lines:
In JSON, every entry except the last must be followed by a comma.
Now when you open Commander (or run "keeper shell"), Commander will run the commands at the specified interval. Example:
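A hedged illustration of the added entries, assuming Commander's timedelay and commands automation keys (600 means every 10 minutes; verify the exact key names against your Commander version's documentation). These lines are merged into the existing JSON object, not a standalone file:

```json
"timedelay": 600,
"commands": ["device-approve --approve"]
```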
Similar to the example above, Commander can automatically approve Team and User assignments that are created from SCIM providers such as Azure, Okta and JumpCloud.
To set this up, simply add one more command, team-approve, to the JSON config file. For example:
Keeper Commander supports "persistent login" sessions which can run without having to login with a Master Password or hard-code the Master Password into the configuration file.
Commands to enable persistent login on a device for 30 days (max):
You can use seconds as the value (e.g. 60 for 60 seconds) or numbers and letters (e.g. 1m for one minute, 5h for 5 hours, and 7d for 7 days).
Also note that typing "logout" will invalidate the session; just "quit" the Commander session to exit.
Once persistent login is set up on a device, the config.json in the local folder will look something like this:
Your installation in Azure is complete.
On your workstation or server, install Keeper Commander CLI. The installation instructions including binary installers are here:
After Commander is installed, launch Keeper Commander, or from an existing terminal you can type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator, or an Admin with the ability to manage the SSO node.
Note that the "URL" is not populated yet. This is the Default domain value noted from the Azure portal.
Make sure you already have your SSL Certificate! If not, please follow the steps in the Create SSL Certificate page.
Copy the ssl-certificate.pfx file created in the Create SSL Certificate guide into the container.
On your workstation, server or any computer, install the Keeper Commander CLI. The installation instructions including binary installers are here:
After Commander is installed, you can type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator, or an Admin with the ability to manage the SSO node.
We recommend restricting network access to the service. Please see the Ingress Requirements page for a list of IP addresses to allow.
You can obtain a quick, easy, and free SSL certificate from several providers. Or, if you prefer to have more control over each step of the process, you can proceed with the following instructions.
If you don't have a provider already, any commercial certificate authority will work; the least expensive SSL cert for one domain is fine.
On Windows, make sure to launch the OpenSSL command prompt and navigate to the folder that has your files.
(6) Review the annual certificate update process.
Install the version at the bottom labeled "Win32 OpenSSL vX.X.X Light"
On your workstation, server or any computer, install the Keeper Commander CLI. This is just used for initial setup. The installation instructions including binary installers are here:
After Commander is installed, you can type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator, or an Admin with the ability to manage the SSO node.
We recommend restricting network access to the service. Please see the Ingress Requirements section for a list of IP addresses to allow.
Additional information about persistent login sessions and various options is available in the Keeper Commander documentation.
There are many ways to customize, automate and process automated commands with Keeper Commander. To explore the full capabilities, see the Keeper Commander documentation.
| Keeper Tenant Region | IP1 | IP2 |
|---|---|---|
| US | 54.208.20.102/32 | 34.203.159.189/32 |
| US GovCloud | 18.252.135.74/32 | 18.253.212.59/32 |
| EU | 52.210.163.45/32 | 54.246.185.95/32 |
| AU | 3.106.40.41/32 | 54.206.208.132/32 |
| CA | 35.182.216.11/32 | 15.223.136.134/32 |
| JP | 54.150.11.204/32 | 52.68.53.105/32 |
| Type | Name | Value | TTL |
|---|---|---|---|
| A | gcpautomator.lurey.com | xx.xx.xx.xx | 60 |
| Keeper Tenant Region | IP1 | IP2 |
|---|---|---|
| US | 54.208.20.102/32 | 34.203.159.189/32 |
| US GovCloud | 18.252.135.74/32 | 18.253.212.59/32 |
| EU | 52.210.163.45/32 | 54.246.185.95/32 |
| AU | 3.106.40.41/32 | 54.206.208.132/32 |
| CA | 35.182.216.11/32 | 15.223.136.134/32 |
| JP | 54.150.11.204/32 | 52.68.53.105/32 |
Instructions for upgrading your Automator instance to v3.2
Version 3.2+ incorporated several new features:
Team Approvals (Team Creation)
Team User Approvals (Assigning Users to Teams)
All settings can be configured as environment variables
Support for simplified Azure Container App deployment
Support for simplified AWS ECS Service deployment
HSTS is enabled for improved HTTPS security
IP address filtering for device approval and team approval
Optional rate limiting for all APIs
Optional filtering by email domain
Optional binding to specific network IPs
Version 3.2 introduced Team approvals and Team User approvals. This means that teams and users who are provisioned through SCIM can be immediately processed by the Automator service (instead of waiting for the admin to login to the console).
To activate this new feature:
Update your Automator container to the latest version
Use the automator edit command in Keeper Commander to instruct the service to perform device approvals and also perform Team User approvals:
Example:
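A hedged sketch, assuming an instance named "My Automator" and a repeatable --skill flag (the flag syntax is an assumption; the skill names match those used elsewhere in this document):

```shell
automator edit "My Automator" --skill=device --skill=team_for_user
```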
With the skill enabled, Automator is triggered to approve team users when the user logs into their vault.
When team creation is requested by the identity provider via SCIM messaging, the request is not fully processed until someone can generate an encryption key (to preserve Zero Knowledge). This is normally processed when an admin logs into the Keeper Admin Console.
When team approvals is activated on the Keeper Automator service, teams are now created automatically when any assigned user from the team logs in successfully to the Keeper Vault. Therefore, teams will not appear in the environment until at least one user from that team logs into their vault.
This makes configuration easier when installing Automator in Azure Containers or other Docker-like containers where access to the settings file is difficult.
In Docker, Azure Containers, or other environments that use the docker-compose.yml file, you can set environment variables in the docker compose file, for example:
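For example (the service name automator is an assumption; the variable names are taken from this document's settings list):

```yaml
services:
  automator:
    image: keeper/automator:latest
    environment:
      - AUTOMATOR_PORT=8089
      - DISABLE_SNI_CHECK=true
```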
After editing the docker-compose.yml file, you will need to recreate the container if the environment variables have changed; just restarting the container will not incorporate the changes.
See this page for all of the new and advanced features / settings for the Automator service.