
SSO Connect Cloud


Overview

High level overview of Keeper SSO Connect™ Cloud

End-to-End Password Protection Across Your Data Environment

Simply by authenticating through your existing IdP, your employees gain access to all of the capabilities of the top-rated Keeper password management platform, including:

  • Secure digital vault that can be accessed from any device, running any OS

  • Automatic password generation & autofill on all devices

  • Compatibility on any system, browser or app

  • Zero-knowledge encryption of vault data

This service does not require any on-premises or customer cloud-hosted services and there are no Master Passwords. Configuration is done directly between the IdP and Keeper's Admin Console.

To preserve Zero Knowledge, an Elliptic Curve public/private key pair is generated for each device. The private key on the device encrypts and decrypts the user's vault. Signing into a new device requires a key exchange that is processed by our Keeper Push feature or approved by a designated Admin. Automated admin approvals can be configured in several different ways.

Setup Steps

Important: SSO users and provisioning must exist in a dedicated node that you will create (not in the root node). Before completing these steps, create a new node as shown in the image below.

Keeper SSO Connect Cloud can be rolled out in 3 easy steps:

  1. Create a SSO Connect Cloud instance on the Keeper Admin Console under Provisioning

  2. Exchange metadata with your SAML identity provider

  3. Set up automated provisioning and/or manually provision users to Keeper

Device Approvals

An Administrative Permission called "Approve Devices" allows an Administrator to perform device approvals. Admin Approvals can also be automated. See the Device Approval section for details.

A unique "device" includes physical devices as well as browsers and browser profiles.

Benefits

From an administrator's perspective, the cost, risk & labor saving benefits of Keeper SSO Connect Cloud are significant:

  • Easy setup, all in one place in Keeper’s existing Admin Console.

  • No hosted software to integrate with the IdP

  • No additional server costs

  • No patching software

  • Eliminates a potential single point of failure

  • Available 24/7/365 on Keeper’s high availability systems

Keeper SSO Connect Cloud

Enhance and Extend Your SSO and Passwordless Solution

Keeper SSO Connect is a Cloud-based SAML 2.0 service that seamlessly and quickly integrates with your existing Single Sign-On and Passwordless solution - enhancing and extending it with zero-knowledge password management and encryption. Keeper supports all popular SSO IdP platforms such as Microsoft Entra ID / Azure, Okta, Google Workspace, Centrify, Duo, OneLogin, Ping Identity, JumpCloud and many more.

Product Page

https://www.keepersecurity.com/keeper-sso-connect.html

IdP Compatibility

Keeper SSO Connect, included with Keeper Enterprise, seamlessly integrates with all popular SSO IdP platforms.

Identity Provider Compatibility

In addition to SSO providers, Keeper also seamlessly integrates with all popular Passwordless authentication platforms that support SAML 2.0 including Duo, HYPR, Trusona, Octopus, Traitware and Veridium.

Why Keeper SSO Connect

Pairing your SSO solution with Keeper's secure password manager solves several major functional and security gaps.

Use Case                    Keeper Password Manager      SSO Identity Provider
--------------------------  ---------------------------  ---------------------
Password-Based Apps         ✅                            -
Shared Passwords & Secrets  ✅                            -
Encrypted Data Storage      ✅                            -
Social Media Sites          ✅                            -
Native Apps                 ✅                            -
Offline Access              ✅                            -
SSH Keys                    ✅                            -
Encrypted Private Files     ✅                            -
Zero-Knowledge Encryption   ✅                            -
SAML-Based Apps             ✅ [via SSO Connect Cloud]    -

RSA SecurID Access

How to configure Keeper SSO Connect Cloud with RSA SecurID Access for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

Keeper Security is RSA SecurID Access Certified.

RSA SecurID Access integrates RSA Authentication Manager and the RSA Cloud Authentication Service. In this setup, the Cloud Authentication Service can be used as an identity provider in conjunction with Keeper SSO Connect. Detailed documentation is provided on the RSA website via the links below.

RSA SecurID Access Overview

Keeper Password Manager Integration Guides

Device Approvals

SSO Cloud device approval system

Overview

Device Approvals are a required component of the SSO Connect Cloud platform. Approvals can be performed by users, admins, or automatically using the Keeper Automator service.

For customers who authenticate with Keeper SSO Connect Cloud, device approval performs a key transfer in which the user's encrypted data key is delivered to the device and then decrypted locally using the device's elliptic curve private key.

Technical Details

Keeper SSO Connect Cloud provides Zero-Knowledge encryption while retaining a seamless login experience with any SAML 2.0 identity provider.

When a user attempts to log in on a device that has never been used before, an Elliptic Curve private/public key pair is generated on the new device. After the user authenticates successfully with their identity provider, a key exchange must take place in order for the user to decrypt the vault on their new device. We call this "Device Approval".

A browser running in Guest, Private or Incognito mode identifies itself to Keeper as a new device each time it is launched, and therefore requires a new device approval.

To preserve Zero Knowledge and ensure that Keeper's servers do not have access to any encryption keys, we developed a push-based approval system that can be performed by the user or a designated Administrator. Keeper also allows customers to host a service which performs the device approvals and key exchange automatically, without any user interaction.
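For illustration only, here is a minimal command-line sketch of this style of elliptic curve key exchange using the openssl CLI. It is a conceptual sketch, not Keeper's actual implementation, and the file names are hypothetical:

# Illustrative sketch only - not Keeper's implementation; file names are hypothetical
# Generate an elliptic curve (P-256) key pair, as a new device would:
openssl ecparam -name prime256v1 -genkey -noout -out device_private.pem
openssl ec -in device_private.pem -pubout -out device_public.pem

# An approving party holding its own key pair can derive a shared secret against
# the new device's public key (ECDH); that secret can then protect the key transfer:
openssl pkeyutl -derive -inkey approver_private.pem -peerkey device_public.pem -out shared_secret.bin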

Approval Methods

Device approval methods include the following:

  • Keeper Push (using push notifications) to existing user devices

  • Admin Approval via the Keeper Admin Console

  • Automatic approval via Keeper Automator service (preferred)

  • Semi-automated Admin Approval via Commander CLI

Migrate from OnPrem

How to migrate from Keeper SSO On-Prem to Cloud SSO

See the instructions below:

F5

How to configure Keeper SSO Connect Cloud with F5 BIG-IP APM for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

F5

On the F5 BIG-IP APM, configure a new SAML IdP service for your Keeper platform. Navigate to Access Policy > SAML > BIG-IP as IdP > Local IdP Services, select your applicable IdP connection point and click "Export Metadata".

Import the metadata file exported from F5 BIG-IP APM into the SSO Connect Cloud instance and select F5 as the IDP Type.

Select Save to save the configuration and verify all settings look correct. Export the Keeper SSO Connect Cloud Metadata file for configuration of F5 BIG-IP APM from the Export Metadata link.

Your Keeper SSO Connect setup is now complete!

SecureAuth

How to configure Keeper SSO Connect Cloud with SecureAuth for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

SecureAuth can be configured using the same instructions as in the Other SAML 2.0 Providers section. Please follow that guide in order to set up the SecureAuth environment.

For reference, use the SecureAuth guide located here:

A few additional important items to note regarding SecureAuth:

  • Ensure that "By Post" is selected in the Connection Type section:

  • Ensure that "Sign SAML Assertion" and "Sign SAML Message" are selected.

  • Ensure the Entity ID of the IdP metadata matches the SAML response from SecureAuth.

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO-enabled node, they need to log into the Keeper vault initially by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering their master password.

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and ensure that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found here.

Passwordless Providers

Passwordless configuration for SSO Connect Cloud

The previous section of Admin Console Configuration applies to every SAML 2.0 compatible passwordless provider. To help with configuration of common passwordless providers, we have added some helpful screens in this next section.

Keeper is compatible with all SAML 2.0 SSO passwordless authentication products. You can follow the step-by-step instructions for a similar provider, as the setup flow is generally the same.

(If you create a setup guide for your identity provider, please share it with us and we'll post it here!)

Veridium

How to configure Keeper SSO Connect Cloud with Veridium for Passwordless login to Keeper.

Please complete the steps in the Admin Console Configuration section first.

(1) Add a new service provider

From the Veridium interface, click on Add Service Provider.

(2) Download Keeper metadata

On the Keeper Admin Console, export the SAML Metadata file.

Go to View -> Export Metadata

(3) Upload Metadata to Veridium

In the service provider name box enter “Keeper”, upload the metadata file from Keeper and select “Email” as the NameID format.

(4) Map Attributes

Map the firstname, lastname and mail attributes and click “Save”.

Integration is now complete. A video demo of the Veridium login flow can be seen below:

Keeper Push

Keeper Push is a method of SSO device approval using existing devices

Overview

Users can approve their additional devices by using a previously approved device. For example, if you are already logged into your Web Vault on your computer and are logging into your phone app for the first time, you will get a device approval prompt on your Web Vault with the mobile device's information, which you can approve or deny. Device Approvals perform an encryption key exchange that allows a user on a new device to decrypt their vault.

Keeper Push Method

Keeper Push is a method of approval that the user handles for themselves. Selecting "Keeper Push" will send a notification to the user's approved devices. For mobile and desktop apps, the push will show as a pop-up, and the user can simply accept the device approval.

Here's an example of Keeper Push approval using mobile as the approver device:

Steps to Using Keeper Push

(1) Select Keeper Push

(2) User waits for the push approval to appear on the device on which they are already logged in.

(3) User must be already logged into a different, previously approved device in order to receive the notification.

Admin Approval

Admins can approve end-user SSO Cloud devices

Admin Approval Method via Admin Console

Selecting "Admin Approval" will send the request to the Keeper Administrator with the "Approve Devices" permission. The Admin can perform the approval through the Admin Console "Approval Queue" screen or by being logged into the Admin Console at the time of the request.

(1) User selects "Admin Approval"

(2) User waits for approval or comes back later

(3) Administrator logs into the Admin Console and visits the Approval Queue

(4) Admin reviews the information and approves the device

Select the device to approve and then click "Approve". If the user is waiting, they will be instantly permitted to login. Otherwise, the user can login at a later time without any approval (as long as they don't clear out their web browser cache or reset the app).

Approve Devices Role Permission

A special role permission called "Approve Devices" provides a Keeper Administrator the ability to approve a device.

(1) Go to Roles within the root node or the SSO node

(2) Select the gear icon to control the Admin Permissions for the selected role.

(3) Assign "Approve Devices" permission

Now, any user added to this role is able to login to the Admin Console to perform device approvals.

As with any administrative permission, follow the principle of least privilege.

Ingress Requirements

Ingress configuration for Keeper Automator

Keeper Automator can be deployed in many different ways: on-prem, cloud or serverless.

Data Flow Diagram

Network Firewall Setup

In your firewall inbound traffic rules, set one of the following rulesets:

For US Data Center Customers:

  • Inbound TCP port 443 from 54.208.20.102/32

  • Inbound TCP port 443 from 34.203.159.189/32

US / GovCloud Data Center Customers:

  • Inbound TCP port 443 from 18.252.135.74/32

  • Inbound TCP port 443 from 18.253.212.59/32

For EU / Dublin Data Center Customers:

  • Inbound TCP port 443 from 52.210.163.45/32

  • Inbound TCP port 443 from 54.246.185.95/32

For AU / Sydney Data Center Customers:

  • Inbound TCP port 443 from 3.106.40.41/32

  • Inbound TCP port 443 from 54.206.208.132/32

For CA / Canada Data Center Customers:

  • Inbound TCP port 443 from 35.182.216.11/32

  • Inbound TCP port 443 from 15.223.136.134/32

For JP / Tokyo Data Center Customers:

  • Inbound TCP port 443 from 54.150.11.204/32

  • Inbound TCP port 443 from 52.68.53.105/32

In addition, you may want to allow traffic from your office network (for the purpose of testing and health checks).

Make sure to create a rule for each IP address listed based on your Keeper geographic data center region.
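As a sketch, the iptables rules below allow inbound TCP port 443 from the two US data center addresses listed above; adjust the addresses for your Keeper region and translate the rules to your own firewall or cloud security group tooling:

# Example only: allow the Keeper US data center IPs to reach the Automator service on port 443
iptables -A INPUT -p tcp -s 54.208.20.102/32 --dport 443 -j ACCEPT
iptables -A INPUT -p tcp -s 34.203.159.189/32 --dport 443 -j ACCEPT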

System Architecture

SSO Connect Cloud System Architecture

Graphic Assets

Keeper Password Manager graphic assets for IdP configuration

This page contains various graphic assets of Keeper icons and logos for use in your identity provider, if required.

Zip file with various sizes:

512x512 PNG Square Icon: https://keeper-email-images.s3.amazonaws.com/common/512x512_icon.png

256x256 PNG Square Icon: https://keeper-email-images.s3.amazonaws.com/common/256x256_icon.png

128x128 PNG Square Icon: https://keeper-email-images.s3.amazonaws.com/common/128x128_icon.png

Keeper Logo - JPG white background: https://keeper-email-images.s3.amazonaws.com/common/keeper-no-tag.jpg

Keeper Logo - PNG transparent background: https://keeper-email-images.s3.amazonaws.com/common/keeper-no-tag.png

Keeper Logo - JPG white on black background: https://keeper-email-images.s3.amazonaws.com/common/keeper-header-logo-short.jpg
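If you prefer the command line, any of the assets above can be downloaded directly, for example:

curl -O https://keeper-email-images.s3.amazonaws.com/common/512x512_icon.png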


Troubleshooting

Common issues and troubleshooting for your Automator service

Unable to communicate with the Automator service

There are several reasons why Keeper Commander is unable to communicate with your Automator service:

  • Ensure that the Automator service is open to Keeper's IP addresses. The list of IPs that must be open is found on the Ingress Requirements page. We recommend also adding your own IP address so that you can troubleshoot the connection.

  • If you are using a custom SSL certificate, ensure that the SSL certificate is loaded. Check the Automator log files, which will indicate whether the certificate was loaded when the service restarted. If the IP address is open to you, you can run a health check on the command line using curl, for example: curl https://automator.mycompany.com/health

  • Check that the subject name of the certificate matches the FQDN.

  • Check that your SSL certificate includes the CA intermediate certificate chain. This is the most common cause of connection problems. Keeper will refuse to connect to Automator if the intermediate certificate chain is missing. You can check this using openssl as follows:

openssl s_client -showcerts -servername automator.company.com -connect automator.company.com:443

This command will clearly show you the number of certificates in the chain. If there's only a single cert, this means you did not load the full chain. To resolve this, see Step 4 of the Custom SSL Certificate step by step instructions page.
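As a sketch (your actual file names will differ), the full chain is typically built by concatenating the server certificate with the intermediate CA certificate(s) before loading it into Automator:

# Hypothetical file names - append the intermediate CA cert(s) to the server certificate
cat server_certificate.crt intermediate_ca.crt > fullchain.crt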

400 Error in Health Checks

This may occur if the health check request URI does not match the SSL certificate domain. To allow the health check to complete, you need to disable SNI checks on the service. This can be accomplished by setting disable_sni_check=true in the Automator configuration or by passing in the environment variable DISABLE_SNI_CHECK with the value "true".
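For example, if you run the container directly with Docker, the variable can be passed on the command line (the image name below is a placeholder for your Automator image):

# Placeholder image name; adjust ports and other settings to match your deployment
docker run -d -e DISABLE_SNI_CHECK=true <automator-image>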

Links & Resources

  • Migration from On-Prem to SSO Cloud

  • SSO Certificate Renewal

  • Keeper Security Website

  • Keeper Enterprise Guide

  • SSO Connect Product Page

  • Release Notes


Logout Configuration

Configuration options for Single Logout (SLO) between Keeper and IdP

Different customers may have different desired behavior when a user clicks on "Logout" from their Keeper vault. There are two choices:

Option 1: Don't logout from IdP

  • Clicking "Logout" from the Keeper Vault will just close the vault but stay logged into the Identity Provider.

  • Logging into the Keeper Vault again will defer to the Identity Provider's login logic. For example, Okta has configurable Sign-On rules which allow you to prompt the user for their MFA code before entering the application. See your identity provider sign-on logic to determine the best experience for your users.

Option 2: Logout from IdP

  • Clicking "Logout" from the Keeper Vault will also logout the user from the Identity Provider.

  • This may create some frustration with users because they will also logout from any IdP-connected services.

  • Users would need to be directed to simply close the vault and not click "Logout" if they don't like this behavior.

How to Enable Single Logout (SLO)

  • If your identity provider has a "Single Logout" option, then you can turn this feature ON from the identity provider configuration screen. For example, Okta has a "Single Logout" checkbox and they require that the "Keeper SP Certificate" is uploaded. After changing this setting, you will need to export the metadata from the IdP and import it back into the Keeper SSO configuration screen.

How to Disable Single Logout (SLO)

  • If your identity provider has a "Single Logout" option, then you can turn this feature OFF from the identity provider configuration screen and upload the new metadata file into Keeper.

  • If the IdP does not have a configuration screen on their user interface, you can just manually edit the IdP metadata file (screenshot below). In a text editor or vim, remove the lines highlighted below that represent the SLO values. Then save the file and upload the metadata into the Keeper SSO configuration screen.

Deleting the SingleLogoutService Field from Metadata

SSO Identity Providers

Identity Provider configuration for SSO Connect Cloud

The previous section of Admin Console Configuration applies to every SAML 2.0 compatible identity provider. To help with any IdP-specific configuration of common identity providers, we have added some helpful screens in this next section.

If your Identity Provider is not listed here, don't worry. Keeper is 100% compatible with all SAML 2.0 SSO identity providers and Passwordless authentication products. You can follow the step-by-step instructions for a similar provider, as the setup flow is generally the same.

(If you create a setup guide for your identity provider, please share it with us and we'll post it here!)

Keeper Automator Service

Automatic device approval service for SSO Connect Cloud environments

Overview

The Keeper Automator is a self-hosted service which performs cryptographic operations including device approvals, team approvals and team user assignments.

Once Automator is running, users can seamlessly access Keeper on a new (not previously approved) device after a successful authentication with your identity provider, without any further approval steps. Without the Automator service, users and admins can still perform manual device approvals through Push Approval methods.

Keeper Automator is a lightweight service that can be deployed in your cloud or on-prem environment.

Why is this needed?

Keeper SSO Connect provides seamless authentication into the Keeper vault using your identity provider. Normally a user must approve their new device, or an Admin can approve a new device for a user. The Automator service is completely optional, created for Admins who want to remove any friction associated with device approvals.

To preserve Zero Knowledge and automate the transfer of the Encrypted Data Key (EDK) to the user's device, a service must be run which is operated by the Enterprise (instead of hosted by Keeper). The service can be run several different ways, either in the cloud or self-hosted.

An in-depth explanation of the SSO Connect Cloud encryption model is documented here.

Installation Options

Depending on your environment, select from one of the following installation methods. The Azure, AWS and Google Cloud container-based methods are the best choices if you use one of these cloud services.

Installation Method: Azure Container App

Installation Method: Azure App Services

Installation Method: Azure App Gateway

Installation Method: AWS Elastic Container Service

Installation Method: AWS Elastic Container Service with KSM

Installation Method: Google Cloud with GCP Cloud Run

Installation Method: Standalone Java

Installation Method: Docker

Installation Method: Docker Compose

Installation Method: Kubernetes

Installation Method: Windows Service


Automator Security

Using the Automator service creates a frictionless experience for users; however, it requires that you have fully secured your identity provider.

Please refer to our guide to securing your Keeper environment.

Version 17.0 Overview

Instructions for upgrading your Automator instance to v17.0

Overview

Version 17.0+ incorporates several new features:

  • Team Approvals (Team Creation)

  • Team User Approvals (Assigning Users to Teams)

  • All settings can be configured as environment variables

  • Support for simplified Azure Container App deployment

  • Support for simplified AWS ECS Service deployment

  • HSTS is enabled for improved HTTPS security

  • IP address filtering for device approval and team approval

  • Optional rate limiting for all APIs

  • Optional filtering by email domain

  • Optional binding to specific network IPs

Team User approvals

Teams and users who are provisioned through SCIM can be immediately processed by the Automator service (instead of waiting for the admin to login to the console).

To activate this new feature:

  • Update your Automator container or .zip file to the latest version

  • Use the automator edit command in Keeper Commander to instruct the service to perform device approvals and also perform Team User approvals:

Example:

automator edit --skill=team --skill=team_for_user --skill=device "My Automator"
automator setup "My Automator"
automator init "My Automator"
automator enable "My Automator"

With the skill enabled, Automator is triggered to approve team users when the user logs into their vault.

Team Approvals

When team creation is requested by the identity provider via SCIM messaging, the request is not fully processed until someone can generate an encryption key (to preserve Zero Knowledge). This is normally processed when an admin logs into the Keeper Admin Console.

When Team Approvals are activated on the Keeper Automator service, teams are created automatically when any assigned user from the team logs in successfully to the Keeper Vault. Therefore, teams will not appear in the environment until at least one user from that team logs into their vault.

All settings can be configured as environment variables

This makes configuration easier when installing Automator in Azure Containers or other Docker-like containers where access to the settings file is difficult.

In Docker, Azure Containers, or other environments that use the docker-compose.yml file, you can set environment variables in the docker compose file, for example:

services:
  automator:
    container_name: "az-autodock"
    environment:
      - AUTOMATOR_PORT=8090
      - AUTOMATOR_HOST=10.0.0.4
      - DISABLE_SNI_CHECK=true

After editing the docker-compose.yml file, you will need to rebuild the container if the environment variables have changed. Just restarting the container will not incorporate the changes.
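For example, with Docker Compose the container can be recreated so that the new environment variables take effect (using the service name from the example above):

docker compose up -d --force-recreate automator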

Advanced Features

See this page for all of the new and advanced features and settings for the Automator service.


Auth0

How to configure Keeper SSO Connect Cloud with Auth0 for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

Auth0 SSO Configuration

Login to the Admin section of the Auth0 portal.

Select the Applications tab and click Create Application. Choose Regular Web Applications.

Applications > Create Application > Regular Web Applications

Next, go to the Addons tab and click SAML2 WEB APP.

Addons > SAML2 WEB APP

On the Settings page that comes up next, you will need the “Assertion Consumer Service (ACS) Endpoint” that comes from the Keeper Admin Console.

Example Assertion Consumer Service (ACS) Endpoint: https://keepersecurity.com/api/rest/sso/saml/XXXXXXXX

This value can be found under the SSO Connect Cloud configuration as part of the Service Provider information, as seen below:

View Configuration
Copy the Assertion Consumer Service (ACS) Endpoint

Paste the Assertion Consumer Service (ACS) Endpoint into the Application Callback URL field in the Auth0 screen.

Next, remove the sample JSON in the SAML2 Web App editor window, and replace with the following:

{
  "audience": "https://keepersecurity.eu/api/rest/sso/saml/XXXXX",
  "mappings": {
    "email": "Email",
    "given_name": "First",
    "family_name": "Last"
  },
  "createUpnClaim": false,
  "passthroughClaimsWithNoMapping": false,
  "mapUnknownClaimsAsIs": false,
  "mapIdentities": false,
  "nameIdentifierFormat": "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress",
  "nameIdentifierProbes": [
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
  ]
}

The value for “audience” is the Entity ID. This can also be found under the SSO Connect Cloud configuration as part of the Service Provider information:
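For example, for the US data center the Entity ID takes the same form as the ACS endpoint shown earlier (https://keepersecurity.com/api/rest/sso/saml/XXXXXXXX); the JSON sample above happens to show an EU (keepersecurity.eu) value, so be sure to use the value displayed in your own console.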

Copy the IDP Initiated Login Endpoint

Once you've added the Entity ID, you can click the Debug button to verify there are no formatting issues.

Next, scroll down to the bottom of the SAML2 Web App window and click Save.

Save changes made to the SAML2 Web App settings

Next, click on the Usage tab and download the Identity Provider Metadata file.

Download IdP metadata

On the Keeper side, edit the SSO configuration and select GENERIC as the IDP Type. You can upload the metadata.xml file into the Keeper SSO Connect interface by browsing to or dragging and dropping the file into the Setup screen:

Edit the SSO Configuration
Drag and Drop the Metadata File you downloaded from Auth0 into Keeper

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO-enabled node, they need to log into the Keeper vault initially by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering their master password.

Initially select 'Enterprise SSO Login'

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication. They won't have to enter the Enterprise Domain.

If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and ensure that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found here.

DUO SSO

How to configure Keeper SSO Connect Cloud with DUO SSO for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

Duo Setup

These instructions assume Duo has already been successfully enabled and configured with an authentication source (Active Directory or IdP). To activate Duo SSO, go to your Duo Admin Panel and visit the "Single Sign-On" section.

Step 1: DUO SSO Configuration

Log in to the Duo Admin Panel and click Protect an Application. Search for Keeper and choose Keeper Security with type "2FA with SSO hosted by Duo (Single Sign-On)" in the applications list then click "Protect" (shown below as Configure).

Protect Keeper Security SSO Type

Step 2: Metadata

The Download section is where you can download the SAML metadata file to upload into your SSO provisioning method.

Download DUO Metadata file

Back on the Keeper Admin console, locate your DUO SSO Connect Cloud Provisioning method and select Edit.

Edit DUO SSO Provisioning Method

Scroll down to the Identity Provider section, set IDP Type to DUO SSO, select Browse Files and select the DUO Metadata file previously downloaded.

Still within the Keeper Admin Console, exit Edit View and select View on your DUO SSO Connect Cloud Provisioning method. Within the Service Provider section you will find the metadata values for the Entity ID, IDP Initiated Login Endpoint and Assertion Consumer Service (ACS) Endpoint.

Single Logout Service (SLO) Endpoint is optional.

View DUO SSO Provisioning Method

Return to the application page in your Duo Admin Panel, then copy and paste the Entity ID, Login Endpoint and ACS Endpoint into the Service Provider section.

Keeper Metadata Info

Step 3: Map User Attributes

Within the SAML Response section, scroll down to Map attributes and map the following attributes.

Ensure that 3 attributes ("First", "Last" and "Email") are configured with the exact spelling as seen below.

User Attributes

Step 4: Policy (optional)

The Policy section defines when and how users will authenticate when accessing this application. Your global policy always applies, but you can override its rules with custom policies.

User or Group Policy

Step 5: Global Policy

Within the Global Policy section, review, edit and verify any Global Policy settings as required by your Duo and/or Keeper administrator.

Success! Your Keeper Security EPM - Single Sign-On setup is now complete!

Troubleshooting

If you need assistance implementing the Keeper Security EPM - Single Sign-On application within your DUO environment, please contact the Keeper support team.

Moving Existing Users to Duo SSO

Users created in the root node (top level) in the Keeper Admin Console will need to be moved to the SSO node if you want the users to log in with Duo. An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO enabled node, they can login to the Keeper vault by simply typing their email address and clicking "Next". If this does not work, please ensure that your email domain (e.g. company.com) has been reserved to your enterprise and ensure that Just-In-Time provisioning is enabled.

To onboard with the Enterprise Domain, the user can select the "Enterprise SSO" pull down and type in the Enterprise Domain configured in the Keeper Admin Console.

Initially select 'Enterprise SSO Login'

Once the user has authenticated with SSO for the first time, they only need to use their email address next time to initiate SSO authentication.

If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and ensure that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found here.

OneLogin

How to configure Keeper SSO Connect Cloud with OneLogin for seamless and secure SAML 2.0 authentication and SCIM provisioning.

Please complete the steps in the Admin Console Configuration section first.

OneLogin Setup:

  1. Login to the OneLogin portal.

Log into OneLogin.

2. Select Administration to enter the admin section.

3. From the OneLogin menu, select Applications, then Add App.

In the Search field, do a search for Keeper Password Manager and select it from the search result.

Add Keeper Password Manager

4. On the Add Keeper Password Manager screen, click Save.

5. The next step is to download the SAML Metadata from OneLogin. Select the down arrow on the MORE ACTIONS button and select SAML Metadata.

Save SAML Metadata

Drag and drop or browse to this saved file on the SAML Metadata Section of the Single Sign-On with SSO Connect™ Cloud section on the Keeper Admin Console.

Upload Metadata

6. On the Keeper Admin Console, copy the Assertion Consumer Service (ACS) Endpoint field.

7. Back on the OneLogin Configuration tab, paste in the Keeper SSO Connect Assertion Consumer Service (ACS) Endpoint field and then click Save.

Paste Assertion Consumer Service Endpoint

8. If SCIM provisioning is desired, go back to the Keeper Provisioning tab, click "Add Method" and select SCIM. If not, skip to step 12.

Add SCIM Method

9. Click Generate then copy the URL and Token.

Click Generate

10. Paste the "URL" into the SCIM Base URL, and paste the "Token" into the SCIM Bearer Token.

11. On the Keeper Admin Console make sure to Save the SCIM token.

For more detailed configuration of SCIM visit the User and Team Provisioning section in the Enterprise Guide

12. Click Save and the integration is complete.

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO-enabled node, they need to log into the Keeper vault initially by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering their master password.

Initially select 'Enterprise SSO Login'

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

They won't have to enter the Enterprise Domain. If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and ensure that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found here.

Ping Identity (PingOne)

How to configure Keeper SSO Connect Cloud with Ping Identity for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

Ping Identity Configuration

Login to the Ping Identity portal.

In your existing Environment click Manage Environment.

On the left, click Applications > Application Catalog > Search "Keeper" and select "Keeper Password Manager"

On the Application Details page, add the following data:

  • Keeper Security Domain: keepersecurity.com

  • Keeper Security Identifier: Can be found in the admin console under Entity ID https://keepersecurity.com/api/rest/sso/saml/<Identifier>

    • Log in to the Keeper Admin Console at https://keepersecurity.com/console/.

    • In the left panel, go to Admin and select a sub-node (not the root).

    • Navigate to Provisioning → Add Method → Single Sign-On with SSO Connect® Cloud.

    • Enter a configuration name, add your domain, and click Save.

    • After the SSO configuration is created, click the three-dot menu (⋮) next to it and select View to display the Entity ID.

  • Once that is complete we can save and move on to the next steps.

Next, we can add the Groups who will be accessing the Keeper Application and click Save.

Click Download Metadata as we will upload this to Keeper in the next step.

On the Edit screen of the Keeper SSO Connect Cloud provisioning select "Generic" as the IDP Type and upload the saml2-metadata-idp xml file into the Keeper SSO Connect interface by browsing to or dragging and dropping the file into the Setup screen:

The Keeper Application should be added and enabled.

Your Keeper SSO Connect setup is now complete!

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO-enabled node, they need to log into the Keeper vault initially by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering their master password.

Initially select 'Enterprise SSO Login'

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

They won't have to enter the Enterprise Domain. If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and ensure that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found here.

PingOne

How to configure Keeper SSO Connect Cloud with PingOne for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first. Legacy Ping Identity users who are not on PingOne should view our Ping Identity documentation.

PingOne

Login to the PingOne portal at https://admin.pingone.com/.

Login to PingOne

From the PingOne console menu, select Applications > Application Catalog

Search "Keeper" and click on the "Keeper Password Manager - Cloud SSO" link to add the Keeper Password Manager application

Click Setup to proceed to the next step

Click "Continue to Next Step"

From the Keeper Admin Console, view the PingOne SSO Connect Cloud entry and click Export Metadata and save it in a safe location for future use. Also click Export SP Cert and save it in a safe location for future use.

From the PingOne Admin Console, click Select File next to "Upload Metadata" and browse to the saved metadata file from the Keeper Admin Console. This should populate the "ACS URL" and "Entity ID" fields with the proper datapoints.

Click on Choose File next to "Primary Verification Certificate" and browse to the saved .crt file from the Keeper Admin Console. Click on the checkbox next to "Encrypt Assertion" and then click Choose File next to "Encryption Certificate". Browse to the same saved .crt file from the Keeper Admin Console.

Validate the certificate and click "Continue to Next Step".

Enter the appropriate values associated with each attribute (see below image) and click Continue to Next Step

Modify the Name to appropriately match the Configuration Name of the SSO node from the Keeper Admin Console. Click Continue to Next Step

You may choose to add PingOne user groups to your application. Click Add next to the group or groups you would like to add and click Continue to Next Step.

PingOne users will have access to Keeper Password Manager by default. Assigning groups to Keeper Password Manager restricts access to only those groups.

Click Download next to "SAML Metadata" and save the .xml file to a safe location.

Click Finish to complete the application setup wizard.

On the Edit Configuration screen of the Keeper SSO Connect Cloud provisioning in the Keeper Admin Console, select PingOne as the IDP Type.

Upload the SAML Metadata file downloaded in the previous step into the Keeper SSO Connect interface by browsing to or dragging and dropping the file into the SAML Metadata section.

Upload PingOne Metadata to Keeper

The PingOne Keeper SSO Connect Cloud™ entry will now show as Active.

View Active Keeper SSO Connect Entry

Your PingOne Keeper SSO Connect Cloud™ setup is complete!

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO-enabled node, they need to log into the Keeper vault initially by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering their master password.

Initially select 'Enterprise SSO Login'

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

They won't have to enter the Enterprise Domain. If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and ensure that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found here.

Rippling

How to configure Keeper SSO Connect Cloud with Rippling for seamless and secure SAML 2.0 authentication and SCIM provisioning.

Please complete the steps in the Admin Console Configuration section first.

Rippling Setup

  1. Login to the Rippling admin account.

2. After logging in, on the left side hover over Home and click App Shop in the bottom left.

3. In the App Shop, search for Keeper in the upper left corner and select it from the search result.

4. After clicking on the Keeper app, click Connect Account to get started with SSO.

5. Rippling has its own SSO setup walkthrough; continue the walkthrough to set up SSO.

Save SAML Metadata

6. Once you have reached this page, the SSO setup is complete; however, there is also an option for SCIM provisioning. If you would like SCIM provisioning, select Continue with API and follow the SCIM provisioning walkthrough. Otherwise, click "Skip for now, visit app".

You can assign users to the application and designate who has access to Keeper in your Rippling environment here.

For more detailed configuration of SCIM visit the User and Team Provisioning section in the Enterprise Guide

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO-enabled node, they need to log into the Keeper vault initially by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering their master password.

Initially select 'Enterprise SSO Login'

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

They won't have to enter the Enterprise Domain. If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and ensure that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found here.

Multi-Tenant Mode

Setting up multiple tenants in a single Automator instance

Overview

Keeper Automator supports a multi-tenant configuration, so that a single deployment can perform automations for multiple Keeper Enterprise environments.

  • For MSP environments, a single Keeper Automator instance can be used to run multiple Managed Companies.

  • For Enterprise customers, a single instance can process approvals for any number of identity providers.

Once the server is running, you can use it for multiple SSO nodes, even in different enterprises.

MSP with Multiple Managed Companies

The steps for activating one Automator instance for multiple Managed Companies are below:

(1) Login to Commander as the MSP Admin

My Vault> login [email protected]

(2) Switch context to the Managed Company

My Vault> msp-info

MSP Plans and Licenses
-----------------------
  #  Plan Id           Available Licenses    Total Licenses    Stash
---  --------------  --------------------  ----------------  -------
  1  business                          83               100        0
  2  businessPlus                      50               100        0
  3  enterprise                        80               100        0
  4  enterprisePlus                    85               100        0

  #      ID  Name                     Plan              Allocated    Active
---  ------  -----------------------  --------------  -----------  --------
  1   81386  Demo Managed Co. LLC     enterprisePlus            5         0
  2   81344  Elite Auto Works         business                  5         4
  3  114391  John's Garage            enterprisePlus            5         0
  4  114392  John's Garages           enterprisePlus            5         0
  5   81345  Perfect Teeth Dental     businessPlus             50         4
  6  114281  Test                     business                 12         0
  7   81346  Troy Financial Services  enterprise               20         0

Find the MC you want to set up, select the ID and then type:

switch-to-mc <ID>

(3) Create an Automator instance

Use the common Automator URL in the "edit" step

For example:

My Vault> automator create --name="Tenant1" --node="SSO Node"
My Vault> automator edit --url=https://my.company.com:8089 --skill=team_for_user --skill=device <Automator ID>
My Vault> automator setup <Automator ID>
My Vault> automator init <Automator ID>
My Vault> automator enable <Automator ID>

(4) Switch back to MSP

Switch back to the MSP Admin context

My Vault> switch-to-msp

For each Managed Company, repeat the above 4 steps.

Multi-Tenant Enterprise

The steps for activating one Automator instance for multiple Nodes in the same Enterprise tenant are below:

(1) Login to Commander as Admin

My Vault> login [email protected]

(2) Create the Automator Instance

For each Node, use the same "edit" URL. For example:

My Vault> automator create --name="Tenant A" --node="<node_name>"
My Vault> automator edit --url=https://my.company.com:8089 --skill=team --skill=team_for_user --skill=device <Automator ID>
My Vault> automator setup <Automator ID>
My Vault> automator init <Automator ID>
My Vault> automator enable <Automator ID>

Then, simply set up another instance with the same URL endpoint:

My Vault> automator create --name="Tenant B" --node="Azure"
My Vault> automator edit --url=https://my.company.com:8089 --skill=team --skill=team_for_user --skill=device <Automator ID>
My Vault> automator setup <Automator ID>
My Vault> automator init <Automator ID>
My Vault> automator enable <Automator ID>

Note that they have different names and IDs and are assigned to different nodes but they use the same URL.

Repeat step (2) for every node to set up multiple tenants on the same Automator instance.

Admin Console Configuration

Configuration of the Admin Console with Keeper SSO Connect Cloud™

Configuration of Keeper SSO Connect Cloud™ is very simple and should only take a few minutes if you've previously configured other service providers with your IdP. Please follow the general steps in this document.

Step 1. Visit the Keeper Admin Console for your region (US, EU, AU, CA, JP or US GovCloud) and log in as the Keeper Administrator.

Cloud SSO integration can only be applied to a node beneath the root node. Make sure to create a node for provisioning users and policies per the instructions below.

Step 2. After logging in, click on the Admin menu and select Add Node. This new node is where your SSO users will be provisioned.

Step 3. On the new node, select the Provisioning tab and click Add Method

Step 4. Select Single Sign-On with SSO Connect Cloud™ and click Next

Step 5. Enter the Configuration Name and Enterprise Domain. The Configuration Name will not be seen by your end users and allows you to manage multiple configurations. The Enterprise Domain will be used for logging in, therefore we recommend selecting a name that is unique and easy to remember.

  • Configuration Name: Internal use only, your users will not see this.

  • Enterprise Domain: Users will type in this name when authenticating in certain flows. It can be your domain name or any unique string.

  • Enable Just-In-Time Provisioning: To allow users the ability to self-onboard to your Keeper enterprise tenant, enable the Just-In-Time provisioning feature. This is enabled by default. Just-In-Time Provisioning also allows new users with your domain name to automatically route to the SSO provider if the domain has been reserved. If you are planning to use the Keeper Bridge for provisioning users instead of Just-In-Time SSO provisioning, please leave this option OFF.

Step 6. Click Save to proceed to the next step. Keeper will automatically open the "Edit Configuration" screen next.

Step 7. From the Edit Configuration screen, select your IdP (or "Generic"), upload the metadata file from your identity provider into Keeper and set up the 3 required attribute mappings. Note that Keeper works with any SAML 2.0 compatible identity provider.

There are a couple of additional options available here:

  • Enable IsPassive: We recommend leaving this off unless required by your IdP.

  • ForceAuthn: For customers who want to force a new SSO login session on every Keeper Vault login, turn this on.

  • Identity Provider: To assist you with the configuration of common identity providers, there is a drop-down "IDP Type" which allows you to select pre-defined setups. If your identity provider is not listed, please select "GENERIC".

  • SAML Metadata: Drag and drop the IdP Metadata file provided by your IdP into the Keeper configuration screen. This critical file provides Keeper with the URL endpoint and digital certificate to validate signed assertions.

  • Identity Provider Attribute Mappings: Keeper expects First Name, Last Name and Email to be called "First", "Last" and "Email" by default, but this can be changed. Make sure your identity provider is mapping to the field names on this screen exactly as written (case sensitive!).

  • Single Sign On Endpoint Preferences: This is advanced configuration and defaults to "Prefer HTTP post".

Step 8. At some point during your configuration with the IdP, you'll need to enter a few parameters from Keeper such as "Entity ID" and "ACS URL". This information is available on the "View Configuration" screen. You can get here by going back then clicking on "View".

Make note of the URLs that are provided on this screen that you may need to set within your identity provider.

  • Entity ID: This can be referred to as "SP Entity ID" or "Issuer". It's a unique identifier that must be known by both sides. Often, the Entity ID is the same as the ACS URL endpoint.

  • Assertion Consumer Service Endpoint ("ACS URL"): This is the URL endpoint at Keeper to which your identity provider will send users after they authenticate. The data sent to Keeper will include a signed assertion that indicates if the user has successfully signed into the identity provider. The assertion is signed with the identity provider's private key. Keeper validates the signature with the identity provider's public key, which is provided in the IdP metadata file.

  • Single Logout Service Endpoint ("SLO"): This is the URL endpoint at Keeper to which your identity provider will send logout requests. Single Logout is optional and this is something you configure at your identity provider.

This information is also available in the Keeper XML metadata file which can be optionally downloaded by clicking "Export Metadata". Upload the metadata file to your identity provider if required.
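If you have exported the Keeper metadata file and want to confirm these values from the command line, a quick sketch (assuming the exported file is saved as sp_metadata.xml and xmllint is installed):

xmllint --format sp_metadata.xml | grep -E 'entityID|AssertionConsumerService|SingleLogoutService'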

Domain Reservation and Just-In-Time provisioning

If Just In Time provisioning is enabled, you can automatically route users to the identity provider when the user types in their email and clicks "Next" from the Vault login screen. This applies to all devices including Web Vault, Desktop App, Browser Extensions, iOS and Android apps.

Keeper maintains a list of "personal" domains, for example gmail.com and yahoo.com which cannot be reserved and allow the general public to create Keeper accounts with those domains, with a verified email.

If you would like to allow end-users to create personal or Enterprise accounts with your reserved domain outside of your enterprise tenant, please contact the Keeper support team and we can unlock this domain for you.

After configuring Keeper SSO Connect Cloud on the Admin Console, the next step is to set up the application in the identity provider. See the SSO Identity Providers section.

HENNGE

How to configure Keeper SSO Connect Cloud with HENNGE for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

HENNGE SSO Configuration

(1) Log into the HENNGE Administrator console.

Click the Administration tile on the menu.

(2) Next, Select the Connected Services menu item and click Add Service.

On the "Add New Service" page, Click the Add Service Manually at "Add Service for SSO" menu.

(3) Set the Service name to “Keeper Password Manager and Digital Vault” (or whatever you prefer), add an Email attribute claim with the value "UserPrincipalName (UPN)", then click the Submit button.

In your environment, if your user.userprincipalname (UPN) is not the same as the user's actual email address, you can edit the Email claim and change the value to user.mail.

Now you can see all of the values required for the Keeper-side configuration at Step (5). Click the X at the top right and leave this page for now.

In the Connected Services menu area, click the Service Name you created and then click the "Upload Using Metadata" button.

The Keeper metadata is available on the admin console. Go to the provisioning instance -> View -> Export Metadata

(4) After the metadata has been uploaded, head back to the HENNGE Connected Service configuration page and enter the Login URL in the following format: https://keepersecurity.com/api/rest/sso/ext_login/<YourSSOIdHere>.

Your SSO ID can be found at the end of your SP Entity ID. Ex: https://keepersecurity.com/api/rest/sso/saml/3534758084794

Complete the configuration by scrolling to the bottom of the page and selecting the Save Changes button.

(5) Last step is to export the metadata from this connector to import it into the Keeper SSO Connect Cloud™.

Set the IDP Type to GENERIC and upload this file into the Keeper SSO Connect Cloud™ provisioning interface by dragging and dropping the file into the edit screen:

Assign Users

From HENNGE, you can now add users in the Access Policy section of the User List page, or groups in the Allowed Services section of the Access Policy Groups page.

Your Keeper SSO Connect setup is now complete!

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO-enabled node, they initially need to log into the Keeper vault by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering their master password.

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

They won't have to enter the Enterprise Domain. If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found in the Domain Reservation and Just-In-Time Provisioning section above.

JumpCloud

How to configure Keeper SSO Connect Cloud with JumpCloud for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

JumpCloud

(1) Log into the JumpCloud Administrator console.

Select the SSO tab on the side menu.

(2) Next, select the + icon in the upper left corner.

On the "Get Started with SSO Application page, search for Keeper in the search bar. Select Configure on the Keeper Application.

(3) Next, on the Keeper application connector page, in the General Info section, set the Display Label to "Keeper Security Password Manager".

On the Single Sign-On Configuration area, click the "Upload Metadata" button.

The Keeper metadata is available on the admin console. Go to the provisioning instance -> View -> Export Metadata

(4) After the metadata has been uploaded, head back to the JumpCloud SSO configuration page and enter the Login URL in the following format: https://keepersecurity.com/api/rest/sso/ext_login/<YourSSOIdHere>.

Your SSO ID can be found at the end of your SP Entity ID. Ex: https://keepersecurity.com/api/rest/sso/saml/459561502469

Complete the configuration by scrolling to the bottom of the page and selecting the Activate button.

(5) Last step is to export the metadata from this connector to import it into the Keeper SSO Connect Cloud™.

Set the IDP Type to GENERIC and upload this file into the Keeper SSO Connect Cloud™ provisioning interface by dragging and dropping the file into the edit screen:

Your Keeper SSO Connect setup is now complete!

User Provisioning SSO+SCIM

JumpCloud® supports Automated User and Team Provisioning with SCIM (System for Cross-Domain Identity Management), which will update and deactivate Keeper user accounts as changes are made in JumpCloud®. Step-by-step instructions can be found here: https://docs.keeper.io/enterprise-guide/user-and-team-provisioning/jumpcloud-provisioning-with-scim

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO-enabled node, they initially need to log into the Keeper vault by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering their master password.

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

They won't have to enter the Enterprise Domain. If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found in the Domain Reservation and Just-In-Time Provisioning section above.

Okta

How to configure Keeper SSO Connect Cloud with Okta for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

Okta SSO Configuration

Login to the Admin section of the Okta portal.

Select the Applications menu item and click Browse App Catalog.

Search for “Keeper Password Manager”, and then select the Add button for the Keeper Password Manager and Digital Vault application.

On the General Settings page that comes up next, you need the "Entity ID" that comes from the Keeper Admin Console.

Example Server Base URL: https://keepersecurity.com/api/rest/sso/saml/XXXXXXXX

The value for XXXXXXXX represents the specific SSO Connect instance associated with your enterprise and can be found on the Admin Console SSO configuration as part of the Service Provider information, as seen below:

Paste the Entity ID into the Server Base URL field in the Okta screen.

Select the Sign On tab.

Scroll down to the SAML Signing Certificates configuration section, and select Actions > View IdP metadata.

Save the resulting XML file to your computer. In Chrome, Edge and Firefox, select File > Save Page As... and save the metadata.xml file.

In the Keeper Admin Console, Edit the SSO configuration then Select OKTA as the IDP Type and upload the metadata.xml file into the Keeper SSO Connect interface by browsing to or dragging and dropping the file into the Setup screen:

(Optional) Enable Single Logout

If you would like to enable the Single Logout feature in Okta, go to the Sign On tab and click Edit. Click the Enable Single Logout checkbox and then upload the SP Cert which comes from the Keeper Admin Console.

To first download the SP Cert, view the SSO configuration on Keeper and click the Export SP Cert button.

Upload the SP cert file and be sure to click Save to save the Sign On settings in Okta.

If you have changed the Single Logout Setting, you'll have to download the latest Okta metadata file once again, and upload the new metadata.xml file into Keeper on the SSO edit screen.

From the Actions menu, select View IdP metadata.

Save the resulting XML file to your computer. In Chrome, Edge and Firefox, select File > Save Page As... and save the metadata.xml file.

In the Keeper Admin Console, Edit the SSO configuration then upload the new metadata.xml file into the Keeper SSO Connect interface by browsing to or dragging and dropping the file into the Setup screen.

Okta SCIM Provisioning

To enable Okta SCIM user and group provisioning, please follow the instructions found within the Keeper Enterprise Guide: https://docs.keeper.io/enterprise-guide/user-and-team-provisioning/okta-integration-with-saml-and-scim

Assign Users

From Okta, you can now add users or groups on the Assignments page. If you have activated SCIM provisioning per the instructions then the user will be instantly provisioned to Keeper.

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO-enabled node, they initially need to log into the Keeper vault by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering their master password.

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found in the Domain Reservation and Just-In-Time Provisioning section above.

User Provisioning

Instructions on how to provision users with SSO Connect Cloud

Onboarding Users

There are several options for onboarding users inside an SSO-provisioned node:

Option 1: Using SCIM Automated Provisioning

  • If your identity provider supports Automated Provisioning (using the SCIM protocol), users will be automatically provisioned with a Keeper Vault.

  • Follow our guide for instructions on setting up SCIM with your identity provider, if you haven't done this.

  • Users who are provisioned through SCIM can simply type in their Email Address on the Vault Login screen and they will be automatically directed to the IdP login screen to complete the sign-in. Please make sure that your email domain is reserved with Keeper so that we route your users to the IdP; see the Domain Reservation and Just-In-Time Provisioning section to learn about domain reservation.

  • After authentication to the IdP, the user will instantly be logged into their Vault on their first device. Subsequent devices will require Device Approval.

Option 2: Using Just-In-Time (JIT) Provisioning

If Just-In-Time (JIT) provisioning is activated on your SSO configuration, there are a few ways that users can access their vault:

(1) Direct your users to the identity provider dashboard to click on the Keeper icon (IdP-initiated Login).

(2) Provide users with a hyperlink to the Keeper application within the identity provider (see your IdP Application configuration screen for the correct URL).

(3) Send users to the Keeper Vault to simply enter their email address and click "Next".

Ensure that your domain has been reserved for automatic routing.

(4) If using the email address is not desired, users can also click on "Enterprise SSO Login" using the "Enterprise Domain" that you configured in the Admin Console for the SSO connection.

(5) Hyperlink users directly to the Enterprise Domain login screen on Keeper using the below format (a filled-in example follows the list):

https://<domain>/vault/#provider_name/<name>

  • Replace <domain> with the endpoint of the data center where your Keeper tenant is hosted. This can be one of the following:

    • keepersecurity.com

    • keepersecurity.eu

    • keepersecurity.com.au

    • govcloud.keepersecurity.us

    • keepersecurity.jp

    • keepersecurity.ca

  • Replace <name> with the name of the Enterprise Domain that has been assigned in the Admin Console.
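
For example, assuming the provider_name path segment in the format above is literal, a hypothetical tenant hosted in the EU data center with an Enterprise Domain named "acme" would use:

https://keepersecurity.eu/vault/#provider_name/acme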

Option 3: Manually Inviting Users

If you prefer to manually invite users from the Admin Console instead of using Just-In-Time provisioning, follow these steps:

  • Login to the Keeper Admin Console

  • Open the node which is configured with your identity provider

  • Click on "Add Users" to invite the user manually

  • Users can then simply type in their email address on the Vault login screen to sign in

Note: Additional customization of the Email Invitation including graphics and content can be made by visiting the "Configuration" screen of the Admin Console.

Please make sure to test the configuration and onboarding process with non-admin test user accounts prior to deploying Keeper to real users in the organization.

Please don't use SSO with your Keeper Administrator account for testing. We recommend that the Keeper Administrator exists at the root node of the Admin Console and uses Master Password login. This ensures you can always have access to manage your users if the identity provider is unavailable (e.g. if Microsoft goes down).

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

Certificate Renewal

Keeper SSO Connect certificate renewal instructions

Keeper SSO Connect Certificate Renewal Process

It is critical to ensure that your IdP SAML Signing Certificates are renewed and activated. Typically, this occurs once per year.

If you receive the below error when logging into the Keeper vault, this usually indicates that the SAML Signing Certificate has expired.

"Sorry! There was an unexpected error logging you into Keeper via your company account. We are unable to parse the SAML Response from the IDP"

Resolution

To resolve this issue, please follow the basic steps below:

  1. Update the SAML signing certificate from your identity provider related to the Keeper application

  2. Download the new SAML signing certificate and/or IdP metadata file

  3. Update the IdP metadata in the Keeper Admin Console

Entra ID / Azure AD Instructions

Since Microsoft Azure is the most widely used identity provider, the step-by-step update guide is documented below. If Azure is not your provider, the process is very similar.

(1) Login to the Azure Portal (https://portal.azure.com) and go to Enterprise Applications > Keeper > Set up single sign-on

(2) Under the SAML Certificates section, note that the certificate has expired. Click Edit.

(3) Click on New Certificate to generate a new cert.

(4) Click the overflow menu, then click "Make certificate active", then Save and apply the changes.

(5) From the SAML Certificates section, download the new Federation Metadata XML file. Save this to your computer.

(6) Update the SAML Metadata in the Keeper Admin Console

From the Keeper Admin Console, login to the Keeper tenant and visit the SSO configuration.

Follow the links below to access the Keeper Admin Console: (US) https://keepersecurity.com/console (EU) https://keepersecurity.eu/console (AU) https://keepersecurity.com.au/console (CA) https://keepersecurity.ca/console (JP) https://keepersecurity.jp/console (US Gov) https://govcloud.keepersecurity.us/console

(Or open KeeperSecurity.com > Login > Admin Console)

  • Select the SSO node then select the "Provisioning" tab.

  • Click on "Single Sign-On with SSO Connect Cloud

  • Click "Edit Configuration"

  • Clear out the existing SAML Metadata

  • Upload the new XML metadata file from your desktop

At this point, the SAML certificate should be successfully updated.

(7) Confirm that SSO is functioning properly

Now that the metadata XML file with the latest certificate is uploaded to Keeper, your users should be able to login with SSO without error.

(8) Delete the metadata XML file from your local computer or store this in your Vault

(9) Make yourself a calendar reminder to update the SAML certificate next year prior to the expiration date.

Unable to Access the Keeper Admin Console?

If you are unable to login to the Keeper Admin Console due to the SSO certificate issue, please select one of the following options to regain access:

Option 1: Use a service account that logs into the Admin Console with a Master Password

Option 2: Contact a secondary admin to login and update the cert for you

If neither option is available, contact Keeper Business Support.


CLI Approvals

Commander Approvals

Commander Method for Automated Approvals

Keeper Commander, our CLI and SDK platform, is capable of performing Admin Device Approvals automatically without having to log in to the Admin Console. Admin approvals can be configured on any computer that is able to run Keeper Commander (Mac, PC or Linux).

This method does not require inbound connections from the Keeper cloud, so it could be preferred for environments where ingress ports cannot be opened. This method uses a polling mechanism (outbound connections only).

Install Keeper Commander

Please see the Installation Instructions here: https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup You can install the binary versions for Mac/PC/Linux or use pip3.
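
For example, a pip-based installation (assuming Python 3 and pip3 are already available on the machine) looks like this:

# Install or upgrade Keeper Commander from PyPI
pip3 install --upgrade keepercommander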

Use CLI for Device Approvals

Enter the Commander CLI using the "keeper shell" command. Or if you installed the Commander binary, just run it from your computer.

$ keeper shell
  _  __
 | |/ /___ ___ _ __  ___ _ _
 | ' </ -_) -_) '_ \/ -_) '_|
 |_|\_\___\___| .__/\___|_|
              |_|

 password manager & digital vault   

Use the "login" command to login as the Keeper Admin with the permission to approve devices. Commander supports SSO, Master Password and 2FA. For the purpose of automation, we recommend creating a dedicated Keeper Admin service account that is specifically used for device approvals. This ensures that any changes made to the user account (such as policy enforcements) don't break the Commander process.

My Vault> login [email protected]
Password: *******

Type "device-approve" to list all devices:

My Vault> device-approve
Email               Device ID           Device Name       Client Version
------------------  ------------------  ----------------  ----------------
[email protected]  f68de375aacdff3846  Web Vault Chrome  w15.0.4
[email protected]  41sffcb44187222bcc  Web Vault Chrome  w15.0.4

To manually approve a specific device, use this command:

My Vault> device-approve --approve <device ID>

To approve all devices coming from IP addresses from which the user has previously logged in successfully, use this command:

My Vault> device-approve --approve --trusted-ip

To approve all devices regardless of IP address, use this command:

My Vault> device-approve --approve

To deny a specific device request, use the "deny" command:

My Vault> device-approve --deny <device ID>

To deny all approvals, remove the Device ID parameter:

My Vault> device-approve --deny

To reload the latest device approvals without having to exit the shell, use the "reload" command:

My Vault> device-approve --reload

Automatically Approving Devices every X seconds

Commander supports an automation mode that will run approvals every X number of seconds. To set this up, modify the config.json file that is auto-created. This file is located in the OS User's folder under the .keeper folder. For example: C:\Users\Administrator\.keeper\config.json on Windows or /home/user/.keeper/config.json on Mac/Linux.

Leave the existing data in the file and add the following lines:

"commands":["enterprise-down","device-approve --approve"],
"timedelay":30

In JSON, every entry except the last one must be followed by a comma.

Now when you open Commander (or run "keeper shell"), Commander will run the commands every time period specified. Example:

$ keeper shell
Executing [enterprise-down]...
Password: 
Logging in...
Syncing...

Executing [enterprise-down]...

Email               Device ID           Device Name       Client Version
------------------  ------------------  ----------------  ----------------
[email protected]  f68de375aacdff3846  Web Vault Chrome  w15.0.4

Executing [device-approve --approve]...
2020/09/20 21:59:47 Waiting for 30 seconds
Executing [enterprise-down]...
There are no pending devices to approve
.
.
.

Automatically Approving Teams and Users

Similar to the example above, Commander can automatically approve Team and User assignments that are created from SCIM providers such as Azure, Okta and JumpCloud.

To set this up, simply add one more command team-approve to the JSON config file. For example:

{
    "user": "[email protected]",
    "commands": [
        "enterprise-down",
        "device-approve --approve",
        "team-approve"
    ],
    "timedelay": 60
}

Persistent Sessions

Keeper Commander supports "persistent login" sessions which can run without having to login with a Master Password or hard-code the Master Password into the configuration file.

Commands to enable persistent login on a device for 30 days (max):

My Vault> this-device register
My Vault> this-device persistent-login on
My Vault> this-device ip-auto-approve on
My Vault> this-device timeout 30d
My Vault> quit

You can use seconds as the value (e.g. 60 for 60 seconds) or numbers and letters (e.g. 1m for one minute, 5h for 5 hours, and 7d for 7 days).

Also note that typing "logout" will invalidate the session. Just "quit" the Commander session to exit.

Once persistent login is set up on a device, the config.json in the local folder will look something like this:

{
    "private_key": "8n0OqFi9o80xGh06bPzxTV1yLeKa5BdWc7f7CffZRQ",
    "device_token": "R2O5wkajo5UjVmbTmvWnwzf7DK1g_Yf-zZ3dWIbKPOng",
    "clone_code": "retObD9F0-WDABaUUGhP0Q",
    "user": "[email protected]",
    "server": "keepersecurity.com"
}

Additional information about persistent login sessions and various options is available at this link.

There are many ways to customize, automate and process automated commands with Keeper Commander. To explore the full capabilities see the Commander documentation.


Advanced Settings

Configuration settings and features on Automator

Overview

The settings in this document control the features and security of the Automator service.


Setting: automator_debug

Env Variable: AUTOMATOR_DEBUG

Description: A convenient way to turn debug logging in Automator on or off.


Setting: automator_config_key

Env Variable: AUTOMATOR_CONFIG_KEY

Default: Empty

Description: Base64-url-encoded 256-bit AES key (available since v3.1.0). This is normally only used as an environment variable. This setting is required to load the encrypted configuration from the Keeper cloud if there is no shared /usr/mybin/config file storage between container instances.


Setting: automator_host

Env Variable: AUTOMATOR_HOST

Default: localhost

Description: The hostname or IP address where the Automator service is listening locally. If SSL is enabled (ssl_mode parameter), the automator_host value needs to match the SSL certificate subject name. The disable_sni_check setting can be set to true if the subject name does not match.

If the service is running on a machine with multiple network IPs, this setting will bind the Automator service to the specified IP.

If you encounter binding errors in the service startup, it is recommended to use the local network IP address in the host setting instead of localhost.


Setting: automator_port

Env Variable: AUTOMATOR_PORT

Default: 8089

Description: The port where the Automator listens. If running in Docker, use the default 8089.


Setting: disable_sni_check

Env Variable: DISABLE_SNI_CHECK

Default: false

Description: Disable the SNI check against the certificate subject name, if SSL is being used.


Setting: email_domains

Env Variable: EMAIL_DOMAINS

Default: null

Description: A comma-separated list of user email domains for which Automator will approve devices or teams. Example: "example.com, test.com, mydomain.com". This requires the filter_by_email_domains setting to be enabled as well.


Setting: filter_by_email_domains

Env Variable: FILTER_BY_EMAIL_DOMAINS

Description: If true, Keeper will consult the email_domains list. If false, the email_domains list will be ignored.


Setting: enabled

Env Variable: N/A

Default: false

Description: This determines if Automator is enabled or disabled.


Setting: enable_rate_limits

Env Variable: ENABLE_RATE_LIMITS

Default: false

Description: If true, Automator will rate limit incoming calls per the following schedule:

approve_device: 100 calls/minute with bursts to 200

approve_teams_for_user: 100 calls/minute with bursts to 200

full_reset: 4 per minute, with bursts to 6

health: 4 per minute

initialize: 4 per minute, with bursts to 6

setup: 4 per minute, with bursts to 6

status: 5 per minute


Setting: ip_allow and ip_deny

Env Variable: IP_ALLOW and IP_DENY

Default: ""

Description: This restriction determines which users are eligible for automatic approval based on their IP address. Users accepted by the IP restriction filter still need to be approved in the usual way by Automator. Users denied by the IP restriction filter will not be automatically approved.

If "ip_allow" is empty, all IP addresses are allowed except those listed in the "ip_deny" list. If used, devices at IP addresses outside the allowed range are not approved by Automator. The values are a comma-separated list of single IP addresses or IP ranges. The "ip_allow" list is checked first, then the "ip_deny" list is checked.

Example 1 (no IP restrictions):

ip_allow=
ip_deny=

Example 2 (restrict approvals to specific ranges, with one address denied):

ip_allow=10.10.1.1-10.10.1.255, 172.58.31.3, 175.200.1.10-175.200.1.20
ip_deny=10.10.1.25


Setting: name

Env Variable: N/A

Default: Automator-1

Description: The name of the Automator. It should be unique inside an Enterprise. An automator can be referenced by its name or by its ID.


Setting: persist_state

Env Variable: N/A

Default: true

Description: If true, the Automator state will be preserved across shutdowns. Leave this on.


Setting: skill

Env Variable: N/A

Default: device_approval

Description: “device_approval” enables device approvals and is the default; “team_for_user_approval” enables team approvals. An Automator can have multiple skills.


Setting: ssl_certificate

Env Variable: SSL_CERTIFICATE

Default: null

Description: A Base64-encoded string containing the contents of the PFX file used for the SSL certificate. For example, on UNIX base64 -i my-certificate.pfx will produce the required value.

Using this environment variable will override the ssl_certificate_filename setting.
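
For example, a sketch of populating the variable directly from the .pfx file in a UNIX-like shell (the tr call strips any line wrapping from the base64 output):

# Base64-encode the .pfx file and expose it to the Automator service
export SSL_CERTIFICATE=$(base64 -i my-certificate.pfx | tr -d '\n')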


Setting: ssl_certificate_file_password

Env Variable: SSL_CERTIFICATE_PASSWORD

Default: ""

Description: The password on the SSL file. If used, the key password should be empty, or should be the same. The library we use does not allow different passwords.


Setting: ssl_certificate_key_password

Env Variable: SSL_CERTIFICATE_KEY_PASSWORD

Default: ""

Description: The password on the private key inside the SSL file. This should be empty or the same as the file password.


Setting: ssl_mode

Env Variable: SSL_MODE

Default: certificate

Description: The method of communication on the Automator service. This can be: certificate, self_signed, or none. If none, the Automator server will use HTTP instead of HTTPS. This may be acceptable when Automator is hosted under a load balancer that decrypts SSL traffic.


Setting: url

Env Variable: N/A

Default: ""

Description: The URL where the Automator can be contacted.
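
As an illustration, the settings above can also be supplied as environment variables when launching the service. The values below are a sketch only; the domains, IP range and password are placeholders to replace with your own:

# Example Automator environment configuration (illustrative values only)
export AUTOMATOR_PORT=8089                       # port the service listens on
export SSL_MODE=certificate                      # use a CA-signed SSL certificate
export SSL_CERTIFICATE_PASSWORD='MyPfxPassword'  # password protecting the .pfx file
export FILTER_BY_EMAIL_DOMAINS=true              # only approve users in the listed domains
export EMAIL_DOMAINS="example.com, test.com"     # comma-separated list of approved domains
export IP_ALLOW="10.10.1.1-10.10.1.255"          # restrict automatic approvals to this range
export AUTOMATOR_DEBUG=false                     # keep debug logging off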



Centrify

How to configure Keeper SSO Connect Cloud with Centrify for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

Centrify

Login to the Centrify Admin portal via the cloud login.

Switch to the Admin Portal from the pull down menu.

Close the Quick Start Wizard if it pops up. Select Apps from the menu then Add Web Apps.

On the Add Web Apps window, select the Custom tab and then scroll down and choose Add for SAML.

Select Yes to “Do you want to add this application?”.

Close the Add Web Apps Window.

The next step is to upload Keeper’s SSO Metadata to Centrify. On the Keeper Admin Console, export the SAML Metadata file

Go to View -> Export Metadata

In the SAML Application Settings section in Centrify, select Upload SP Metadata.

Select Upload SP Metadata from a file and browse for the KeeperSSOMetadata.xml file. Select Ok.

Download the Identity Provider SAML Metadata. This will be uploaded to Keeper SSO Connect.

On the Description section enter Keeper SSO Connect in the Application Name field and select Security in the Category field.

Download the Keeper logo. Select Select Logo and upload the Keeper logo (keeper60x60.png).

On the User Access section select the roles that can access the Keeper App:

Under the Account Mapping section, select "Use the following..." and input mail.

On the Advanced section, append the script to include the following lines of code:

setAttribute("Email", LoginUser.Get("mail"));
setAttribute("First", LoginUser.FirstName);
setAttribute("Last", LoginUser.LastName);
setSignatureType("Response");
  • The above script reads the display name from the User Account section. The FirstName attribute is parsed from the first string of DisplayName and the LastName attribute is parsed from the second string of DisplayName.

Select Save to finish the setup.

Upload the Identity Provider SAML Metadata file into the Keeper SSO Connect Cloud instance interface by dragging and dropping the file into the edit screen:

When the upload is complete, go back one screen. The SSO integration is ready to test.

CloudGate UNO

How to configure Keeper SSO Connect Cloud with CloudGate for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

CloudGate SSO Configuration

(1) Log into the CloudGate Administrator console.

Click the Administration tile on the menu.

(2) Next, Select the Service Provider menu item and click Add Service Provider.

On the "Add Service Provider" page, search for Keeper in the search bar. Select and click the "Keeper SSO Connect Cloud" icon.

(3) On the General Settings tab, set the Display name to “Keeper_SSO_Cloud_Connect” or whatever you prefer.

(4) Next, on the SSO Settings tab, you need the "Entity ID" and other information from the Keeper Admin Console.

Copy and paste the Entity ID and the other information into the SSO Settings page in the CloudGate screen.

Your SSO ID can be found at the end of your SP Entity ID. Ex: https://keepersecurity.com/api/rest/sso/saml/3534758084794

(5) Click Add under Additional Attributes, set the Field Name to "Email" and the Value to "${MAIL_ADDRESS}", then save the configuration.

(Optional) Enable Single Logout

If you would like to enable the Single Logout feature in CloudGate, go to the SSO Settings tab, enter the Logout URL, and then upload the SP Cert from the Keeper Admin Console.

To first download the SP Cert, view the SSO configuration on Keeper and click the Export SP Cert button.

Next, Copy and Paste the SLO Endpoint information into the SSO Settings page in the CloudGate screen.

(6) The last step is to export the metadata from "IDP Information for SAML 2.0" on the SSO Settings tab and import it into the Keeper SSO Connect Cloud™.


Set the IDP Type to GENERIC and upload this file into the Keeper SSO Connect Cloud™ provisioning interface by dragging and dropping the file into the edit screen:

Assign Users

From CloudGate, you can now add users at User Settings tab on User Management page.


Please make sure there is an "Email address" value on the User Settings tab of the User Management page.


Click "Save" to complete the configuration of Keeper SSO Connect Cloud with CloudGate.

Your Keeper SSO Connect setup is now complete!

CloudGate SCIM Provisioning

To enable CloudGate SCIM user and group provisioning, please follow the instructions found within the Keeper Enterprise Guide.

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO-enabled node, they initially need to log into the Keeper vault by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering their master password.


Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

They won't have to enter the Enterprise Domain. If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found in the Domain Reservation and Just-In-Time Provisioning section above.


Google Workspace User Provisioning with SCIM

Directly integrating SCIM into Google Workspace for User provisioning

This document provides instructions for provisioning users from Google Workspace to Keeper using a direct SCIM integration. This method does not support pushing Groups and Group assignments. If you require group push and group assignments, see the next guide.

Overview

User Provisioning provides several features for lifecycle management:

  • New users added to Google Workspace will get an email invitation to set up their Keeper vault

  • Users can be assigned to Keeper on a user or team basis

  • When a user is de-provisioned, their Keeper account will be automatically locked

From the Keeper Admin Console, go to the Provisioning tab for the Google Workspace node and click "Add Method".

Select SCIM and click Next.

Click on "Create Provisioning Token"

The URL and Token displayed on the next screen will be provided to Google in the Google Workspace Admin Console. Save the URL and Token in a file somewhere temporarily and then click Save.

Make sure to save these two parameters (URL and Token) and then click Save or else provisioning will fail.
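
If you want to sanity-check the token before handing it to Google, a standard SCIM v2 request such as the sketch below (with the saved URL and token substituted for the placeholders) should return the currently provisioned users; treat the exact path as an assumption and rely on Google's own provisioning flow if in doubt:

# List provisioned users via SCIM (replace <SCIM-URL> and <TOKEN> with the saved values)
curl -H "Authorization: Bearer <TOKEN>" \
     -H "Accept: application/scim+json" \
     "<SCIM-URL>/Users"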

Back on the Google Workspace admin console, go to Home > Apps > SAML Apps and click on the "Provisioning Available" text of the Keeper app you set up.

Select Configure auto-provisioning towards the bottom of the page.

STEP 1: App authorization

Paste the Access Token previously saved when you created your SCIM Provisioning Method in the Keeper Admin Console and select CONTINUE.

STEP 2: Endpoint URL

Paste the Endpoint URL previously saved when you created your SCIM Provisioning Method in the Keeper Admin Console and select CONTINUE.

STEP 3: Default Attribute Mappings

Leave the default Attribute mappings as they are and click CONTINUE.

STEP 4: Provisioning Scope

If you will be provisioning all users assigned to the Keeper SSO Connect app, you can simply select CONTINUE.

STEP 5: Deprovisioning

At the Deprovisioning Screen, you can simply select FINISH to automate the deprovisioning of your users.

Activate Auto-provisioning

Once auto-provisioning setup is finished, you will be taken back to the details screen of the Keeper app, where Auto-Provisioning is shown as inactive. Toggle this to Active.

Once toggled, a pop-up window will appear confirming that you are ready to turn on Auto-Provisioning. Select TURN ON.

You will be taken back to the details screen of the Keeper App. You now see Auto-Provisioning is Active.

Auto-provisioning is complete. Moving forward, new users who have been configured to use Keeper in Google Workspace, and who are within the provisioning scope definitions, will receive invitations to use the Keeper Vault and will be managed by Google Workspace.

User Provisioning / SCIM without SSO

If you would like to provision users to Keeper via Google Workspace SCIM provisioning, but you do NOT want to authenticate users via SSO, please follow the below instructions:

  1. Following the same steps as above to set up SSO, on the Service Provider Details screen, replace the ACS URL and the Entity ID with values that point to a domain in your control but that have no communicable source (a "NULL" value). Ex: Entity ID= ACS URL=

  2. Once the Keeper application is set up in Google Workspace, turn on the automated provisioning method as described above in this document.

Note: Google does not currently support Group provisioning to Keeper teams.

Troubleshooting

If you receive the error "not_a_saml_app" please ensure that you have turned "Auto-provisioning" to "ON" in the SAML application.

Google Certificate Updates

Google's IdP X.509 certificates for signing SAML assertions are set to expire after 5 years. In the Google Workspace "Manage Certificates" section, make note of the expiration date and set a calendar alert to prevent an outage.

When the certificate is expiring soon, or if the certificate has expired, you can follow the instructions below.

  1. Log in to the Google Workspace Admin Console.

  2. Click on Apps then select Web and Mobile Apps.

  3. Select Keeper app

  4. Expand service provider

  5. Click “Manage Certificates”

  6. Click “ADD CERTIFICATE”

  7. Click “DOWNLOAD METADATA”

  8. Save the metadata file. This is the IdP metadata.

  9. Login to the Keeper Admin Console

  10. Navigate to Admin > SSO Node > Provisioning > Edit SSO Cloud provisioning method

  11. Upload the Google IdP metadata into Keeper

For more information on this topic, see Google's support page:

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO-enabled node, they initially need to log into the Keeper vault by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering their master password.

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found in the Domain Reservation and Just-In-Time Provisioning section above.

Imprivata

How to configure Keeper SSO Connect Cloud with Imprivata OneSign for seamless and secure SAML 2.0 authentication.


Please complete the steps in the Admin Console Configuration section first.

Step 1: Configure Imprivata

You'll need to provide some information about Keeper SSO Connect Cloud to your Identity Provider application such as:

  • Entity ID

  • IDP Initiated Login

  • Assertion Consumer Service (ACS) Endpoint

  • Single Logout Service (SLO) Endpoint

  • SP Metadata file or the Keeper SP Certificate file.

To obtain this information, locate your SSO Connect Cloud Provisioning method within the Keeper Admin Console, and select View. From there you have access to download the Keeper metadata file, service provider (SP) certificate file as well as the direct URLs and configuration information (if your identity provider application does not support uploading of the metadata file).

Refer to your identity provider application configuration guide for instructions on how to upload service provider metadata and/or manually input the required SAML response configuration fields.

Step 2: Obtain your IdP Metadata

To import your IdP Metadata into Keeper, you will need to have a properly formatted metadata file. If your SSO Identity Provider Application has the ability to export its metadata file, this would be the most expedient and preferred method to import your metadata into your Keeper SSO Connect Cloud Provisioning method.

If you do not have the ability to export / download your metadata file from your identity provider, please create a properly formatted metadata file. Refer to your SSO application's configuration guide for instructions.

Below is an example/template of what a simple identity provider metadata.xml file for Keeper SSO Connect Cloud should look like. If you use this example/template to get started, copy, paste, and modify it, adding any other fields required by your IdP, in your preferred .xml or .txt editor.

Please DO NOT remove any fields as this example contains the minimum required fields to connect your SSO application to Keeper.

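The following is a minimal sketch, written as a shell here-document that creates such a file; every value in it (the entity ID, the SSO/SLO endpoint URLs and the certificate contents) is a placeholder that must be replaced with your IdP's real values:

# Write a skeleton IdP metadata.xml (all values below are placeholders)
cat > metadata.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
                     entityID="https://idp.example.com/metadata">
  <md:IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <md:KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate>MIIC...your-IdP-signing-certificate-in-base64...</ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
    <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
                            Location="https://idp.example.com/sso/saml"/>
    <md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
                            Location="https://idp.example.com/slo/saml"/>
  </md:IDPSSODescriptor>
</md:EntityDescriptor>
EOF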

Step 3: Map User Attributes

Keeper requires that you map specific User Attributes to be sent during authentication. The default Keeper SSO Connect Cloud User Attributes are Email, First and Last. Ensure your identity provider's User Attributes line up exactly with Keeper's attributes. Refer to your Identity Provider's configuration guide for instructions.


Step 4: Upload IdP Metadata to Keeper

Once you have completed creating your identity provider metadata file, or if you have downloaded the identity provider metadata file, head back to the Keeper Admin console, locate your SSO Connect Cloud Provisioning method and select Edit.

Scroll down to the Identity Provider section, set IDP Type to GENERIC, select Browse Files and select the Metadata file you created.

Still within the Keeper Admin Console, exit the Edit View and select View on your SSO Connect Cloud Provisioning method. Within the Identity Provider section you will find the metadata values for the Entity ID, Single Sign On Service and Single Logout Service Endpoint that are now populated.

Graphic Assets

If your identity provider requires an icon or logo file for the application, please see the Keeper graphic assets zip file: https://www.keepersecurity.com/assets/files/KeeperLogos.zip

Success! Your Keeper Security SSO Cloud setup is now complete! You may now try logging into Keeper with SSO.

If you find that your application is not functional, please review your identity provider application settings and review your metadata file and user attributes for any errors.

Once complete, repeat Step 4.

If you need assistance, please email [email protected].

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO-enabled node; another admin must perform this action.

After the user is moved to the SSO-enabled node, they initially need to log into the Keeper vault by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering their master password.

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

They won't have to enter the Enterprise Domain. If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found in the Domain Reservation and Just-In-Time Provisioning section above.

Traitware

How to configure Keeper SSO Connect Cloud with Traitware for Passwordless login to Keeper.

Configure Keeper for Traitware Integration

Visit the Keeper Admin Console and log in as the Keeper Administrator: (US / Global) https://keepersecurity.com/console (EU-hosted customers) https://keepersecurity.eu/console (AU-hosted customers) https://keepersecurity.com.au/console (GovCloud customers) https://govcloud.keepersecurity.us/console

Note: Passwordless integration can only be applied to specific nodes (e.g. organizational units) within your Admin Console.

Click on the Admin tab and click Add Node.

From the Provisioning tab, click Add Method

Select Single Sign-On with SSO Connect™ Cloud and click Next

Enter your Configuration Name and Enterprise Domain, then click Save. Take note of the Enterprise Domain. This will be used later for Enterprise SSO login.

The newly-created SAML 2.0 with Cloud SSO Connect provisioning method will be visible. Select View from the menu.

Note the Entity ID and Assertion Consumer Service (ACS) Endpoint. These values will be used when configuring TraitWare.

Configure TraitWare

Log into the TraitWare Admin Console (TCC)

Generate Application Key

Select the Signing Keys from the left menu. Click Generate new Key Pair button. Enter the application name for the key pair. Select desired Lifetime in Years, Product Key Type and Product Key Size. Click Generate Key.

Create Traitware Application

  1. Select Applications from the left menu and click Add Application.

  2. Select SAML 2.0.

  3. Click Use a Template and select Keeper

  4. Insert your Keeper Entity ID and Assertion Consumer Service (ACS) Endpoint noted previously in the walkthrough and click Submit.

Configure SAML 2.0 Integration

  1. From the Traitware Admin Console Applications tab, select Keeper

  2. Select the Provider Credentials tab and click the download icon for Traitware IdP SAML Metadata (XML)

  3. Click Save Application

  4. Return to the Keeper Admin Console

  5. Edit the SAML 2.0 with Cloud SSO Connect™ provisioning method

  6. Upload the file from step 2 to the SAML Metadata field

Create and Enable Users to Login to Keeper Vault through Traitware

  1. From the Traitware Admin Console Users tab, select Create User

  2. Complete the form and click Save Changes

  3. Click on the newly created user and select the Applications tab

  4. Toggle Application Access on for Keeper

Note: A user with the same email address must also exist within the Keeper Admin Console. For more information on creating Keeper users, see User and Team Provisioning in our enterprise documentation.

Enable All Traitware Users to Login to Keeper Vault through Traitware

  1. From the Traitware Admin Console Applications tab, select Keeper

  2. Click Enable All User Access

  3. Confirm the action and click Enable Access

End User Login

Users may login either using their enterprise domain or email address.

Login Using Email Address

  1. Navigate to the Keeper Vault

  2. Enter your email address and click Next

  3. From your Traitware app on your smart device, scan the QR code on your desktop browser

  4. You will now be logged in to your Keeper vault

Login Using Enterprise Domain

  1. Navigate to the Keeper Vault

  2. Click the Enterprise SSO Login dropdown and select Enterprise Domain

  3. Enter the Enterprise Domain name you specified in the Keeper portion of this walkthrough and click Connect

  4. From your Traitware app on your smart device, scan the QR code displayed on your desktop browser

  5. You will now be logged in to your Keeper vault

Custom SSL Certificate

How to configure Keeper Automator with a custom SSL certificate

Overview

Keeper Automator encrypts the communication between the Keeper backend and the Automator service running in the customer's environment.

If a custom certificate is not used, Keeper Automator will generate a self-signed certificate by default.

If SAML is configured to encrypt the request (not just signing), a custom SSL certificate is required.

You can obtain a quick, easy, and free SSL certificate online, or if you prefer to have more control over each step of the process, you can proceed with the following instructions.

Generate and Prepare the SSL Certificate

Keeper Automator requires a valid signed SSL certificate that has been signed by a public certificate authority. The process of generating an SSL certificate varies depending on the provider, but the general flow is documented here.

Follow these steps to create the two certificate files needed for Automator to run, which must be named ssl-certificate.pfx and ssl-certificate-password.txt.

(1) Using the openssl command prompt, generate a private key
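
A typical command looks like the sketch below; the key size is your choice, and the file name automator.key is reused in the later steps of this guide:

# Generate a 2048-bit RSA private key for the Automator certificate
openssl genrsa -out automator.key 2048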

(2) Generate a CSR, making sure to use the hostname which you plan to use for Automator. In this case, we will be using automator.lurey.com. The important item here is that the Common Name matches exactly to the domain.
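
For example (a sketch; answer the interactive prompts so that the Common Name matches your Automator hostname exactly):

# Generate a certificate signing request (CSR) from the private key
openssl req -new -key automator.key -out automator.csr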

(3) Purchase an SSL certificate (or grab a free 90 day cert) and Submit the CSR to your SSL certificate provider.

Ensure that the SSL certificate created for your Automator instance is only used for this purpose. Do not use a wildcard certificate that is shared with other services.

If you don't have a provider already, any public certificate authority will work; the least expensive SSL cert for one domain is fine.

Choose a URL and create a certificate for a domain that is specific for Automator, e.g. automator.company.com.

The SSL certificate provider will deliver you a zip file that contains a signed certificate (.crt file) and intermediate CA cert. The bundle may be in either .crt or .ca-bundle file extension type. Unzip this file into the same location as your .key file that you created earlier.

(4) After the certificate has been issued, it needs to be converted using OpenSSL to .pfx format including the full certificate chain (root, intermediate and CA cert).

On Windows, make sure to launch the OpenSSL command prompt (see the Using Windows section below) and navigate to the folder that has your files.
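
The conversion generally looks like the sketch below; the input file names follow the examples used later in this guide and should be replaced with the names delivered by your certificate provider:

# Combine the private key, signed certificate and CA bundle into a password-protected .pfx
openssl pkcs12 -export -out ssl-certificate.pfx \
  -inkey automator.key \
  -in automator.yourcompany.com.crt \
  -certfile automator.yourcompany.com.ca-bundle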

Set your export password when prompted. Then create a new text file called ssl-certificate-password.txt and put the export password into that file and save it.

  • automator.key is the private key generated in step 1.

  • automator.yourcompany.com.crt is the signed certificate delivered in step 3.

  • automator.yourcompany.com.ca-bundle is the CA bundle

  • ssl-certificate.pfx is the output file used by Automator that has been encrypted with a password.

  • ssl-certificate-password.txt contains the password used to encrypt the .pfx file.

We recommend saving all five files in your Keeper vault.

Ensure that your .pfx file contains your issued cert AND the full certificate chain from your provider. If you don't provide a full certificate chain, the communication will fail and Automator will be unable to connect to your URL. To check the .pfx, use openssl: openssl pkcs12 -in ssl-certificate.pfx -info. If the .pfx is correct, you will see 3 certificates.

If you only see one certificate, or if you see four or five certificates, the .pfx is incorrect and you need to repeat the process.

(5) Save ssl-certificate.pfx and ssl-certificate-password.txt for the deployment steps later in this guide.

Please also ensure that you have backed up the files in your Keeper vault so that you can refer to these later when updating the service or re-keying the certificate.

(6) Review the annual certificate renewal process in the Annual Renewal Process section below.

Using Windows

Generate and Prepare the SSL Certificate

Keeper Automator requires a valid signed SSL certificate that has been signed by a public certificate authority. We do not support self-signed certificates. The process of generating an SSL certificate varies depending on the provider, but the general flow is documented here.

Download and install OpenSSL. For convenience, a third party (slproweb.com) provides a popular binary installer.

Install the version at the bottom labeled "Win32 OpenSSL vX.X.X Light"

During install, the default options can be selected. In the install process, you may be asked to also install a Microsoft Visual Studio extension. Go ahead and follow the instructions to install this extension before completing the OpenSSL setup.

Run the OpenSSL Command Prompt

In your Start Menu there will be an OpenSSL folder. Click on the Win32 OpenSSL Command Prompt.

Annual Renewal Process

On an annual basis, you will need to renew your SSL certificate. Most certificate providers will generate a new cert for you. After certificate renewal, replace the .pfx certificate file in your Automator instance and then restart the service. Refer to the specific automator install method documentation on the exact process for updating the file and restarting the service.

For environments using AD FS ...

If you are using Keeper Automator with AD FS as the identity provider, users will not be able to login until you update the Keeper certificate using the instructions below:

  • Login to the Keeper Admin Console

  • Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.

  • Click on "Export SP Cert".

  • In the AD FS Management Console select the Keeper Cloud SSO Relying Party Trust properties.

  • On the "Encryption" tab, replace the old certificate with this new cert.

  • On the "Signature" tab, Add/Replace the new SP certificate with this new cert.

If you are experiencing login issues after the certificate update

After certificate renewal, sometimes it is necessary to publish a new SP certificate in your identity provider following the below steps:

  • Login to the Keeper Admin Console

  • Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.

  • Click on "Export SP Cert" and save the certificate file.

  • Click on "Export Metadata" and save the metadata file, which also contains the certificate.

  • Login to your Identity Provider portal and view the SSO configuration for Keeper.

  • Upload Keeper's SP certificate file (or metadata, if required) following their instructions to update the Service Provider certificate and Save.

The reason for this is that the Automator service essentially becomes the service provider. The SSL certificate generated by the customer is used in the signing process.

Azure and AWS Deployments

If you are updating the SSL certificate in an environment that utilizes application gateways or a load balancer with a custom domain that terminates SSL, you need to also update the certificate on that device.

  • For Azure deployments using an App Gateway, the .pfx certificate must also be updated in the https listener for the gateway. Go to your Azure > Resource groups > App Gateway > Listeners and upload the new certificate.
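If you prefer the Azure CLI to the portal, a listener certificate can typically be updated with a command along these lines (the resource group, gateway name, certificate name and password are placeholders for your environment):

az network application-gateway ssl-cert update --resource-group <resource-group> --gateway-name <app-gateway-name> --name <listener-cert-name> --cert-file ssl-certificate.pfx --cert-password <pfx-password>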

Trusona

How to configure Keeper SSO Connect Cloud with Trusona for Passwordless login to Keeper.

Configure Keeper for Trusona Integration

Please complete the steps in the Admin Console Configuration section first.

Visit the Keeper Admin Console and login as the Keeper Administrator:

  • https://keepersecurity.com/console (US / Global)

  • https://keepersecurity.eu/console (EU-hosted customers)

  • https://keepersecurity.com.au/console (AU-hosted customers)

  • https://govcloud.keepersecurity.us/console (GovCloud customers)

Note: Passwordless integration can only be applied to specific nodes (e.g. organizational units) within your Admin Console.

1) Click on the Admin tab and click Add Node

2) Name the node and click Add Node

3) From the Provisioning tab, click Add Method

4) Select Single Sign-On with SSO Connect™ Cloud and click Next

5) Enter your Configuration Name and Enterprise Domain, then click Save. Take note of the Enterprise Domain. This will be used later for Enterprise SSO login.

6) The newly-created SAML 2.0 with Cloud SSO Connect provisioning method will be visible. Select View from the menu.

These items will be used when configuring Trusona later in the documentation.

7) Note the Entity ID, Assertion Consumer Service (ACS) Endpoint and Single Logout Service Endpoint

8) Click Export SP Cert

Configure Trusona

1) Log into the Trusona Dashboard at https://dashboard.trusona.com/ by scanning the QR code from your mobile device using the Trusona app for iOS or Android.

Create Keeper Integration in Trusona

2) From your Trusona account dashboard, select Keeper from the left-hand navigation.

3) Click Create Keeper Integration.

4) Name the integration and click Save.

5) Click Download XML to download the XML metadata for use in the Keeper Admin Console.

6) Select Keeper on the left-hand navigation.

7) Click Edit from the Actions dropdown menu for your integration.

8) Paste the information noted earlier in this documentation, when the integration was created in the Keeper Admin Console, into the corresponding fields:

  • Assertion Consumer Service (ACS) Endpoint

  • IDP Initiated Login Endpoint

  • Single Logout Service (SLO) Endpoint

9) Under Certificate, upload the SP Cert exported from the Keeper Admin Console and click Save.

10) Return to the Keeper Admin Console

11) Optionally enable Just-In-Time Provisioning to allow users to create accounts in the node by typing in the Enterprise Domain name when signing up.

12) Under SAML Metadata, upload the metadata.xml file downloaded from the Trusona dashboard.

13) Under Identity Provider Attribute Mappings, enter the following:

  • First Name: given_name

  • Last Name: name

  • Email: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress

User Provisioning

Instructions on how to provision users with SSO Connect Cloud can be found here.

End User Login

Users may login either using their enterprise domain or email address.

Login Using Email Address

  1. Navigate to the Keeper Vault

  2. Enter your email address and click Next

  3. From your Trusona app on your smart device, scan the QR code on your desktop browser

  4. You will now be logged in to your Keeper vault

Login Using Enterprise Domain

  1. Navigate to the Keeper Vault

  2. Click the Enterprise SSO Login dropdown and select Enterprise Domain

  3. Enter the Enterprise Domain name you specified in the Keeper portion of this walkthrough and click Connect

  4. From your Trusona app on your smart device, scan the QR code displayed on your desktop browser

  5. You will now be logged in to your Keeper vault

<?xml version="1.0" encoding="UTF-8"?>
<md:EntityDescriptor entityID="MySSOApp" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
    <md:IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol" WantAuthnRequestsSigned="true">
        <md:KeyDescriptor use="signing">
            <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                <ds:X509Data>
                    <ds:X509Certificate>MIIDpDCCAoygAwIBAgIGAW2r5jDoMA0GCSqGSIb3DQEBCwUAMIGSMQswCQYDVQQGEwJVUzETMBEG
                        A1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzENMAsGA1UECgwET2t0YTEU
                        MBIGA1UECwwLU1NPUHJvdmlkZXIxEzARBgNVBAMMCmRldi0zODk2MDgxHDAaBgkqhkiG9w0BCQEW
                        DWluZm9Ab2t0YS5jb20wHhcNMTkxMDA4MTUwMzEyWhcNMjkxMDA4MTUwNDEyWjCBkjELMAkGA1UE
                        BhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiqGcmFuY2lzY28xDTALBgNV
                        BAoMBE9rdGExFDASBgNVBAsMC1NTT1Byb3ZpZGVyMRMwEQYDVQQDDApkZXYtMzg5NjA4MRwwGgYJ
                        KoZIhvcNAQkBFg1pbmZvQG9rdGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
                        hr4wSYmTB2MNFuXmbJkUy4wH3vs8b8MyDwPF0vCcjGLl57etUBA16oNnDUyHpsY+qrS7ekI5aVtv
                        a9BbUTeGv/G+AHyDdg2kNjZ8ThDjVQcqnJ/aQAI+TB1t8bTMfROj7sEbLRM6SRsB0XkV72Ijp3/s
                        laMDlY1TIruOK7+kHz3Zs+luIlbxYHcwooLrM8abN+utEYSY5fz/CXIVqYKAb5ZK9TuDWie8YNnt
                        7SxjDSL9/CPcj+5/kNWSeG7is8sxiJjXiU+vWhVdBhzkWo83M9n1/NRNTEeuMIAjuSHi5hsKag5t
                        TswbBrjIqV6H3eT0Sgtfi5qtP6zpMI6rxWna0QIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQBr4tMc
                        hJIFN2wn21oTiGiJfaxaSZq1/KLu2j4Utla9zLwXK5SR4049LMKOv9vibEtSo3dAZFAgd2+UgD3L
                        C4+oud/ljpsM66ZQtILUlKWmRJSTJ7lN61Fjghu9Hp+atVofhcGwQ/Tbr//rWkC35V3aoQRS6ed/
                        QKmy5Dnx8lc++cL+goLjFVr85PbDEt5bznfhnIqgoPpdGO1gpABs4p9PXgCHhvkZSJWo5LobYGMV
                        TMJ6/sHPkjZ+T4ex0njzwqqZphiD9jlVcMR39HPGZF+Y4TMbH1wsTxkAKOAvXt/Kp77jdj+slgGF
                        gRfaY7OsPTLYCyZpEOoVtAyd5i6x4z0c</ds:X509Certificate>
		             </ds:X509Data>
            </ds:KeyInfo>
	      </md:KeyDescriptor>
	      <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
        <md:SingleSignOnService Location="https://sso.mycompany.com/saml2/keepersecurity"
	            Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"/>
        <md:SingleSignOnService Location="https://sso.mycompany.com/saml2/keepersecurity"
	            Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"/>
    </md:IDPSSODescriptor>
</md:EntityDescriptor>

EntityDescriptor

This is the Entity ID, sometimes referred to as "Issuer", and the unique name for your IdP application.

X509Certificate

This is the X509 certificate used by Keeper to validate the signature on the SAML response sent by your identity provider.

NameIDFormat

This defines the name identifier format used when logging into Keeper. Keeper supports the following identifier formats:

urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress

or

urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified

SingleSignOnService "POST"

This is your identity provider's "POST" binding used as a response to a request from Keeper.

SingleSignOnService "Redirect"

This is your identity provider's "Redirect" binding used as a response to a request from Keeper.

Your IdP User Attributes → Keeper User Attributes

  • <Email Address> → Email

  • <First Name> → First

  • <Last Name> → Last


Amazon AWS

How to configure Keeper SSO Connect Cloud with Amazon AWS SSO for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

AWS SSO

Log into AWS and select AWS Single Sign-On.

On the SSO Dashboard, select Configure SSO access to your cloud applications.

On the Applications menu, select Add a new application.

Next select Keeper Security and select Add.

Keeper is working with AWS to develop an Application Connector.

Fill in the Display name and Description (optional) in the application details section.

In the AWS SSO metadata section, select the download button to export the AWS SSO SAML metadata file. This file gets imported in the SSO Connect IdP Metadata section on the configuration screen.

Upload this file into the Keeper SSO Connect Cloud configuration by either browsing to, or dragging and dropping, the file into the Configuration screen's SAML Metadata area:

Next, download the Keeper metadata file so it can be uploaded as the AWS application metadata. Navigate to the view screen of the Keeper SSO Connect Cloud™ provisioning method.

Enter View Screen

Click the "Export Metadata" button to download the config.xml file.

Export Keeper Metadata

Back on the AWS SSO application configuration, select the Select File button and choose the config.xml file downloaded in the above step.

After saving changes the Configuration for Keeper Password Manager has been saved success message will be displayed.

Note: The Keeper SSL certificate cannot be larger than 2048K, or an error will be displayed in AWS SSO. If this occurs, either:

  • Generate a smaller SSL certificate, then re-export and import the metadata file, or

  • Manually set the ACS URL and Audience URL in the AWS SSO application configuration.

Next, ensure the Keeper application attributes that are to be mapped to AWS SSO are correct (these should be set by default). Select the Attribute mappings tab. The Subject maps to the AWS string value ${user:subject} with the format blank or unspecified. The Keeper attributes are set as follows:

Keeper Attribute → AWS SSO String Value (Format)

  • Email → ${user:email} (unspecified)

  • First → ${user:givenName} (unspecified)

  • Last → ${user:familyName} (unspecified)

Note: If your AWS email is mapped to the AD UPN (which may not be the actual email address of your users), it can be re-mapped to the email address associated with the user's AD profile.

To make this change navigate to the Connect Directory on the AWS SSO page.

Select the Edit attribute mappings button.

Change the AWS SSO email attribute from ${dir:windowsUpn} to ${dir:email} .

Select the Assigned users tab and then the Assign users button to select users or groups to assign to the application.

On the Assign Users window:

  • Select either Groups or Users

  • Type the name of a group or user

  • Select Search connect directory to initiate the search.

The results of the directory search will display under the search window.

Select the users/groups that are desired to have access to the application and then select the Assign users button.

Note: Keeper SSO Connect expects that the SAML response is signed. Ensure that your identity provider is configured to sign SAML responses.

Your Keeper SSO Connect setup is now complete!

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin can not move themselves to the SSO enabled node. It requires another admin to perform this action.

After the user is moved to the SSO enabled node, they need to log into the Keeper vault initially by selecting the "Enterprise SSO" pull down and inputting in the Enterprise Domain configured on the SSO integration. The user may get prompted to confirm by entering in the master password.

Initially select 'Enterprise SSO Login'

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

They won't have to enter the Enterprise Domain. If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and ensure that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found here.

Google Workspace

How to configure Keeper SSO Connect Cloud with Google Workspace for seamless and secure SAML 2.0 authentication, user provisioning and group provisioning.

Please complete the steps in the Admin Console Configuration section first.

Google Workspace supports the following integration with Keeper:

  • SSO authentication with SAML 2.0

  • Automatic Provisioning with Google Cloud APIs and SCIM (Users and Groups)

  • Automatic Provisioning with SCIM (Users only)

You can configure with SSO, SSO+Provisioning or Provisioning by itself.

Google Workspace SAML Configuration

To access Google Workspace Admin Console, login to https://admin.google.com/

Visit the Apps > Web and Mobile Apps screen.

Web and mobile apps

Then select "Add App" and select "Search for apps".

Add new Keeper SAML App

In the "Enter app name" search area, search for "Keeper" and select the "Keeper Web (SAML)" app.

Select Keeper Web (SAML) app

Setup Keeper App

Use Option 1 to Download IdP metadata and then select Continue.

Download Google Metadata

Service Provider Details

On the Service Provider Details screen, there are a few fields to fill out. You will replace the ACS URL and the Entity ID with the values that you'll be using from your SSO Connect Cloud instance.

Keeper SP Details

To obtain the ACS URL and Entity ID, locate your SSO Connect Cloud Provisioning method, within the Keeper Admin Console, and select View.

SSO Connect Cloud Info

Within the Service Provider section you will find the values for the ACS URL and Entity ID.

ACS URL and Entity ID

Copy and paste the ACS URL and Entity ID into the Service Provider Details, select "Signed Response", and select CONTINUE.

Keeper SP Details Filled

Attribute Mapping

In the Attributes screen, ensure that there are 3 mappings exactly as they appear below. Set the mappings field to "First Name", "Last Name" and "Primary Email", as displayed below, and select Finish. You have completed your Google Workspace SAML integration into Keeper.

If you have selected / created a Custom SAML App, you'll need to click on "Add New Mapping" to create the 3 fields: First, Last and Email. The spelling needs to be exact.

Google Attributes

Keeper SAML App Details

Once complete, you will be taken to the Keeper SAML App Details page, which provides a quick overview of the SAML connection and service. Click within the area where it states OFF for everyone to enable SSO for your users.

Enable SSO Connect on Everyone

To enable Keeper SSO Connect, for your users, select ON for everyone and select SAVE.

Enable SSO Connect on Groups

To enable Keeper SSO Connect for specific groups, select Groups to the left of the Service status, search for and select the group you want associated with the Keeper SSO Connect app, tick "ON", then select SAVE.

Note: Google does not currently support Group provisioning to Keeper teams.

Import Google Workspace Metadata

Back on the Keeper Admin console, locate your SSO Connect Cloud Provisioning method and select Edit.

Edit SSO Connect Cloud

Select Browse Files and select the Google Metadata file previously downloaded.

Upload Google Metadata File

You will know this was successful when your metadata file is reflected within your provisioning method. You may now exit the provisioning configuration.

Note about Single Logout (SLO) Settings with Google Workspace

As of 2022, Google defaults the configuration to not enable Single Logout. This means logging out of Keeper does not initiate a full logout of Google.

SSO Setup Complete!

Your Keeper SSO Connect setup with Google Workspace is now complete! Users can now log into Keeper using their Google account by following the steps below:

  1. Open the Keeper vault and click on "Enterprise SSO Login".

  2. Type in the Enterprise Domain that was provided to the Keeper Admin Console when setting up SSO. On the SSO Connect status screen it is called "SSO Connect Domain".

  3. Click "Connect" and login with your Google Workspace credentials.

For the end-user experience (Keeper-initiated Login Flow) see the guide below: https://docs.keeper.io/user-guides/enterprise-end-user-setup-sso#keeper-initiated-login-flow

End-user Video Tour for SSO Users is here: https://vimeo.com/329680541

User and Team Provisioning

Next, we'll show how to configure User and Team Provisioning from Google Workspace. There are two methods of integrating with Google Workspace.

Option 1 (Recommended): Provisioning Users and Groups

Since Google Workspace doesn't natively support SCIM Groups, Keeper has developed a Google Cloud Function that integrates with Google Workspace for automated user and group provisioning. Step-by-step instructions for setting up this service are documented below:

Google Workspace User and Team Provisioning with Cloud Service

Option 2: Provisioning Users Only

To provision users directly from Google Workspace to Keeper using a direct SCIM integration, follow the guide below (this only provisions users, not groups):

Google Workspace User Provisioning with SCIM

Shibboleth

How to configure Keeper SSO Connect Cloud with Shibboleth for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

Step 1: Export and Save Keeper Metadata File

To obtain your Keeper Metadata file, locate your SSO Connect Cloud Provisioning method within the Keeper Admin Console, and select View. From there you have access to download and save the Keeper metadata file.

Export Keeper Metadata File

Step 2: Adding Keeper Metadata to Shibboleth Identity Provider

The Shibboleth IdP must know some basic information about the Keeper relying party, which is defined in SAML metadata. The easiest way to do this is to add your Keeper metadata file to the IDP_HOME/metadata/ directory.
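For example, on a Linux host (substitute your actual IDP_HOME path; the filename matches the one referenced in the MetadataProvider configuration below):

cp keeper-metadata.xml IDP_HOME/metadata/keeper-metadata.xml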

Step 3: Adding a New Relying Party Trust to Shibboleth Identity Provider

Instruct Shibboleth how to behave when talking to Keeper by defining a new RelyingParty element in IDP_HOME/conf/relying-party.xml. The following snippet should be added just after the DefaultRelyingParty element. Be sure to replace the provider attribute to include your "Entity ID" (use whatever provider is configured in the DefaultRelyingParty).

<RelyingParty id="keepersecurity.com"
        provider="https://keepersecurity.com/api/rest/sso/saml/264325172298110"
        defaultSigningCredentialRef="IdPCredential">
    <ProfileConfiguration xsi:type="saml:SAML2SSOProfile" encryptAssertions="never" encryptNameIds="never" />
</RelyingParty>

Still in the IDP_HOME/conf/relying-party.xml file, configure Shibboleth to use the keeper metadata file you added in Step 2. Add the following MetadataProvider element next to the existing configured provider (it should have an id value of “FSMD”), making sure to replace IDP_HOME with your actual installation path.

<!-- Keeper Metadata -->
<MetadataProvider id="KeeperMD" xsi:type="FilesystemMetadataProvider" xmlns="urn:mace:shibboleth:2.0:metadata"
    metadataFile="IDP_HOME/metadata/keeper-metadata.xml" maintainExpiredMetadata="true" />

Step 4: Configure Attribute Resolver

Keeper requires that you map specific User Attributes to be sent during authentication. Default Keeper SSO Connect Cloud User Attributes are Email, First and Last, as outlined in the table below. Shibboleth’s attribute resolver must be configured to make this data available by modifying IDP_HOME/conf/attribute-resolver.xml.

Your IdP User Attributes → Keeper User Attributes

  • <Email Address> → Email

  • <First Name> → First

  • <Last Name> → Last
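A minimal sketch of how one of these attributes might be released in IDP_HOME/conf/attribute-resolver.xml, assuming a Shibboleth IdP 2.x resolver, an LDAP DataConnector with id "myLDAP", and a source attribute of mail (adjust ids, namespaces and source attributes to your deployment):

<resolver:AttributeDefinition id="KeeperEmail" xsi:type="ad:Simple" sourceAttributeID="mail">
    <resolver:Dependency ref="myLDAP" />
    <resolver:AttributeEncoder xsi:type="enc:SAML2String" name="Email" friendlyName="Email" />
</resolver:AttributeDefinition>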

When configuring Shibboleth identity provider SAML attributes, Keeper expects the "NameIDFormat" in the form of "emailAddress". You can use the suggested "NameIDFormat" or input the correct value for your environment, so long as it provides Keeper the user's email address as the username login identifier.

Step 5: Configure Attribute Filter

Finally, configure the Shibboleth attribute filtering engine to release the principal attribute (encoded as a NameID) to Keeper. Add the following XML snippet to IDP_HOME/conf/attribute-filter.xml alongside the existing policy elements.

<AttributeFilterPolicy>
    <PolicyRequirementRule xsi:type="basic:AttributeRequesterString" value="keepersecurity.com" />

    <AttributeRule attributeID="principal">
        <PermitValueRule xsi:type="basic:ANY" />
    </AttributeRule>
</AttributeFilterPolicy>

Step 6: Obtain the Metadata XML File from Shibboleth

  1. Locate Shibboleth metadata found at "http://shibboleth.example.com/idp/shibboleth" or in the Shibboleth identity provider filesystem in <install_folder>/shibboleth-idp/metadata.

  2. Modify Shibboleth metadata manually and ensure all user endpoints are uncommented (e.g., SingleLogout).

  3. Save the XML file.

Step 7: Upload IdP Metadata to Keeper

Once you have your Shibboleth metadata file ready, head back to the Keeper Admin console, locate your SSO Connect Cloud Provisioning method and select Edit.

Edit SSO Provisioning Method

Scroll down to the Identity Provider section, set IDP Type to GENERIC, select Browse Files and select your Shibboleth Metadata file.

Upload your Metadata File

Still within the Keeper Admin Console, exit the Edit View and select View on your SSO Connect Cloud Provisioning method. Within the Identity Provider section you will find the metadata values for the Entity ID, Single Sign On Service and Single Logout Service Endpoint that are now populated.

Your SSO Application's Metadata

Graphic Assets

If your Shibboleth instance requires an icon or logo file for the Keeper application, please see the Graphic Assets page.

Success! Your Keeper Security SSO Cloud setup is now complete! You may now try logging into Keeper with SSO.

If you find that SSO is not functional, please review your Shibboleth settings, review your metadata file and user attributes for any errors.

Once complete, repeat Step 7 to re-upload the corrected metadata to Keeper.

If you need assistance, please email [email protected].

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin can not move themselves to the SSO enabled node. It requires another admin to perform this action.

After the user is moved to the SSO enabled node, they need to log into the Keeper vault initially by selecting the "Enterprise SSO" pull down and inputting in the Enterprise Domain configured on the SSO integration. The user may get prompted to confirm by entering in the master password.

Initially select 'Enterprise SSO Login'

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

They won't have to enter the Enterprise Domain. If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and ensure that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found here.

Docker on Linux

How to deploy Keeper Automator in a simple docker environment

Docker on Linux

This guide provides step-by-step instructions to publish Keeper Automator on any Linux instance that can run Docker.

Make sure you already have your SSL Certificate! If not, please follow the steps in the Create SSL Certificate page. Save the SSL certificate private keys and .pfx files in the Keeper vault.

Setup

(1) Install Docker

If you don't have Docker installed, set it up per the instructions on your platform. For example, if you use the yum package installer:

sudo yum install docker

Start the Docker service if it's not running

sudo service docker start

Then configure the service to start automatically

sudo systemctl enable docker.service

To allow non-root users to run Docker (and if this meets your security requirements), run this command:

sudo chmod 666 /var/run/docker.sock

(2) Pull the Automator image

Use the docker pull command to get the latest Keeper Automator image.

docker pull keeper/automator

(3) Start the service

Use the command below to start the service. The example below listens on port 443.

docker run -d -p443:443/tcp \
  --name "Keeper-Automator" \
 --restart on-failure:3 \
 keeper/automator

(4) Update the certificate

Inside the docker container, create a "config" folder.

docker exec -it Keeper-Automator mkdir /usr/mybin/config

Copy the ssl-certificate.pfx file created in the Certificate Guide into the container.

docker cp ssl-certificate.pfx \
  Keeper-Automator:/usr/mybin/config/ssl-certificate.pfx

If your .pfx file is protected by a passphrase, you also need to create a file called ssl-certificate-password.txt

echo "my_pfx_password..." > ssl-certificate-password.txt

...and place it into the docker container:

docker cp ssl-certificate-password.txt \
 Keeper-Automator:/usr/mybin/config/ssl-certificate-password.txt

Make sure that the ssl_mode parameter in the keeper.properties file within the container is set to certificate.

docker exec -it Keeper-Automator sed -i 's/^ssl_mode=.*/ssl_mode=certificate/' settings/keeper.properties

(5) Restart the container with the SSL cert

Now that the certificate is installed, restart the Docker container:

docker restart "Keeper-Automator"

(6) Install Keeper Commander

At this point, the service is running but it is not able to communicate with Keeper yet.

On your workstation, server or any computer, install the Keeper Commander CLI. The installation instructions, including binary installers, are here: Installing Keeper Commander. After Commander is installed, you can type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator, or an Admin with the ability to manage the SSO node.

$ keeper shell

My Vault> login [email protected]
.
.
My Vault>

(7) Initialize with Commander

Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create

automator create --name="My Automator" --node="Azure Cloud"

The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.

Automator Create

The output of the command will display the Automator settings, including metadata from the identity provider.

                    Automator ID: 1477468749950
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval

Note that the "URL" is not populated yet. This is the public URL which the Keeper backend will communicate with. For example, automator.mycompany.com.

Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).

automator edit --url https://<application URL> --skill=team --skill=team_for_user --skill=device "My Automator"

Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:

automator setup "My Automator"

Initialize the Automator with the new configuration

automator init "My Automator"

Enable the service

automator enable "My Automator"

At this point, the configuration is complete.

For automated health checks, you can use the below URL:

https://<server>/health

Example:

$ curl https://automator.lurey.com/health
OK

For environments using AD FS ...

When activating Keeper Automator with AD FS as the identity provider, users will not be able to login until you update the Keeper certificate using the instructions below:

  • Login to the Keeper Admin Console

  • Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.

  • Click on "Export SP Cert".

  • In the AD FS Management Console select the Keeper Cloud SSO Relying Party Trust properties.

  • On the "Encryption" tab, replace the old certificate with this new cert.

  • On the "Signature" tab, Add/Replace the new SP certificate with this new cert.

Securing the Service

We recommend restricting network access to the service. Please see the Ingress Requirements page for a list of IP addresses to allow.

Testing the User Experience

Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.

The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.

Service Restart

When you stop/start the Keeper Automator service, the Docker service will automatically retain state.

docker restart "Keeper-Automator"

Upgrading the Container

When a new version of Keeper Automator is available, you can update your Automator service by repeating steps 2-7 above.

For example:

docker pull keeper/automator
docker stop Keeper-Automator
docker rm Keeper-Automator

docker run -d -p443:443/tcp \
  --name "Keeper-Automator" \
 --restart on-failure:3 \
 keeper/automator

docker exec -it Keeper-Automator mkdir /usr/mybin/config

docker cp ssl-certificate.pfx \
  Keeper-Automator:/usr/mybin/config/ssl-certificate.pfx

docker cp ssl-certificate-password.txt \
 Keeper-Automator:/usr/mybin/config/ssl-certificate-password.txt

docker exec -it Keeper-Automator \
  sed -i 's/^ssl_mode=.*/ssl_mode=certificate/' settings/keeper.properties
  
docker restart "Keeper-Automator"

Then, run the Keeper Commander commands:

automator setup "My Automator"
automator init "My Automator"
automator enable "My Automator"

Troubleshooting

Service not starting

Please check the Keeper Automator logs. This usually describes the issue. In the Docker environment, you can tail the log file using this command:

docker logs -f "Keeper-Automator"

Connecting to the container to check the log file is possible using the below command:

docker exec -it "Keeper-Automator" /bin/sh

Windows Service

Keeper Automator sample implementation on a Windows server

The instructions on this page are created for customers who would like to simply run the Automator service on a Windows server without Docker.

Make sure you already have your SSL Certificate! If not, please follow the steps in the Create SSL Certificate page.

(1) Install the Automator Service

On the Automator instance, download, unzip and run the Keeper Automator installer:

https://keepersecurity.com/automator/keeper-automator-windows.zip

In the setup screens, check the "Java" box to ensure that the Java runtime is embedded in the installation. Currently it ships with the Java 17 runtime, and this is updated as new versions are released.

Include Java in Installer

This will install Keeper Automator into:

C:\Program Files\Keeper Security\Keeper Automator\

The configuration and settings will be set up in:

C:\ProgramData\Keeper Automator\

(2) Create the "config" folder

In the C:\ProgramData\Keeper Automator\ folder please create a folder called "config".

(3) Copy the certificate file and password file

Place ssl-certificate.pfx file (from the Custom SSL Certificate page) to the Automator Configuration settings folder in C:\ProgramData\Keeper Automator\Config

If your ssl-certificate.pfx file is protected by a passphrase, you also need to create a file called ssl-certificate-password.txt in the folder C:\ProgramData\Keeper Automator\Config

SSL Certificate File and Password File

(4) Restart the Service

From the Services screen, select Keeper Automator and restart the service.

Start the Keeper Automator Service

Confirm the service is running through a web browser (note that port 443 must be open from whatever device you are testing from). In this case, the URL is: https://automator.company.com/api/rest/status

For automated health checks, you can also use the below URL:

https://automator.company.com/health
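For example, the health endpoint returns a plain OK response when the service is healthy (hostname is a placeholder):

$ curl https://automator.company.com/health
OK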

Windows Firewall

If you are deploying on Windows running Defender Firewall, most likely you will need to open port 443 (or whatever port you specified) on Windows Defender Firewall. Follow these steps:

Open the Start menu > type Windows Defender Firewall, and select it from the list of results. Select Advanced settings on the side navigation menu... Select Inbound Rules. To open a port, select New Rule and complete the instructions.

Here's a couple of screenshots:

Select "Port"
Enter the Port Number
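Alternatively, the inbound rule can be created from an elevated PowerShell prompt (the rule name is arbitrary; adjust the port if you chose a different one):

New-NetFirewallRule -DisplayName "Keeper Automator" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow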

Final Configuration with Commander

Now that the service is running, you can integrate the Automator into your Keeper environment using Keeper Commander.

(5) Install Keeper Commander

On your workstation or server, install the Keeper Commander CLI. The installation instructions, including binary installers, are here: https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup

(6) Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create. Name the Automator whatever you want.
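If you don't already have an authenticated Commander session, open one first (mirroring the example in the Docker on Linux section; the login email shown is the placeholder used throughout this guide):

$ keeper shell

My Vault> login [email protected]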

automator create --name="My Automator" --node="Azure Cloud"

The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.

Automator Create

The output of the command will display the Automator settings, including metadata from the identity provider.

                    Automator ID: 1477468749950
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval

Note that the "URL" is not populated yet. So let's do that next.

Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).

automator edit --url https://<application URL> --skill=team --skill=team_for_user --skill=device "My Automator"

Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:

automator setup "My Automator"

If an error is generated on this step, please stop and start the Windows service, and ensure that the port is available.

Next, initialize the Automator with the new configuration with the command below:

automator init "My Automator"

Lastly, enable the Automator service with the following command:

automator enable "My Automator"

At this point, the configuration is complete.

For environments using AD FS ...

When activating Keeper Automator with AD FS as the identity provider, users will not be able to login until you update the Keeper certificate using the instructions below:

  • Login to the Keeper Admin Console

  • Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.

  • Click on "Export SP Cert".

  • In the AD FS Management Console select the Keeper Cloud SSO Relying Party Trust properties.

  • On the "Encryption" tab, replace the old certificate with this new cert.

  • On the "Signature" tab, Add/Replace the new SP certificate with this new cert.

Testing the User Experience

Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.

The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.

Login Screen
SSO Login
Device Approval
Vault Decryption

Service Updates

When you reconfigure the Keeper Automator service, you'll need to use Keeper Commander to re-initialize the service endpoint.

automator setup "My Automator"
automator init "My Automator"
automator enable "My Automator"

Troubleshooting

Service not starting

Please check the Keeper Automator logs. This usually describes the issue. On Windows, they can be found in C:\ProgramData\Keeper Automator\logs\

Users always getting prompted for approval

When you reinstall the Keeper Automator service, you'll need to use Keeper Commander to re-initialize the service endpoint. (Keeper Commander documentation is linked here).

The commands required on Keeper Commander to re-initialize your Automator instance:

$ keeper shell

My Vault> automator list
288797895952179 My Automator True https://something.company.com 

(find the Name corresponding to your Automator)

My Vault> automator setup "My Automator"
My Vault> automator init "My Automator"
My Vault> automator enable "My Automator"

Automator Video Overview

Microsoft AD FS

How to configure Keeper SSO Connect Cloud with Microsoft AD FS for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

Microsoft AD FS

Obtain Federation Metadata XML

Inside the AD FS Management application, locate the Federation Metadata xml file. This can be found by clicking on AD FS > Service > Endpoints then locate the URL path in the "Metadata" section. The path is typically /FederationMetadata/2007-06/FederationMetadata.xml as seen below:

Download the Metadata

To download the metadata file, load the URL in a browser on the server, for example: https://localhost/FederationMetadata/2007-06/FederationMetadata.xml. Download this file and save it to the computer.
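For example, from PowerShell on the AD FS server (substitute your federation service address if it differs from the example URL above):

Invoke-WebRequest -Uri "https://localhost/FederationMetadata/2007-06/FederationMetadata.xml" -OutFile FederationMetadata.xml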

Import Federation Metadata

From the Keeper Admin Console SSO Cloud configuration screen, select "ADFS" as the IdP type and import the Federation Metadata file saved in the previous step.

Export Keeper Metadata

Go back to the Provisioning screen and click on View.

Next, download the Keeper metadata file so it can be imported during the Relying Party Trust Wizard. Navigate to the view screen of the Keeper SSO Connect Cloud™ provisioning.

Click the "Export Metadata" button to download the config.xml file. This will be used in a few steps ahead.

Finish AD FS Configuration

Important: Keeper's Cloud SSO SP Certificate is only valid for a year. On an annual basis, you will need to download the latest Keeper SP Cert from the Admin Console and upload this into the Relying Trust Party settings in AD FS.

Keeper notifies all affected customers when the certificate expiration is coming soon.

Create Relying Trust Party

Create Keeper SSO Connect as a Relying Party Trust:

Import Keeper Metadata

Import the Keeper Metadata file that was exported previously from Keeper SSO Connect Cloud view screen by completing the Relying Party Trust Wizard as seen in the steps below.

Select "Claims aware" in the Welcome screen and then select the metadata file saved from Keeper.

To prevent a logout error, change the SAML Logout Endpoints on the Relying Party Trust to: https://<YourADFSserverDomain>/adfs/ls/?wa=wsignout1.0

Create Claim Issuance Policy Rules

To map attributes between AD FS and Keeper, you need to create a Claim Issuance Policy with Send LDAP Attributes as Claims and map the LDAP attributes to Keeper Connect attributes.

Important: Ensure that 3 attributes ("First", "Last" and "Email") are configured with the exact spelling as seen above.

For Logout support we need to add two more Claim Issuance Policy rules:

To copy the syntax to add in the claims rule, copy the following text and paste it into the custom rule:

  • Incoming claim type: http://mycompany/internal/sessionid

  • Outgoing claim type: Name ID

  • Outgoing name ID format: Transient Identifier

SAML Signing Configuration

a. Open PowerShell as Administrator on the AD FS server.

b. Identify your SSO Connect Relying Party Trust "Identifier" string.
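You can obtain it with the standard AD FS PowerShell cmdlet (the same cmdlet referenced in the verification note further below):

Get-AdfsRelyingPartyTrust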

Running this command will generate a long list of output; look for the SSO Connect section and its "Identifier" string. It will look similar to https://keepersecurity.com/api/rest/sso/saml/<numeric ID>.

c. Run the below command, replacing <Identifier> with the string found in step (b).
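A presumed form of that command, using the documented -SamlResponseSignature setting (verify the parameters against your AD FS version):

Set-AdfsRelyingPartyTrust -TargetIdentifier "<Identifier>" -SamlResponseSignature MessageAndAssertion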

If you run Get-ADFSRelyingPartyTrust again, you'll see that the SamlResponseSignature section is set to "MessageAndAssertion".

Restart AD FS services

From the services manager, restart AD FS service.

SAML assertion signing must be configured properly on your AD FS environment. If signing has not been configured, you will need to set this up, then exchange metadata again between AD FS and Keeper SSO Connect after the re-configuration.

Troubleshooting

If you need to disable certificate validation on the IdP for testing purposes or for internal PKI certificates, you can use the below Powershell commands. Replace <Identifier> with the string found in the "SAML Signing Configuration" instructions above.
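Commands along the lines below disable revocation checking for the relying party trust's signing and encryption certificates (these are standard Set-AdfsRelyingPartyTrust parameters; verify against your AD FS version):

Set-AdfsRelyingPartyTrust -TargetIdentifier "<Identifier>" -SigningCertificateRevocationCheck None
Set-AdfsRelyingPartyTrust -TargetIdentifier "<Identifier>" -EncryptionCertificateRevocationCheck None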

Note: Any changes made to signing configuration may require exchange of XML metadata between IdP and SSO Connect.

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin can not move themselves to the SSO enabled node. It requires another admin to perform this action.

After the user is moved to the SSO enabled node, they need to log into the Keeper vault initially by selecting the "Enterprise SSO" pull down and inputting in the Enterprise Domain configured on the SSO integration. The user may get prompted to confirm by entering in the master password.

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

They won't have to enter the Enterprise Domain. If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and ensure that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found here.

Other SAML 2.0 Providers

How to configure Keeper SSO Connect Cloud with your SSO Identity Provider for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

Keeper is compatible with any SAML 2.0 SSO Identity Provider (IdP). If your identity provider is not in our list, you can follow the steps in this guide to complete the configuration. Keeper is a Service Provider (SP) in this configuration.

Step 1: Configure your Identity Provider

You'll need to provide some information about Keeper SSO Connect Cloud to your Identity Provider application such as:

  • Entity ID

  • IDP Initiated Login

  • Assertion Consumer Service (ACS) Endpoint

  • Single Logout Service (SLO) Endpoint

  • SP Metadata file or the Keeper SP Certificate file.

To obtain this information, locate your SSO Connect Cloud Provisioning method within the Keeper Admin Console, and select View. From there you have access to download the Keeper metadata file, service provider (SP) certificate file as well as the direct URLs and configuration information (if your identity provider application does not support uploading of the metadata file).

Refer to your identity provider application's configuration guide for instructions on how to upload service provider metadata and/or manually input the required SAML response configuration fields.

Step 2: Obtain your IdP Metadata

To import your IdP Metadata into Keeper, you will need to have a properly formatted metadata file. If your SSO Identity Provider Application has the ability to export its metadata file, this would be the most expedient and preferred method to import your metadata into your Keeper SSO Connect Cloud Provisioning method.

If you do not have the ability to export / download your metadata file from your identity provider, please create a properly formatted metadata file. Refer to your SSO application's configuration guide for instructions.

An example/template of a simple identity provider metadata.xml file for Keeper SSO Connect Cloud is shown earlier in this guide (the sample md:EntityDescriptor document). If you need this example/template to get started, copy, paste and modify it, adding any other fields required for your IdP, in your preferred .xml or .txt editor.

Please DO NOT remove any fields as this example contains the minimum required fields to connect your SSO application to Keeper.

Step 3: Map User Attributes

Keeper requires that you map specific user attributes to be sent during authentication. The default Keeper SSO Connect Cloud user attributes are Email, First and Last (mapped from your IdP's email address, first name and last name attributes). Ensure your identity provider's user attributes are lined up with Keeper's attributes. Refer to your identity provider's configuration guide for instructions.

Step 4: Upload IdP Metadata to Keeper

Once you have completed creating your identity provider metadata file, or if you have downloaded the identity provider metadata file, head back to the Keeper Admin console, locate your SSO Connect Cloud Provisioning method and select Edit.

Scroll down to the Identity Provider section, set IDP Type to GENERIC, select Browse Files and select the Metadata file you created.

Still within the Keeper Admin Console, exit the Edit View and select View on your SSO Connect Cloud Provisioning method. Within the Identity Provider section you will find the metadata values for the Entity ID, Single Sign On Service and Single Logout Service Endpoint that are now populated.

Graphic Assets

If your identity provider requires an icon or logo file for the application, please see the Graphic Assets page.

Success! Your Keeper Security SSO Cloud setup is now complete! You may now try logging into Keeper with SSO.

If you find that your application is not functional, please review your identity provider application settings and review your metadata file and user attributes for any errors.

Once complete, repeat Step 4.

If you need assistance, please email [email protected].

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin can not move themselves to the SSO enabled node. It requires another admin to perform this action.

After the user is moved to the SSO enabled node, they need to log into the Keeper vault initially by selecting the "Enterprise SSO" pull down and inputting in the Enterprise Domain configured on the SSO integration. The user may get prompted to confirm by entering in the master password.

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

They won't have to enter the Enterprise Domain. If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and ensure that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found here.

Beyond Identity

How to configure Keeper SSO Connect Cloud with Beyond Identity for Passwordless login to Keeper.

Configure Keeper for Beyond Identity Integration

Please complete the steps in the Admin Console Configuration section first.

Visit the Keeper Admin Console and login as the Keeper Administrator:

  • https://keepersecurity.com/console (US / Global)

  • https://keepersecurity.eu/console (EU-hosted customers)

  • https://keepersecurity.com.au/console (AU-hosted customers)

  • https://govcloud.keepersecurity.us/console (GovCloud customers)

Note: Passwordless integration can only be applied to specific nodes (e.g. organizational units) within your Admin Console.

1) Click on the Admin tab and click Add Node

2) Name the node and click Add Node

3) From the Provisioning tab, click Add Method

4) Select Single Sign-On with SSO Connect™ Cloud and click Next

5) Enter your Configuration Name and Enterprise Domain, then click Save. Take note of the Enterprise Domain. This will be used later for Enterprise SSO login.

6) The newly-created SAML 2.0 with Cloud SSO Connect provisioning method will be visible. Select View from the menu.

These items will be used when configuring Beyond Identity later in the documentation.

7) Note the Entity ID, Assertion Consumer Service (ACS) Endpoint and Single Logout Service Endpoint

8) Click Export SP Cert

Configure Beyond Identity

1) Download and install the Beyond Identity Authenticator for your device.

2) Log into the Beyond Identity Admin Console.

Instructions for registering and using Beyond Identity can be found in the Beyond Identity documentation.

Create Keeper Integration in Beyond Identity

3) From your Beyond Identity Admin Console, select Integrations from the left-hand navigation.

4) Click the SAML tab.

5) Click Add SAML Connection.

6) In the Edit SAML Connection dialog, use the following table to determine values to enter:

Beyond Identity Field
Value to Use

7) In the Attribute Statements section, add the following two attributes:

Name
Name Format
Value

8) Click Save Changes.

9) Click the Download Metadata icon </> to download the XML metadata for use in the Keeper Admin Console.

10) Return to the Keeper Admin Console

11) Click Edit on the Beyond Identity provisioning method to view the configuration settings.

12) Optionally enable Just-In-Time Provisioning to allow users to create accounts in the node by typing in the Enterprise Domain name when signing up.

13) Under SAML Metadata, upload the metadata.xml file downloaded from the Beyond Identity Admin Console.

User Provisioning

Instructions on how to provision users with SSO Connect Cloud can be found here.

End User Login

Users may login either using their enterprise domain or email address.

Login Using Email Address on desktop with Beyond Identity Authenticator installed

1) Navigate to the Keeper Vault

2) Enter your email address and click Next

3) You will now be logged in to your Keeper vault

Login Using Enterprise Domain on desktop with Beyond Identity Authenticator installed

1) Navigate to the Keeper Vault

2) Click the Enterprise SSO Login dropdown and select Enterprise Domain

3) Enter the Enterprise Domain name you specified in the Keeper portion of this walkthrough and click Connect

4) You will now be logged in to your Keeper vault

Login Using Enterprise Domain with Beyond Identity installed for iOS or Android

1) Navigate to the Keeper Vault

2) Tap Use Enterprise SSO Login dropdown

3) Enter the Enterprise Domain you specified in the Keeper portion of this walkthrough and tap Connect

4) Accept the push notification from the Beyond Identity App

5) You will now be logged in to your Keeper vault

Login Using Email Address with Beyond Identity installed for iOS or Android

1) Open the Keeper App

2) Enter your email address and click Next

3) Accept the push notification from the Beyond Identity App

4) You will now be logged in to your Keeper vault

Java on Linux

Keeper Automator sample implementation using standalone Java service

This guide provides step-by-step instructions to publish Keeper Automator as a standalone Java service on any Linux instance.

Make sure you already have your SSL Certificate! If not, please follow the steps in the Create SSL Certificate page.

Standalone Java Service

(1) Install Java

In preparation for running the service, ensure that at least Java 17 is installed. On a standard Amazon AWS Linux 2 instance, the Amazon Corretto Java 17 SDK can be installed using the command below:
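For example (the exact package name may vary by distribution and Corretto release):

sudo yum install -y java-17-amazon-corretto-devel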

To check which version is running, type:
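java -version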

(2) Install the Service

From the Automator instance, download and unzip the Keeper Automator service:

(3) Create the config folder

If the folder does not exist, create a "config" folder in the extracted location.

(4) Copy the .pfx and password file

Upload the .pfx file created in the Create SSL Certificate page to the Automator's config/ folder and make sure the filename is ssl-certificate.pfx.

For example, using scp:
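The hostname and destination path below are placeholders for your environment:

scp ssl-certificate.pfx user@automator-host:/path/to/automator/config/ssl-certificate.pfx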

If your ssl-certificate.pfx file is protected by a passphrase, you also need to create a file called ssl-certificate-password.txt and place it into the config/ folder.

For example:
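echo "my_pfx_password..." > config/ssl-certificate-password.txt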

(5) Edit Service Settings

The file called keeper.properties located in the settings/ folder is used to manage the advanced configuration parameters of the service. Common parameters that may need to be edited include:

  • automator_host

  • automator_port

  • ssl_certificate

See the Automator documentation for details on each parameter.

(6) Start the Service

From the Automator instance, start the service using java -jar. In this example below, it is run in the background using nohup.
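The jar filename below is a placeholder; use the actual jar name from the extracted Automator package:

nohup java -jar keeper-automator.jar &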

On the Windows command line or PowerShell, run the equivalent java -jar command in the foreground (without nohup).

(7) Check Service Status

Confirm the service is running through a web browser (note that port 443 must be open from whatever device you are testing from). In this case, the URL is: https://<server>/health

This URL can also be used for automated health checks.

Example:
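$ curl https://<server>/health
OK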

Now that the service is running, you need to integrate the Automator into your environment using Keeper Commander.

Final Configuration with Commander

Keeper Commander is required to perform the final step of Automator configuration. It can be run from anywhere; it does not need to be installed on the server.

On your workstation or server, install the Keeper Commander CLI. The installation instructions, including binary installers, are here: https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup. After Commander is installed, you can type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator, or an Admin with the ability to manage the SSO node.

Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create
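For example (matching the command used in the other installation sections of this guide):

automator create --name="My Automator" --node="Azure Cloud"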

The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.

The output of the command will display the Automator settings, including metadata from the identity provider.

Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).

Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:
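automator setup "My Automator"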

Next, send other IdP metadata to the Automator:
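In the other installation sections of this guide, this initialization step is performed with automator init:

automator init "My Automator"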

Enable the Automator service
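automator enable "My Automator"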

At this point, the configuration is complete.

For environments using AD FS ...

When activating Keeper Automator with AD FS as the identity provider, users will not be able to login until you update the Keeper certificate using the instructions below:

  • Login to the Keeper Admin Console

  • Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.

  • Click on "Export SP Cert".

  • In the AD FS Management Console select the Keeper Cloud SSO Relying Party Trust properties.

  • On the "Encryption" tab, replace the old certificate with this new cert.

  • On the "Signature" tab, Add/Replace the new SP certificate with this new cert.

Securing the Service

We recommend restricting network access to the service. Please see the Ingress Requirements page for a list of IP addresses to allow.

Testing the User Experience

Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.

The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.

Service Restart

When you stop/start the Keeper Automator service, or if you restart the server, you may need to use Keeper Commander to re-initialize the service endpoint.

Troubleshooting

Service not starting

Please check the Keeper Automator logs; they usually describe the issue. On Linux, the logs are located in the install directory.

Users always getting prompted for approval

When you reconfigure the Keeper Automator service, you may need to use Keeper Commander to re-initialize the service endpoint (see the Keeper Commander documentation).

The commands required on Keeper Commander to re-initialize your Automator instance are below:
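$ keeper shell

My Vault> automator list
288797895952179 My Automator True https://something.company.com 

(find the Name corresponding to your Automator)

My Vault> automator setup "My Automator"
My Vault> automator init "My Automator"
My Vault> automator enable "My Automator"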

Docker Compose

Installation of Keeper Automator using the Docker Compose method

This guide provides step-by-step instructions to publish Keeper Automator on any Linux instance that can run Docker and Docker Compose.

Make sure you already have your SSL Certificate! If not, please follow the steps on the Create SSL Certificate page.

Docker Compose benefits over standard Docker:

  • Data is preserved between container updates

  • Future updates are simple to install and maintain

Instructions for installing Automator using the Docker Compose method are below.

(1) Install Docker and Docker Compose

Instructions for installing Docker and Docker Compose vary by platform. Please refer to the official documentation: https://docs.docker.com/compose/install/

On Linux, a few quick setup steps are typically needed after installing Docker and Docker Compose; these are outlined below.

Note: On Linux you may use docker-compose instead of docker compose.

After installing, you may still need to start the Docker service, if it's not running.
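sudo service docker start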

Then configure the service to start automatically
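sudo systemctl enable docker.service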

To allow non-root users to run Docker (and if this meets your security requirements), run this command:
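sudo chmod 666 /var/run/docker.sock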

(2) Create docker-compose.yml file

Save the snippet below as the file docker-compose.yml on your server, in the location where you will be executing docker compose commands.
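name: keeper-automator
services:
  automator:
    container_name: "automator"
    environment:
      - AUTOMATOR_PORT=443
      - AUTOMATOR_HOST=localhost
      - SSL_MODE=certificate
    restart: on-failure
    image: "keeper/automator:latest"
    ports:
      - 8089:443
    volumes:
      - automatordata:/usr/mybin/config
volumes:
  automatordata: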

(3) Install the Container and Start it up
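From the directory containing docker-compose.yml, pull the image and start the container:

docker compose pull
docker compose up -d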

(4) Copy the SSL Certificate and password file created on the Create SSL Certificate page
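For example, using docker cp:

docker cp ssl-certificate.pfx automator:/usr/mybin/config/
docker cp ssl-certificate-password.txt automator:/usr/mybin/config/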

(5) Restart the service with the new cert
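docker compose restart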

(6) Install Keeper Commander

At this point, the service is running but it is not able to communicate with Keeper yet.

On your workstation, server or any computer, install the Keeper Commander CLI. This is just used for initial setup. The installation instructions, including binary installers, are here: https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup. After Commander is installed, you can type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator or an Admin with the ability to manage the SSO node.

(7) Initialize with Commander

Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create
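automator create --name="My Automator" --node="Azure Cloud"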

The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.

The output of the command will display the Automator settings, including metadata from the identity provider.
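Example output:

                    Automator ID: 1477468749950
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval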

Note that the "URL" is not populated yet. Edit the URL with the FQDN you selected.

Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).

Next, exchange keys: the enterprise private key, encrypted with the Automator public key, is provided to the Automator:
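automator setup "My Automator"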

Initialize the Automator with the new configuration
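automator init "My Automator"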

Enable the service
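automator enable "My Automator"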

At this point, the configuration is complete.

For automated health checks, you can use the below URL:

https://<server>/health

Example:
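(The hostname below is an example; use your own Automator FQDN.)

curl https://automator.lurey.com/health
OK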

Monitoring Logs

The Automator logs can be monitored by using the Docker Compose command:
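docker compose logs -f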

For environments using AD FS ...

When activating Keeper Automator with AD FS as the identity provider, users will not be able to login until you update the Keeper certificate using the instructions below:

  • Login to the Keeper Admin Console

  • Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.

  • Click on "Export SP Cert".

  • In the AD FS Management Console select the Keeper Cloud SSO Relying Party Trust properties.

  • On the "Encryption" tab, replace the old certificate with this new cert.

  • On the "Signature" tab, Add/Replace the new SP certificate with this new cert.

Securing the Service

We recommend restricting network access to the service. Please see the Ingress Requirements section for a list of IP addresses to allow.

Updating

When a new version of Automator is available, updating the container is the only requirement.
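For example, pull the latest image and recreate the container:

docker compose pull
docker compose up -d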

Testing the User Experience

Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.

The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.

Security and User Flow

Technical description of Keeper SSO Connect Cloud

Zero-Knowledge Architecture

Keeper is a Zero Knowledge security provider. Zero Knowledge is a system architecture that guarantees the highest levels of security and privacy by adhering to the following principles (in the SSO Cloud model):

  • Data is encrypted and decrypted at the device level (not on the server)

  • The application never stores plain text (human readable) data

  • The server never receives data in plain text

  • No Keeper employee or 3rd party can view the unencrypted data

  • The keys to decrypt and encrypt data are controlled by the user (and the Enterprise Administrator)

  • Multi-Layer encryption provides access control at the user, group and admin level

  • Sharing of data uses Public Key Cryptography for secure key distribution

Data is encrypted locally on the user’s device before it is transmitted and stored in Keeper’s Cloud Security Vault. When data is synchronized to another device, the data remains encrypted until it is decrypted on the other device.

Keeper is the most secure, certified, tested and audited password security platform in the world. We are the only SOC 2 and ISO 27001 certified password management solution in the industry and Privacy Shield Compliant with the U.S. Department of Commerce's EU-U.S. Privacy Shield program, meeting the European Commission's Directive on Data Protection. Not only do we implement the most secure levels of encryption, we also adhere to very strict internal practices that are continually audited by third parties to help ensure that we continue to develop secure software and provide the world’s most secure cybersecurity platform.

Encryption Model for SSO Connect Cloud

Keeper SSO Connect Cloud provides Keeper Enterprise customers with a method of authenticating a user and decrypting stored data in a zero-knowledge encrypted vault, with authentication provided through a 3rd party identity provider (IdP) utilizing standard SAML 2.0 protocols in a fully cloud environment.

In this implementation, a user can authenticate through their SSO identity provider and then decrypt the ciphertext of their vault locally on their device. Each device has its own EC (Elliptic Curve) public/private key pair and encrypted data key. Each user has their own Data Key. To sign into a new device, the user must utilize existing devices to perform an approval or an administrator with the privilege can approve a new device.

The importance of this new capability is that the user can decrypt their vault using an encrypted key stored in the Keeper cloud. Zero knowledge is preserved because the Keeper cloud is unable to decrypt the user's Data Key on their device. The Data Key ("DK") of the user is decrypted with the device private key ("DPRIV"), and the Encrypted Data Key ("EDK") is only provided to the user upon successful authentication from their designated identity provider (e.g. Okta, Azure, AD FS).

Security for SSO Connect Cloud

For SSO Connect Cloud users, an Elliptic Curve private key is generated and stored locally on each device. For Chromium-based web browsers, the Keeper Vault stores the local device EC private key ("DPRIV") as a non-exportable CryptoKey. On iOS and Mac devices, the key is stored in the device KeyChain. Where available, Keeper utilizes secure storage mechanisms.

The Device Private Key is not directly utilized to encrypt or decrypt vault data. Upon successful authentication from the Identity Provider, a separate key (that is not stored) is utilized for decryption of the vault data. Offline extraction of the local Device Private Key cannot decrypt a user's vault.

Different devices and platforms have varying levels of security, so to provide optimal security we recommend using an up-to-date Chromium-based web browser.

As general protection against compromised device attacks, we also recommend that all devices (such as desktop computers) are protected with disk-level encryption and up-to-date anti-malware software.

Device Approval

To sign into a new device, the user must utilize existing devices to perform an approval or an administrator with the privilege can approve a new device. New devices generate a new set of public/private keys, and the approving device encrypts the user's data key with the public key of the new device. The new device’s encrypted data key (EDK) is provided to the requesting user/device and then the user is able to decrypt their data key, which then decrypts the user's vault data. Within the decrypted vault data the user can decrypt their other private encryption keys such as record keys, folder keys, team keys, etc.

The importance of this capability is that the user can decrypt their vault using an encrypted key stored by the Keeper cloud, and does not require any on-prem or user-hosted application services to manage the encryption keys. Zero knowledge is preserved because the Keeper cloud is unable to decrypt the user's Data Key on their device. The Data Key of the user is decrypted with the device private key (DPRIV), and the EDK is only provided to the user upon successful authentication from their designated identity provider (e.g. Okta, Azure, AD FS).

From an administrator's perspective, the benefits are: easy setup and no required hosted software to manage encryption keys as described in Keeper's current SSO Connect encryption model.

The only workflow change in this model (compared to on-prem implementation of Keeper SSO Connect) is that the user must perform new device approval on an active device, or delegate the responsibility to a Keeper Administrator to perform device approval.

Login Flows

Keeper SSO Connect Cloud supports both SP-initiated and IdP-initiated login flows, described below.

1) SP-initiated Login (using "Enterprise Domain")

  • From Keeper, user types in the "Enterprise Domain" on the vault login screen

  • Keeper retrieves the configured SAML Login URL for the Keeper SSO Cloud instance (for example, https://keepersecurity.com/api/rest/sso/saml/login/12345678)

  • User is redirected to the SAML Login URL

  • Keeper sends an encoded SAML request to the IdP with the Entity ID and our public key, along with a "Relay State" which identifies the session.

  • User signs into the IdP login screen as usual

  • After successful sign-in to the IdP, the user is redirected back to Keeper at the pre-defined "ACS URL" (this can be via "Redirect" or "Post", depending on the IdP configuration).

  • The SAML message from the IdP to Keeper contains a signed assertion that validates the user has successfully authenticated at the IdP. Keeper validates the signed assertion.

  • SAML Attributes "First", "Last" and "Email" are provided by the IdP to Keeper.

  • Keeper SSO Connect Cloud redirects the user to the vault

  • If the user's device is not recognized, Keeper performs device verification (via "Keeper Push" or "Admin Approval")

  • After successful device verification and key exchange, Keeper provides user with Encrypted Data Key

  • User decrypts their data key locally with their Device Private Key

  • User decrypts their vault with their Data Key

2) SP-initiated Login (using Email)

  • From Keeper's vault login screen, user types in their email address

  • If the user is using a verified device, the email is looked up and converted into a SAML Login URL

  • If the device is not recognized, Keeper looks at the domain portion (@company.com) and retrieves the configured SAML Login URL for the Keeper SSO Cloud instance (for example, https://keepersecurity.com/api/rest/sso/saml/login/12345678)

  • User is redirected to the Keeper Login URL

  • Same steps as SP-initiated Login are followed.

3) IdP-initiated Login

  • User logs into the Identity Provider website (e.g. https://customer.okta.com)

  • From the identity provider portal, the user clicks on the Keeper icon

  • The user is redirected to Keeper at the pre-defined "ACS URL" (this can be via "Redirect" or "Post", depending on the IdP configuration).

  • The SAML message from the IdP to Keeper contains a signed assertion that validates the user has successfully authenticated at the IdP. Keeper validates the signed assertion with the IdP's public key and we ensure that the assertion has not been tampered with. Keeper also verifies the message is signed with our public key.

  • SAML Attributes "First", "Last" and "Email" are provided by the IdP to Keeper.

  • Keeper SSO Connect Cloud redirects the user to the vault

  • If the user's device is not recognized, Keeper performs device verification (via Keeper Push or Admin Approval)

  • After successful device verification and key exchange, Keeper provides user with Encrypted Data Key

  • User decrypts their data key locally with their Device Private Key

  • User decrypts their vault with their Data Key

Additional Security Details

To learn more about the Keeper Encryption Model, see: https://docs.keeper.io/enterprise-guide/keeper-encryption-model

Security Model Questions and Answers

Q: When an admin approves a new user device via the Keeper Web Console, how is the user’s encrypted data key transferred to the new device?
A: Each device has a unique Elliptic Curve (EC) public/private key pair generated locally on the device. The public key is stored on the server. When the user requests a device approval, the new device public key is sent to the server. An Admin who has "device approval" permissions has the ability to decrypt the user's Data Key during device approval processing. When the admin reviews and approves the device, the user's Data Key (DK) is re-encrypted with the new device's EC public key, and the encrypted Data Key is stored on the server associated with that user's device and sent to the new device. The new device decrypts the Data Key with the device's EC private key.

Q: Is the data key decrypted in order to encrypt it with the new device’s key?
A: The Admin decrypts the data key in memory and re-encrypts it with the new device's public key inside the Admin Console when performing an approval. After the user signs into SSO, the server verifies the attestation and the encrypted Data Key is provided to the new device. The device then decrypts the Data Key with the local EC private key. Every time the user logs in and verifies the attestation from the IdP, the encrypted key is provided to the device, decrypted in memory, and then used for encrypting/decrypting record keys, folder keys, etc.

Q: Where is the data key when it is in its decrypted state?
A: It's never stored in a decrypted state. The Encrypted Data Key is stored in the cloud, encrypted with the device public keys. So if the user has 10 devices, we are storing 10 encrypted Data Keys, each encrypted with one of the device public keys. The re-encryption of the Data Key always takes place locally on the device, either by the user or by the Admin, to preserve zero knowledge.

Q: For the Automator approval of a new user device, where does the crypto operation on the user's data key happen?
A: The Automator runs the exact same process: it decrypts the user's Data Key at the time of the request, verifies the attestation, re-encrypts the Data Key with the new device's EC public key, then transfers the encrypted Data Key to the user's device.

Q: What happens if a user has data encrypted in their vault, but has no available devices to perform the sharing of the user's data key?
A: The Automator and the Admin can always perform a device approval if the user loses all of their devices.

Q: Do new and old user devices both have to be online to add a new device?
A: No, the approval can occur asynchronously.

Q: If the data key is only ever decrypted on a device, doesn't the old device need to be online to encrypt the data key with the new device’s public key?
A: The entire approval process can be performed in real time or in separate steps. The apps will prompt users upon login for the approval, encrypt the data key with the new public key, and send it up to the server. Here's a video of the process:

SAML connection settings (as configured in the identity provider):

Name: Display Name for your SAML Connection
SP Single Sign On URL: Assertion Consumer Service (ACS) Endpoint value from Keeper Admin Console
SP Audience URI: Entity ID from Keeper Admin Console
Name ID format: emailAddress
Subject User Attribute: Email
Request Binding: HTTP POST
Authentication Context Class: X509
Signed Response: Signed toggled On
X509 Signing Certificate: SP Cert exported from Keeper Admin Console

Attribute statements (name format: unspecified):

Email: {{Email}}
First: {{DisplayName}}

AD FS custom claim rule (Create Opaque Persistent ID):

c1:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname"]
 && c2:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant"]
 => add(store = "_OpaqueIdStore", types = ("http://mycompany/internal/sessionid"), query = "{0};{1};{2};{3};{4}", param = "useEntropy", param = c1.Value, param = c1.OriginalIssuer, param = "", param = c2.Value);

PowerShell commands for the Keeper relying party trust:

Get-ADFSRelyingPartyTrust
Set-ADFSRelyingPartyTrust -TargetIdentifier <Identifier> -samlResponseSignature MessageAndAssertion
Set-ADFSRelyingPartyTrust -TargetIdentifier <Identifier> -EncryptionCertificateRevocationCheck None
Set-ADFSRelyingPartyTrust -TargetIdentifier <Identifier> -SigningCertificateRevocationCheck None
<?xml version="1.0" encoding="UTF-8"?>
<md:EntityDescriptor entityID="MySSOApp" xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata">
    <md:IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol" WantAuthnRequestsSigned="true">
        <md:KeyDescriptor use="signing">
            <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                <ds:X509Data>
                    <ds:X509Certificate>MIIDpDCCAoygAwIBAgIGAW2r5jDoMA0GCSqGSIb3DQEBCwUAMIGSMQswCQYDVQQGEwJVUzETMBEG
                        A1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzENMAsGA1UECgwET2t0YTEU
                        MBIGA1UECwwLU1NPUHJvdmlkZXIxEzARBgNVBAMMCmRldi0zODk2MDgxHDAaBgkqhkiG9w0BCQEW
                        DWluZm9Ab2t0YS5jb20wHhcNMTkxMDA4MTUwMzEyWhcNMjkxMDA4MTUwNDEyWjCBkjELMAkGA1UE
                        BhMCVVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiqGcmFuY2lzY28xDTALBgNV
                        BAoMBE9rdGExFDASBgNVBAsMC1NTT1Byb3ZpZGVyMRMwEQYDVQQDDApkZXYtMzg5NjA4MRwwGgYJ
                        KoZIhvcNAQkBFg1pbmZvQG9rdGEuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
                        hr4wSYmTB2MNFuXmbJkUy4wH3vs8b8MyDwPF0vCcjGLl57etUBA16oNnDUyHpsY+qrS7ekI5aVtv
                        a9BbUTeGv/G+AHyDdg2kNjZ8ThDjVQcqnJ/aQAI+TB1t8bTMfROj7sEbLRM6SRsB0XkV72Ijp3/s
                        laMDlY1TIruOK7+kHz3Zs+luIlbxYHcwooLrM8abN+utEYSY5fz/CXIVqYKAb5ZK9TuDWie8YNnt
                        7SxjDSL9/CPcj+5/kNWSeG7is8sxiJjXiU+vWhVdBhzkWo83M9n1/NRNTEeuMIAjuSHi5hsKag5t
                        TswbBrjIqV6H3eT0Sgtfi5qtP6zpMI6rxWna0QIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQBr4tMc
                        hJIFN2wn21oTiGiJfaxaSZq1/KLu2j4Utla9zLwXK5SR4049LMKOv9vibEtSo3dAZFAgd2+UgD3L
                        C4+oud/ljpsM66ZQtILUlKWmRJSTJ7lN61Fjghu9Hp+atVofhcGwQ/Tbr//rWkC35V3aoQRS6ed/
                        QKmy5Dnx8lc++cL+goLjFVr85PbDEt5bznfhnIqgoPpdGO1gpABs4p9PXgCHhvkZSJWo5LobYGMV
                        TMJ6/sHPkjZ+T4ex0njzwqqZphiD9jlVcMR39HPGZF+Y4TMbH1wsTxkAKOAvXt/Kp77jdj+slgGF
                        gRfaY7OsPTLYCyZpEOoVtAyd5i6x4z0c</ds:X509Certificate>
		             </ds:X509Data>
            </ds:KeyInfo>
	      </md:KeyDescriptor>
	      <md:NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress</md:NameIDFormat>
        <md:SingleSignOnService Location="https://sso.mycompany.com/saml2/keepersecurity"
	            Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"/>
        <md:SingleSignOnService Location="https://sso.mycompany.com/saml2/keepersecurity"
	            Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"/>
    </md:IDPSSODescriptor>
</md:EntityDescriptor>

Key elements of the IdP metadata file:

EntityDescriptor: This is the Entity ID, sometimes referred to as "Issuer", and the unique name for your IdP application.

X509Certificate: This is the X509 Certificate, used by Keeper, to validate the signature on the SAML response sent by your Identity Provider.

NameIDFormat: This defines the name identifier format used when logging into Keeper. Keeper supports urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress or urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified.

SingleSignOnService "POST": This is your identity provider's "POST" binding used as a response to a request from Keeper.

SingleSignOnService "Redirect": This is your identity provider's "Redirect" binding used as a response to a request from Keeper.

Attribute mapping (Your IdP User Attributes -> Keeper User Attributes):

<Email Address> -> Email
<First Name> -> First
<Last Name> -> Last

[ec2-user@xxx ~]$ sudo yum install -y java-17-amazon-corretto-devel
[ec2-user@xxx ~]$ java --version
mkdir automator
cd automator/
wget https://keepersecurity.com/automator/keeper-automator.zip
unzip keeper-automator.zip
mkdir keeper-automator/config
scp -i xxx.pem ssl-certificate.pfx \
  ec2-user@xxx:/home/ec2-user/automator/keeper-automator/config/
echo "my_pfx_password..." > ssl-certificate-password.txt

scp -i xxx.pem ssl-certificate-password.txt \
  ec2-user@xxx:/home/ec2-user/automator/keeper-automator/config/
cd automator/
nohup java -jar keeper-automator.jar &
start "" /B javaw -jar "keeper-automator.jar"
curl https://automator.lurey.com/health
OK
$ keeper shell

My Vault> login [email protected]

  _  __  
 | |/ /___ ___ _ __  ___ _ _ 
 | ' </ -_) -_) '_ \/ -_) '_|
 |_|\_\___\___| .__/\___|_|
 v16.1.10     |_|

 password manager & digital vault

Logging in to Keeper Commander

SSO user detected. Attempting to authenticate with a master password.
(Note: SSO users can create a Master Password in Web Vault > Settings)

Enter password for [email protected]
Password: 
Successfully authenticated with Master Password
Syncing...
Decrypted [58] record(s)

My Vault>
My Vault> automator create --name="My Automator" --node="Azure Cloud"
                    Automator ID: 1477468749950
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval
automator edit --url https://<application URL> --skill=team --skill=team_for_user --skill=device "My Automator"
automator setup "My Automator"
automator init "My Automator"
automator enable "My Automator"
automator setup "My Automator"
automator init "My Automator"
automator enable "My Automator"
$ keeper shell

My Vault> automator list
288797895952179 My Automator True https://something.company.com 

(find the Name corresponding to your Automator)

My Vault> automator setup "My Automator"
My Vault> automator init "My Automator"
My Vault> automator enable "My Automator"
sudo service docker start
sudo systemctl enable docker.service
sudo chmod 666 /var/run/docker.sock
name: keeper-automator
services:
  automator:
    container_name: "automator"
    environment:
      - AUTOMATOR_PORT=443
      - AUTOMATOR_HOST=localhost
      - SSL_MODE=certificate
    restart: on-failure
    image: "keeper/automator:latest"
    ports:
      - 8089:443
    volumes:
      - automatordata:/usr/mybin/config
volumes:
  automatordata:
docker compose pull
docker compose up -d
docker cp ssl-certificate.pfx automator:/usr/mybin/config/
docker cp ssl-certificate-password.txt automator:/usr/mybin/config/
docker compose restart
$ keeper shell

My Vault> login [email protected]
.
.
My Vault>
automator create --name="My Automator" --node="Azure Cloud"
                    Automator ID: 1477468749950
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval
automator edit --url https://<application URL> --skill=team --skill=team_for_user --skill=device "My Automator"
automator setup "My Automator"
automator init "My Automator"
automator enable "My Automator"
$ curl https://automator.lurey.com/health
OK
docker compose logs -f
docker compose pull
docker compose up -d

Entra ID (Azure AD)

How to configure Keeper SSO Connect Cloud with Microsoft Entra ID (formerly Azure AD) for seamless and secure SAML 2.0 authentication.

Please complete the steps in the Admin Console Configuration section first.

Overview

Keeper is compatible with all Microsoft Azure AD / Entra ID environments for SAML 2.0 authentication and automated provisioning.

  • Keeper applications (including Web Vault, Browser Extension, Desktop App and iOS/Android apps) are 100% compatible with conditional access policies.

  • Keeper supports both commercial (portal.azure.com) and Azure Government Cloud (portal.azure.us) environments.

Azure Setup

Watch the following video to learn more about setting up Azure with SSO Connect Cloud.

Please follow the below steps.

(1) Add the Keeper Enterprise Application

Go to your Azure Admin account at https://portal.azure.com and click on Azure Active Directory > Enterprise Applications. Note: If you already have a Keeper application set up for SCIM Provisioning, you can edit the existing application.

For US Public Sector entities, login to https://portal.azure.us and follow the same steps as outlined in this document.

Enterprise Applications

(2) Click on "New Application" then search for Keeper and select "Keeper Password Manager & Digital Vault".

(3) Click "Create" to create the application.

(4) Click on the "Set up single sign on" then click "SAML"

The SSO provisioning method MUST be configured on the target node prior to exporting the SAML metadata. See below:

  1. Open the Keeper Admin Console and navigate to the "Admin" screen.

  2. Select the target node and click on the "Provisioning" tab.

  3. Choose "SSO Connect Cloud" and click Next.

  4. Input the required configuration information and click Next.

  5. The Metadata Export button will then appear for download.

(6) Upload the Metadata file into the Azure interface by selecting the "Upload metadata file" button, choosing the file just downloaded from the Keeper Admin Console, and pressing the Add button.

(7) Azure will open up the SAML configuration screen.

The red error on the missing "Sign on URL" field is expected.

To fix the error, copy the URL from the "IDP Initiated Login Endpoint" from the Admin Console SSO Cloud instance "view" screen, and paste it into the "Sign on URL" field.

Copy-paste the "IdP Initiated Login Endpoint" to "Sign on URL"

Single Logout Service Endpoint ("SLO")

This is the URL endpoint at Keeper to which your identity provider will send logout requests. Single Logout is optional and this is something you configure at your identity provider.

Logout Url

For control over Keeper-initiated Single Logout behavior with the identity provider, see this page.

By default, Keeper will force a logout session with Entra/Azure after logging out. If you would like to remove this behavior, edit the Azure metadata file before uploading to Keeper and remove the SingleLogoutService line. For security reasons, we recommend keeping this in place.

SingleLogoutService

(8) Click on Save then close the window with the SAML configuration.

(9) After saving, you'll be asked to test the configuration. Don't do this. Wait a couple of seconds, then reload the Azure portal page in the web browser. Now, a certificate section should appear in the "SAML Signing Certificate" area.

Click on "Download" under the Federation Metadata XML section:

Download Metadata file

(10) Upload the Metadata file into the Keeper Admin Console

In the Admin Console, select Azure as the Identity Provider type and import the Federation Metadata file saved in the previous step into the SAML Metadata section.

Upload SAML Metadata into Keeper

(11) Edit User Attributes & Claims

Under the User Attributes section, Azure will automatically create claims for User ID, First, Last and Email.

We recommend deleting the 4 claims in the "Additional Claims" section since they are not needed.

Delete Additional Claims

In your environment, if your user.userprincipalname (UPN) is not the same as the user's actual email address, you can edit the Email claim and change it to user.mail as the value for the Email attribute.

ForceAuthn Setting

In the Keeper Admin Console, the option to enforce a new login session with the identity provider is available. When ForceAuthn="true" is set in the SAML request, the Service Provider (Keeper) is telling the IdP that even though the user is already authenticated, they need to force a new authenticated session. This may be a desired behavior depending on your security policies and end-user environment.

Optional ForceAuthn Setting

Certificate Renewal Reminder

Entra ID / Azure AD SAML signing certificates will expire after one year.

Ensure that you set yourself an annual calendar reminder to update the SAML certificate prior to expiration, or your Keeper users will not be able to login until it is updated.

For instructions on renewing the certificate, see the Certificate Renewal page.

User Provisioning

Users can be provisioned to the Keeper application through the Azure portal using manual or automated provisioning.

Manual

If only specific users or groups will be assigned to Keeper Password Manager the following setting will need to be changed. In your Azure console, navigate to Azure Active Directory > Enterprise Applications > Keeper Password Manager & Digital Vault and select Properties.

Properties

Change the User assignment required to Yes and then save. This will ensure only the user and groups assigned to the application will be able to use it.

User Assignment Settings

On the Users and groups section select the users and/or groups that are to be provisioned to the Keeper application.

Assign Users and Groups

Automated provisioning with SCIM

For Step-By-Step instructions, please refer to this URL: https://docs.keeper.io/enterprise-guide/user-and-team-provisioning/azure-ad-provisioning-scim

Move existing users/initial admin to SSO authentication

Users created in the root node (top level) will need to be migrated to the sub node that the SSO integration was configured on. If users remain in the root node, they will be prompted for the master password when accessing the vault and/or admin console.

An admin cannot move themselves to the SSO enabled node. It requires another admin to perform this action.

Vault Login with Email

For any reserved domain that has just-in-time provisioning enabled, the user can simply type in their email address on the Vault login screen and they will be routed to the correct SSO provider. From here, the user can create their vault or login to an existing vault.

Vault Login with Email

Vault Login with Enterprise Domain

If the domain is not reserved, the user can log in to the Keeper vault initially by selecting the "Enterprise SSO" pull-down and entering the Enterprise Domain configured on the SSO integration. The user may be prompted to confirm by entering the master password if they were recently moved from a non-SSO node to the SSO node.

Initially select 'Enterprise SSO Login'

Once the user has authenticated with SSO, they only need to use their email address moving forward to initiate SSO authentication.

If typing in the email address and clicking Next does not route the user to the desired SSO, ensure that just-in-time provisioning is enabled in the Keeper SSO configuration and ensure that your email domain is reserved by Keeper. More information regarding routing and domain reservation can be found here.

IdP-Initiated Login

Keeper supports IdP-initiated login with Azure. Users can simply visit their Apps Dashboard at:

https://myapplications.microsoft.com/. This will load their assigned Keeper application, and the user can click the icon.

Azure IdP-initiated Login from the Microsoft Apps Dashboard

Azure App Services

Deployment with Azure App Services

Overview

This guide provides step-by-step instructions to instantiate Keeper Automator as a Web App within Azure App Services. For environments such as GCC High and DoD, this service is available for hosting the Automator.

(1) Create an Automator Config key

Open a command line interface and generate a 256-bit AES key in URL-encoded format using one of the methods below, depending on your operating system:

Generate a Key

openssl rand -base64 32
[Byte[]]$key = New-Object Byte[] 32; [System.Security.Cryptography.RNGCryptoServiceProvider]::Create().GetBytes($key); [System.Convert]::ToBase64String($key)

Save the resulting value produced by this command for Step (6).

Example of generated key value in Mac/Linux
Example of generated key value in PowerShell

(2) Create an App Services Web App

From the Azure portal, create a new Web App by selecting App Services in the search bar and then selecting Create + Web App

  • Select or create a new Resource Group

  • Set the Instance Name

  • Set Publish to "Container"

  • Set Operating System to "Linux"

  • Select the region where you would like the service hosted

  • Select your Linux Plan or create a new plan. Pricing plan at a minimum should be Premium V3 P0V3, but will also be dependent on the end user environment

  • Proceed to the Container section

(3) Setup Container Details

In the Container section, make the following selections:

  • Image Source: "Other container registries"

  • Access Type: "Public"

  • Registry server URL: "https://index.docker.io" (prefilled by default)

  • Image and tag: keeper/automator:latest

  • Proceed to the Monitor + secure section

(4) Setup WebApp Monitoring

  • Select "Enable Application Insights": Yes

  • Select or create a new Application Insights workspace

  • Proceed to the Review + create section

(5) Create WebApp

Click "Create"

After a few minutes, the web app will be created and automatically start up.

Clicking on "Go to Resource" will take you to the container environment.

Make note of the Default domain value. This will be needed to setup and initialize the Automator service

(6) Configure the WebApp

Go to the Configuration section and select "New application setting".

(Depending on your portal version, these settings may instead appear under the Environment variables section of the UI.)

Add the following application settings:

  • Create the below environment variables with their respective values:

    • AUTOMATOR_CONFIG_KEY -> "value from Step 1 above of the setup guide"

    • AUTOMATOR_PORT -> 8089

    • SSL_MODE -> none

    • WEBSITES_PORT -> 8089

  • Click Apply

(7) Set up Diagnostics

Select Diagnostic settings and then select "+ Add diagnostic setting"

  • Give the diagnostic setting a name.

  • Select "App Service Console logs"

  • Select "App Service Application logs"

  • Select "Send to Log Analytics workspace"

    • Select or setup a new Log Analytics workspace

(8) Set up Logs

Select Logs from the main menu. Click the "X" to close the Queries window.

Switch from Simple mode to KQL mode to add new queries:

KQL query to see the Docker deployment and startup logs:

AppServicePlatformLogs
| project TimeGen=substring(TimeGenerated, 0, 19), Message
| sort by TimeGen desc

KQL query to see the application error logs:

AppServiceConsoleLogs
| project TimeGen=substring(TimeGenerated, 0, 19), ResultDescription
| sort by TimeGen desc

(9) Set up App Service logs

Select App Service Logs from the main menu under the Monitoring section. Then select File System under Application logging and set a retention per user's preference

Click Save

(10) View Log stream

Select Log Stream from the main menu under the Overview section to verify the Automator service is connected and logging correctly

(11) Configure Health Check

Select Health check from the main menu under the Monitoring section. Then Enable the health check function and set the Path value to "/health". Click Save to save the configuration, and Save again to confirm changes.

(12) Configure Access Restrictions

In the Networking section you can setup simple access rules or configure Azure Front Door.

Select Networking from the main menu and click on "Enabled with no access restrictions"

Under Access Restrictions, select "Enabled from select virtual networks and IP addresses" and "Deny" unmatched rule action. Click +Add to add inbound access rules.

Under Add Rule, add the inbound firewall rules. You should restrict traffic to the Keeper published IP addresses marked as "Network Firewall Setup" for your respective region per the page below

Ingress Requirements

Click Add Rule

Click Save to save the configurations

(13) Login to Keeper Commander

Keeper Commander is required to perform the final step of Automator configuration. This can be run from anywhere; it does not need to be installed on the server.

On your workstation or server, install the Keeper Commander CLI. The installation instructions, including binary installers, are here: https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup. After Commander is installed, launch Keeper Commander, or from an existing terminal type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator or an Admin with the ability to manage the SSO node.

$ keeper shell

My Vault> login [email protected]

  _  __  
 | |/ /___ ___ _ __  ___ _ _ 
 | ' </ -_) -_) '_ \/ -_) '_|
 |_|\_\___\___| .__/\___|_|
 v16.x.xxx    |_|

 password manager & digital vault

Logging in to Keeper Commander
Enter password for [email protected]
Password: ********************
Successfully authenticated with Master Password
Syncing...
Decrypted [58] record(s)

My Vault>

(14) Create the Automator

Create the Automator using a series of commands, starting with automator create

My Vault> automator create --name "My Automator" --node "Azure Cloud"

The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.

Automator Create

The output of the command will display the Automator settings, including metadata from the identity provider.

                    Automator ID: 1477468749950
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval

Note that the "URL" is not populated yet. This is the Default Domain value from Step 5.

Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).

automator edit --url https://<Default domain> --skill=team --skill=team_for_user --skill=device "My Automator"

Next, exchange keys: the enterprise private key, encrypted with the Automator public key, is provided to the Automator:

automator setup "My Automator"

Initialize the Automator with the new configuration

automator init "My Automator"

Enable the service

automator enable "My Automator"

At this point, the configuration is complete.

For external health checks, you can use the below URL:

https://<server>/health

Example curl command:

[rainer@iradar keeper]$ curl -vk https://keeperapprovalautomator.azurewebsites.net/health
* About to connect() to keeperapprovalautomator.azurewebsites.net port 443 (#0)
*   Trying 40.112.243.106...
* Connected to keeperapprovalautomator.azurewebsites.net (40.112.243.106) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
*       subject: CN=*.azurewebsites.net,O=Microsoft Corporation,L=Redmond,ST=WA,C=US
*       start date: Oct 31 23:08:36 2023 GMT
*       expire date: Jun 27 23:59:59 2024 GMT
*       common name: *.azurewebsites.net
*       issuer: CN=Microsoft Azure TLS Issuing CA 01,O=Microsoft Corporation,C=US
> GET /health HTTP/1.1
> User-Agent: curl/7.29.0
> Host: keeperapprovalautomator.azurewebsites.net
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 2
< Content-Type: text/plain
< Date: Sat, 23 Mar 2024 05:08:13 GMT
< Server: Jetty(11.0.20)
< Strict-Transport-Security: max-age=31622400; includeSubDomains
<
* Connection #0 to host keeperapprovalautomator.azurewebsites.net left intact

Testing the User Experience

Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.

The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.

Kubernetes Service

Installation of Keeper Automator as a Kubernetes service

This guide provides step-by-step instructions to publish Keeper Automator as a Kubernetes service.

Make sure you already have your SSL Certificate! If not, please follow the steps in the Create SSL Certificate page.

(1) Set up Kubernetes

Installation and deployment of Kubernetes is not the intent of this guide; however, a very basic single-node environment using two EC2 instances (Master and Worker) without any platform dependencies is documented here for demonstration purposes. Skip to Step 2 if you already have your Kubernetes environment running.

Set up Docker

Kubernetes requires a container runtime, and we will use Docker.

sudo yum update -y
sudo yum install -y docker
sudo systemctl enable docker
sudo systemctl start docker

Install kubeadm, kubelet, and kubectl

These packages need to be installed on both master and worker nodes. The example here is using AWS Amazon Linux 2 instance types.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet

Initialize the Master Node

On the machine you want to use as the master node, run:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The --pod-network-cidr argument is required for certain network providers. Substitute the IP range you want your pods to have.

After kubeadm init completes, it will give you a command that you can use to join worker nodes to the master. Make a note of the response and initialization code for the next step.

Set up the local kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install a Pod Network

You need to install a Pod network before the cluster will be functional. For simplicity, you can use flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Join Your Worker Nodes

On each machine you want to add as a worker node, run the command below with the initialization code.

sudo kubeadm join [your code from kubeadm init command]

Note that port 6443 must be open between the worker and master node in your security group.

After the worker has been joined, the Kubernetes cluster should be up and running. You can check the status of your nodes by running kubectl get nodes on the master.

(2) Create a Kubernetes Secret

The SSL certificate for the Keeper Automator is provided to the Kubernetes service as a secret. To store the SSL certificate and SSL certificate password (created from the SSL Certificate guide), run the below command:

kubectl create secret generic certificate-secret --from-file=ssl-certificate.pfx --from-file=ssl-certificate-password.txt

(3) Create a Manifest

Below is a manifest file that can be saved as automator-deployment.yaml. This file contains configurations for both a Deployment resource and a Service resource.

  • The deployment resource runs the Keeper Automator docker container

  • The SSL certificate and certificate password files are referenced as a mounted secret

  • The secrets are copied over to the pod in an initialization container

  • The Automator service is listening on port 30000 and then routes to port 443 on the container.

  • In this step, we are only deploying a single container (replicas: 1) so that we can configure the container, and we will increase the number of replicas in the last step.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: automator-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: automator
  template:
    metadata:
      labels:
        app: automator
    spec:
      initContainers:
        - name: init-container
          image: busybox
          command: ['sh', '-c', 'cp /secrets/* /usr/mybin/config']
          volumeMounts:
            - name: secret-volume
              mountPath: /secrets
            - name: config-volume
              mountPath: /usr/mybin/config
      containers:
        - name: automator
          image: keeper/automator:latest
          ports:
            - containerPort: 443
          volumeMounts:
            - name: config-volume
              mountPath: /usr/mybin/config
      volumes:
        - name: config-volume
          emptyDir: {}
        - name: secret-volume
          secret:
            secretName: certificate-secret
            items:
              - key: ssl-certificate.pfx
                path: ssl-certificate.pfx
              - key: ssl-certificate-password.txt
                path: ssl-certificate-password.txt
---
apiVersion: v1
kind: Service
metadata:
  name: automator-service
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
    nodePort: 30000
  selector:
    app: automator

(4) Deploy the Service

kubectl apply -f automator-deployment.yaml

The service should start up within 30 seconds.

(5) Check Service Status

Confirm the service is running through a web browser (note that port 30000 must be opened from whatever device you are testing). In this case, the URL is: https://automator2.lurey.com:30000/api/rest/status

For automated health checks, you can also use the below URL:

https://<server>/health

Example:

$ curl https://automator2.lurey.com:30000/health
OK

Now that the service with a single pod is running, you need to integrate the Automator into your environment using Keeper Commander.

(6) Configure the Pod with Commander

Keeper Commander is required to configure the pod to perform automator functions. This can be run from anywhere.

On your workstation, install the Keeper Commander CLI. The installation instructions, including binary installers, are here: https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup. After Commander is installed, you can type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator or an Admin with the ability to manage the SSO node.

$ keeper shell

My Vault> login [email protected]

  _  __  
 | |/ /___ ___ _ __  ___ _ _ 
 | ' </ -_) -_) '_ \/ -_) '_|
 |_|\_\___\___| .__/\___|_|
 vxx.x.xx     |_|

Logging in to Keeper Commander

SSO user detected. Attempting to authenticate with a master password.
(Note: SSO users can create a Master Password in Web Vault > Settings)

Enter password for [email protected]
Password: 
Successfully authenticated with Master Password
Syncing...
Decrypted [58] record(s)

My Vault>

Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create

My Vault> automator create --name="My Automator" --node="Azure Cloud"

The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.

Automator Create

The output of the command will display the Automator settings, including metadata from the identity provider.

                    Automator ID: 1477468749950
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval

Note that the "URL" is not populated yet. So let's do that next.

Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).

automator edit --url https://automator2.lurey.com:30000 --skill=team --skill=team_for_user --skill=device "My Automator"

Next, exchange keys: the enterprise private key, encrypted with the Automator public key, is provided to the Automator:

automator setup "My Automator"

Next, send other IdP metadata to the Automator:

automator init "My Automator"

Enable the Automator service

automator enable "My Automator"

At this point, the configuration is complete.

(7) Securing the Service

We recommend limiting network access to the service from Keeper's servers and your own workstation. Please see the Ingress Requirements section for a list of Keeper IP addresses to allow.

(8) Test the Automator Service

To ensure that the Automator service is working properly with a single pod, follow the below steps:

  • Open a web browser in an incognito window

  • Login to the Keeper web vault using an SSO user account

  • Ensure that no device approvals are required after successful SSO login

(9) Update the Pod configuration

At this point, we are running a single pod configuration. Now that the first pod is set up with the Automator service and configured with the Keeper cloud, we can increase the number of pods.

Update the "replicas" statement in the YAML file with the number of pods you would like to run. For example:

replicas: 3

Then apply the change:

kubectl apply -f automator-deployment.yaml

With more than one pod running, the containers will be load balanced in a round-robin type of setup. The Automator pods will automatically and securely load their configuration settings from the Keeper cloud upon the first request for approval.

Troubleshooting the Automator Service

The log files running the Automator service can be monitored for errors. To get a list of pods:

kubectl get pods

Connect via terminal to the Automator container using the below command:

kubectl exec -it automator-deployment-<POD> --container automator -- /bin/sh

The log files are located in the logs/ folder. Instead of connecting to the terminal, you can also just tail the logfile of the container from this command:

kubectl exec -it automator-deployment-<POD> --container automator -- tail -f /usr/mybin/logs/keeper-automator.log

Google Cloud with GCP Cloud Run

Running the Keeper Automator service on the Google Cloud platform with Cloud Run

Overview

This guide provides step-by-step instructions to run the Keeper Automator service on Google Cloud, specifically using the GCP Cloud Run service. The Automator is also protected by the Google Cloud Armor service in order to restrict access to Keeper's infrastructure IPs.

(1) Create a Project

From the Google Cloud console (https://console.cloud.google.com), create a new project.

Then click "Select Project" on this new project.

(2) Start the Cloud Shell

For this documentation, we'll use the Google Cloud Shell from the web interface. Click the Activate Cloud Shell icon in the console, or install the gcloud CLI on your local machine.

  • Note the Project ID, which in this case is keeper-automator-439714. This Project ID will be used in subsequent commands.

(3) Link a Billing Account

If you haven't done so, you must link a valid Billing account to the project. This is performed in the Google Cloud user interface from the Billing menu.

(3) Create an Automator Config key

From the Cloud Shell, generate a 256-bit AES key in URL-encoded format:
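For example, using OpenSSL (the same method used in the Azure App Services guide above):

openssl rand -base64 32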

Example key: 6C45ibUhoYqkTD4XNFqoZoZmslvklwyjQO4ZqLdUECs=

Save the resulting Key in Keeper. This will be used as an environment variable when deploying the container. This key ensures that ephemeral containers will be configured at startup.

(4) Enable the Artifact Registry
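A likely command for this step, assuming the gcloud CLI in Cloud Shell (the exact command in the original guide may differ):

gcloud services enable artifactregistry.googleapis.com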

(5) Select a Region

You need to select a region for the service to run. The available region codes can be found by using the following command:
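One plausible way to list the available Cloud Run regions with the gcloud CLI:

gcloud run regions list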

For this example, we will use us-east1

(6) Create Artifact Repository for the Automator service

Run the below 2 commands, replacing "us-east1" with your preferred value from Step 5
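A sketch of the two commands, assuming a repository named "keeper-automator" (the repository name and description are placeholders):

gcloud artifacts repositories create keeper-automator --repository-format=docker --location=us-east1
gcloud auth configure-docker us-east1-docker.pkg.dev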

(7) Upload the Automator container to the Artifact Registry

Create a file called cloudbuild.yaml that contains the following content, making sure to replace the string "us-east1" with your preferred location from Step 5. Leave all other content the same.

  • Replace us-east1 with your preferred location from Step 5

Upload this file through the Cloud Shell user interface, or create the text file in the cloud shell.

From the Cloud Shell, execute the following:

Then execute the build:
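Assuming cloudbuild.yaml is in the current directory, the build can be submitted with a command along these lines (the exact invocation in the original guide may differ):

gcloud builds submit --config cloudbuild.yaml .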

This will sync the latest Automator container to your Google Artifact Registry.

(8) Deploy the Automator service

The following command will deploy the Keeper Automator service to Google Cloud Run from your Artifact Registry. This service is limited to internal access and load balancers only.

Note the following (an example command sketch follows this list):

  • [PROJECT_ID] needs to be replaced by your Project ID as found in Step 2

  • XXX is replaced with the configuration key that you created in Step 3 above.

  • AUTOMATOR_PORT tells the container to listen on port 8080

  • SSL_MODE allows the SSL connection to terminate with the load balancer

  • DISABLE_SNI_CHECK allows the request to complete behind the load balancer

  • The minimum number of instances is 1, which is acceptable in most environments.

  • If min/max is not set, the service will drop to zero instances and start up on the first request

(9) Create Managed Certificate

The Keeper system is going to communicate with your Automator service through a publicly routable DNS name. In this example, I'm using gcpautomator.lurey.com. In order to set this up, you need to first create a managed SSL certificate. The command for this is below.

gcloud compute ssl-certificates create automator-compute-certificate \
    --domains gcpautomator.lurey.com \
    --global

  • Replace gcpautomator.lurey.com with your desired name

(10) Create a Serverless Network Endpoint Group

The next command links the Cloud Run service to a Google Cloud Load Balancer.

  • Replace us-east1 with the region of your Cloud Run service from Step 5.

gcloud compute network-endpoint-groups create keeper-automator-neg \
    --region us-east1 \
    --network-endpoint-type=serverless \
    --cloud-run-service keeper-automator

(11) Create a backend service that will use the NEG

This creates a backend service that links to the Cloud Run service:

gcloud compute backend-services create keeper-automator-backend \
    --global \
    --protocol HTTP

(12) Attach the NEG to the backend service

This attaches the NEG to the backend service.

  • Replace us-east1 with the desired location specified in Step 5.

gcloud compute backend-services add-backend keeper-automator-backend \
    --global \
    --network-endpoint-group keeper-automator-neg \
    --network-endpoint-group-region us-east1

(13) Create a URL map that directs incoming traffic to the backend service

gcloud compute url-maps create keeper-automator-url-map \
    --default-service keeper-automator-backend

(14) Create the HTTPS target proxy and map the Automator certificate

gcloud compute target-https-proxies create keeper-automator-target-proxy \
    --url-map keeper-automator-url-map \
    --ssl-certificates automator-compute-certificate

(15) Reserve a static IP address and assign DNS entry

Reserve a global static IP address:

gcloud compute addresses create keeper-automator-ip --global

Get the IP address and note it for later:

gcloud compute addresses list

The IP address must be mapped to a valid DNS. In your DNS provider, set up an A-record pointing to the IP.

Example A-Record Configuration

Type: A
Name: gcpautomator.lurey.com
Value: xx.xx.xx.xx
TTL: 60

  • This step is important. Ensure that the desired domain name is pointing to the IP address provided. This step must be performed in your DNS provider directly.
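Before continuing, you can confirm that the A-record has propagated with a quick DNS lookup from the Cloud Shell or your workstation; a minimal sketch, assuming the example hostname used above:

# Should return the static IP reserved in this step
dig +short gcpautomator.lurey.com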

(16) Create a Global Forwarding Rule

Create a global forwarding rule to direct incoming requests to the target proxy:

gcloud compute forwarding-rules create keeper-automator-forwarding-rule \
    --global \
    --target-https-proxy keeper-automator-target-proxy \
    --ports 443 \
    --address keeper-automator-ip

(17) Lock down access to specific IPs

The Keeper Automator service should be restricted to only the necessary IPs, as discussed on the Ingress Requirements page.

Let's create a Cloud Armor Security Policy to restrict access to certain IP addresses:

gcloud compute security-policies create allow-specific-ips-policy --description "Allow specific IPs"

In this step, we will allow the IPs for Keeper's US Data Center, as listed on the Ingress Requirements page. Additional rules can be created as you see fit.

  • We recommend adding your external IP to this list, so that you can test the Automator service

gcloud compute security-policies rules create 1000 \
    --security-policy allow-specific-ips-policy \
    --src-ip-ranges 54.208.20.102,34.203.159.189 \
    --action allow

We will also add a default "deny" rule to restrict other traffic:

gcloud compute security-policies rules create 2000 \
    --security-policy allow-specific-ips-policy \
    --action deny-404 \
    --src-ip-ranges '*'

Finally, attach the Cloud Armor security policy to the backend service:

gcloud compute backend-services update keeper-automator-backend \
    --global \
    --security-policy allow-specific-ips-policy

At this point, the Automator service should be running and the service should be exposed only to the Keeper infrastructure.
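If you added your own external IP to the allow rule above, you can verify reachability using the same /health endpoint referenced in the other deployment guides; a quick sketch, assuming the example hostname from Step 9:

$ curl https://gcpautomator.lurey.com/health
OK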

The next step is to finish the configuration with the Keeper Commander utility.

(18) Login to Keeper Commander

Keeper Commander is required to perform the final step of Automator configuration. It can be run from anywhere; it does not need to be installed on the server.

On your workstation or server, install the Keeper Commander CLI. The installation instructions, including binary installers, are here: https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup

After Commander is installed, launch Keeper Commander, or from an existing terminal you can type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator, or an Admin with the ability to manage the SSO node.

$ keeper shell

My Vault> login [email protected]

  _  __  
 | |/ /___ ___ _ __  ___ _ _ 
 | ' </ -_) -_) '_ \/ -_) '_|
 |_|\_\___\___| .__/\___|_|
 v16.x.xxx    |_|

 password manager & digital vault

Logging in to Keeper Commander
Enter password for [email protected]
Password: ********************
Successfully authenticated with Master Password
Syncing...
Decrypted [58] record(s)

My Vault>

(19) Create the Automator

Create the Automator using a series of commands, starting with automator create

My Vault> automator create --name "My Automator" --node "Azure Cloud"

The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.

Automator Create

The output of the command will display the Automator settings, including metadata from the identity provider.

                    Automator ID: XXXX
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval

Note that the "URL" is not populated yet. This will be populated with the automator URL.

Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).

automator edit --url https://gcpautomator.lurey.com --skill=team --skill=team_for_user --skill=device "My Automator"

  • NOTE: Replace gcpautomator.lurey.com with the domain name you created in Step 15

Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:

automator setup "My Automator"

Initialize the Automator with the new configuration

automator init "My Automator"

Enable the service

automator enable "My Automator"

Testing the User Experience

Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.

The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.


Updating the Container

To update the container in Google when there is a new version available from Keeper, run the following commands:

  • Repeat Step 7 to build and push the latest container to your Artifact Registry

  • Repeat Step 8 to deploy the updated image to Cloud Run

Need help?

If you need assistance, please email [email protected] or open a support ticket.


Google Workspace User and Group Provisioning with Cloud Function

Step by Step guide to automatically provisioning Users and Groups from Google Workspace using a Cloud Function

Overview

This document describes how to automatically provision users from Google Workspace to Keeper using a Google Cloud Function, which includes the provisioning of Users, Groups and user assignments. User and Team Provisioning provides several features for lifecycle management:

  • You can specify which Google Groups and/or users are provisioned to Keeper

  • Matching of Groups can be performed by Group name or Group email

  • Google Groups assigned to Keeper are created as Keeper Teams

  • Keeper Teams can be assigned to Shared Folders in the vault

  • New users added to the group are automatically invited to Keeper

  • Group and user assignments are applied every sync

  • When a user is de-provisioned, their Keeper account will be automatically locked

  • The process is fully cloud-based. No on-prem infrastructure or services are required.

  • Processing can be performed on your desired scheduler or on-demand

The setup steps in this section allow you to provision users and groups from your Google Workspace account. Setting up this method requires access to several resources:

  • Google Cloud

  • Google Workspace

  • Keeper Admin Console

  • Keeper Vault

  • Keeper Secrets Manager

Keeper Secrets Manager is used in this implementation to perform the most secure method of integration between Google and Keeper, ensuring least privilege. If you don't use Keeper Secrets Manager, please contact the Keeper customer success team.

STEP 1: Create a Google Cloud Project

Login to Google Cloud and create a project or choose an existing project. The project name can be "Keeper SCIM Push" or whatever you prefer.

Create a New Google Cloud Project

STEP 2: Enable the Admin SDK API

  • In the APIs & Services section, click +ENABLE APIS AND SERVICES

  • In the Search for APIs & Services field, enter Admin SDK API

  • Click ENABLE

Enable APIs and Services
Enable Admin SDK API

STEP 3: Create a Service Account

The service account created here will be used to access the Google Workspace user and group information.

  • In the IAM and Admin menu select Service accounts

  • Click +CREATE SERVICE ACCOUNT and use the suggested service account name: keeper-scim

Create Service Account

For the newly created service account, click the Actions menu (three dots) and select Manage Keys

Create Keys and credentials.json

Click ADD KEY -> Create new key. Choose the JSON key type, then click CREATE

A JSON file with service account credentials will be downloaded to your computer

Create new key
Select JSON format

Rename this file to credentials.json and add it as a file attachment to the Keeper configuration record described in Step 10 below.

Save as credentials.json

STEP 4: Copy the Client ID

Navigate to your Service Account and select DETAILS tab > Advanced Settings

In the Domain-wide delegation section copy the Client ID. You will need to grant this Client ID access to the Google Workspace Directory in the next step.

Copy the Client ID

STEP 5: Authorize Service Account on Google Workspace

In the Google Workspace Panel (https://admin.google.com):

  • Navigate to Security -> API controls

  • Under the Domain wide delegation click MANAGE DOMAIN WIDE DELEGATION

  • Click Add new in API Clients

  • Paste the Client ID (copied from previous step)

Paste the following text into OAuth scopes (comma-delimited)

https://www.googleapis.com/auth/admin.directory.group.readonly,https://www.googleapis.com/auth/admin.directory.group.member.readonly,https://www.googleapis.com/auth/admin.directory.user.readonly
Add a new client ID

Click AUTHORIZE - These scopes grant the Service Account read-only access to the Google Workspace Directory Users, Groups and Membership.

STEP 6: Retrieve the Primary Email

  • In Google Workspace (https://admin.google.com), navigate to Account -> Account settings

  • Copy the Primary admin email into the clipboard (upper right area) for use in the next step.

Get the primary admin email

STEP 7: Create a Shared Folder in your Keeper Vault

In your Keeper Vault, create a new Shared Folder. This folder can be named anything, for example "Google SCIM Push". The user and record permissions for this folder can be set any way you prefer.

Create New Shared Folder

STEP 8: Create a Secrets Manager Application

Assuming that you have Keeper Secrets Manager enabled and activated for this vault, click on Secrets Manager from the left side and then select Create Application.

Create Application

Name the Application "Google SCIM Push" (or whatever you prefer) and click Generate Access Token. This token will be discarded and not used in this scenario.

Generate Access Token

Next, select the "Google SCIM Push" application from the list, and click on Edit then Add Device.

Edit Application
Add Device

Select the base64 configuration and download it to your computer.

Save the file to your computer as config.base64.

Save config.base64

STEP 9: Create a SCIM Provisioning Method

From the Keeper Admin Console, go to the Provisioning tab for the Google Workspace node and click "Add Method".

Select SCIM and click Next.

SCIM Configuration in Keeper

Click on "Create Provisioning Token"

Create Provisioning Token

The URL and Token displayed on the screen will be used in the next step. Save these two parameters in a file somewhere temporarily, and then click Save.

Save SCIM URL and Token

STEP 10: Create a Keeper Record in the Shared Folder

Inside the Shared Folder created in step 7, create a Keeper record that contains the following fields:

  • Login: Google Workspace admin email

  • Password: SCIM Token generated from Step 9 above

  • Website Address: SCIM URL generated from Step 9 above

  • credentials.json: File attachment from Step 3 with the Google Service Account credentials

  • SCIM Group: Multi-line custom text field containing a list of all groups to be provisioned. The names can either be Group Email or Group Name.

All Groups and users within the specified Groups will be provisioned to Keeper.

Keeper Vault Record

You can specify either the Group Email address or the Group Name in the list of groups. Keeper will match either value and provision all associated users and groups.

The Group Name and Group Email are CASE SENSITIVE
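For illustration, the multi-line SCIM Group field might contain entries like the following, mixing Group Names and Group Emails exactly as they appear in Google Workspace (these group names are hypothetical):

Engineering
Sales Team
[email protected]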

At this point, the configuration on Keeper is complete. The remaining steps are performed back on the Google Cloud console by setting up a Cloud Function.

STEP 11: Create the Google Cloud Function

From the Google Cloud console, open Cloud Functions and then click CREATE FUNCTION.

Create Function

Under Basics:

  • Select environment of "2nd gen"

  • Select Function name of keeper-scim-push

  • Select your preferred region and note this for later

  • Trigger is HTTPS

  • Authentication set to Require authentication

Under Advanced -> Runtime:

  • Memory allocated: 256MiB

  • CPU: 0.333

  • Timeout: 120 seconds

  • Concurrency: 1

  • Autoscaling min: 0

  • Autoscaling max: 1

  • Under Runtime service account, select the Default compute service account

If the Default compute service account does not exist yet, select a different account temporarily then go back and edit the service account after saving.

Below is an example full configuration:

Runtime Settings

In the Runtime environment variables:

Create two variables:

  • Set Name 1 to KSM_CONFIG_BASE64 and Value 1 to the contents of the KSM configuration file generated in Step 8

  • Set Name 2 to KSM_RECORD_UID and Value 2 to the record UID created in the vault in Step 10.

You can find the Record UID by clicking on the (info) icon from the Keeper vault record. Click on the Record UID to copy the value.

Runtime environment variables

Click on CONNECTIONS and select "Allow internal traffic only"

Allow internal traffic only

Scroll down and click NEXT to upload the Cloud Function source.

Click NEXT

STEP 12: Upload the Cloud Function Source

  • Visit the Keeper Google SCIM Push release page: https://github.com/Keeper-Security/ksm-google-scim/releases

  • Download the source.zip file and save it to your computer

Cloud Function Code Source
  • Select Runtime of Go 1.21

  • Select Source code of Zip Upload

  • Type Entry point of GcpScimSyncHttp

  • Zip upload destination bucket: Create a bucket with any name you choose, using the default bucket permissions (not public).

  • Zip file: upload the source.zip file saved from the above step

Click DEPLOY to create the Cloud Function. After a few minutes, the function will be created and published.

The function is private and requires authentication, so the next step is creating a Cloud Scheduler.

STEP 13: Copy the Cloud Function URL

From the Cloud Function screen, copy the URL as seen below:

Copy Cloud Function URL

STEP 14: Create the Cloud Scheduler

From the Google Cloud console, search for Cloud Scheduler and open it.

Cloud Scheduler
  • Click SCHEDULE A JOB

Define the schedule:

  • Set any description, such as "Keeper SCIM Push for Google Workspace"

  • Set the frequency, for example 0 * * * * for running once per hour

  • Set the Timezone according to your location

  • Set the Target type to HTTP

  • Set the URL to the Cloud Function URL copied from Step 13 above

  • Set the HTTP method to GET

  • Set the Auth Header to Add OIDC token

  • Set the Service account to Default compute service account

  • Click CONTINUE then CREATE

STEP 15: Test the Scheduler

On the Scheduler Jobs screen, the job will now be listed. To force execution, click on the overflow menu on the right side and select Force run.

This will execute the Cloud Function immediately.
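The same force-run can also be triggered from the Cloud Shell; a minimal sketch, assuming the scheduler job name (shown here as keeper-scim-push-job) and region match what you created in Step 14:

# Run the scheduled job immediately instead of waiting for the next cron window
gcloud scheduler jobs run keeper-scim-push-job --location=us-east1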

If successful, the status of last execution will show success:

Scheduler Success

To ensure that Keeper received the sync information, login to the Keeper Admin Console. You will see a list of any pending / invited users, teams and team assignments.

STEP 16: Delete Local Files

Once the process is working successfully, delete all local files and secrets created during this process.

IMPORTANT: Delete all local or temporary files on your computer, such as:

  • config.base64 file

  • credentials.json file

  • SCIM tokens

  • Any other screenshots or local files generated in this process

Destructive Operations

By default, "unmanaged" teams and team assignments in the Keeper Admin Console will not be deleted during the sync process. However, if your preferred method of syncing is to delete any unmanaged teams or team assignments, you can simply create a custom field in the Keeper record with a particular value.

"Destructive" Field Value
Description

-1

Nothing is deleted on the Keeper side during sync

0 (Default)

Only SCIM-controlled Groups and Membership can be deleted during sync. (Default Setting)

1

Any manually created or SCIM-controlled Groups and Memberships can be deleted during sync.

Debug Logging

The Keeper record can be modified to create verbose logs in the Google Cloud Function logs.

Verbose Field Value
Description

0 (Default)

No logging

1

Verbose logging enabled

Example of Verbose and Destructive Settings in Keeper Record

Important Syncing Notes:

  • Keeper performs exact string matches on the Group Name or Group Email address when performing the Cloud Function provisioning. The group name and email are case sensitive.

  • Users in an invited state are not added to assigned teams until the user creates their vault and the Keeper administrator logs in to the Admin Console. Team membership can also be applied when another member of the team logs in to the vault. Clicking "Sync" from the Admin Console will also perform the additions.

  • Some operations such as the creation of Teams can only occur upon logging into the Keeper Admin Console, or when running the Keeper Automator service. This is because encryption keys need to be generated.

  • For large deployments, we recommend setting up the Keeper Automator service to automate and streamline the process of device approvals, user approvals and team approvals.

  • When you would like to add new Groups, simply add them to the list inside the Keeper vault record as described in Step 10. Keeper will search on either Group email or Group name when identifying the target.

  • Nested groups in Google Workspace will be flattened when syncing to Keeper. Users from the nested groups are added to the parent group on the Keeper side.

Updating the Cloud Function Source

When new versions of the Cloud Function are created, updating the code is very simple:

  • Download a new source.zip file from the Releases page of the ksm-google-scim Github repo

  • Navigate to the Cloud Functions area of Google Cloud

  • Click on the cloud function details and click EDIT

  • Click on Code

  • Under Source code select "ZIP Upload"

  • Select the source.zip file saved to your computer

  • Click DEPLOY

  • Wait a few minutes for the new function to deploy

  • Navigate to Cloud Scheduler

  • Click on Actions > Force Run

Azure Container App

Simple Deployment with Azure Container App

Overview

This guide provides step-by-step instructions to publish Keeper Automator to the Azure Container App service. This provides a simple and straightforward way to host the Automator service in the cloud.

For environments such as Azure Government, GCC High and DoD, use the Azure App Services method, since the Azure Container App service may not be available in those regions.

(1) Create an Automator Config key

Open a command line interface and generate a 256-bit AES key in URL-encoded format using one of the methods below, depending on your operating system:

Generate a Key

openssl rand -base64 32
[Byte[]]$key = New-Object Byte[] 32; [System.Security.Cryptography.RNGCryptoServiceProvider]::Create().GetBytes($key); [System.Convert]::ToBase64String($key)

Save the resulting value produced by this command for Step (3).

Example of generated key value in Mac/Linux
Example of generated key value in PowerShell

(2) Create a Container Registry

If you do not already have a container registry, you must create one and configure it as you see fit. An example is shown below.

(3) Create a Container App

From Azure, create a new Container App.

  • Select or create a new Resource Group

  • Set the Container App Name to "keeperautomator" or whatever you prefer

  • Select "Container Image" as the Deployment Source

  • Select the region where you would like the service hosted

  • Create a new Apps Environment or select an existing environment

  • Click Next : Container >

(4) Setup Container Details

In the "Container" step, make the following selections:

  • Uncheck the "Use quickstart image"

  • Select "Docker Hub or other registries"

  • Select "Public"

  • Select Registry login server as docker.io

  • Set the Image and tag as keeper/automator:latest

  • Skip to "Container resource allocation"

  • For CPU and Memory, 0.5 CPU cores and 1Gi memory is sufficient, but this can be updated based on your volume of new device logins.

  • Create an environment variable called AUTOMATOR_CONFIG_KEY with the value from Step 1 of this setup guide.

  • Create an environment variable called AUTOMATOR_PORT with the value of 8089

  • Create an environment variable called SSL_MODE with the value of none

  • Click "Next : Ingress >"

(5) Ingress Setup

On the Ingress setup screen, select the following:

  • Enable Ingress

  • Ingress traffic: Accepting traffic from anywhere (we'll modify this in a later step)

  • Ingress type: HTTP

  • Target port: 8089

(6) Create Container App

Click "Review + Create" and then click "Create"

After a few minutes, the container app will be created and automatically start up.

Clicking on "Go to Resource" will take you to the container environment.
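If you prefer to script this deployment rather than use the portal, the equivalent Azure CLI call is roughly the sketch below. The resource group and environment names are placeholders, and the ingress IP restrictions, health probes and volume mount from the later steps still need to be configured separately.

az containerapp create \
    --name keeperautomator \
    --resource-group my-resource-group \
    --environment my-container-env \
    --image docker.io/keeper/automator:latest \
    --target-port 8089 \
    --ingress external \
    --min-replicas 1 \
    --max-replicas 1 \
    --env-vars AUTOMATOR_CONFIG_KEY=XXX AUTOMATOR_PORT=8089 SSL_MODE=none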

(7) Customize the Ingress Setup

To restrict communications to the Keeper Automator service, click on the "Ingress" link on the left side of the screen under the "Network" section

  • Click on "Ingress"

  • Select "Allow traffic from IPs configured below, deny all other traffic"

  • Click "Add" to add two of Keeper's IPs and any of your IPs required for testing the service. The list of Keeper IPs is located on the Ingress Requirements page.

  • Click Save

If you want to be able to run a health check, then consider adding your own IP address. Find your IP address at https://checkip.amazonaws.com

(8) Set Scaling

  • Select Application > Scale and set min and max replicas to 1

  • Click "Save as a new revision"

(9) Create Volume

  • Select Application > Volumes and click "+ Add"

  • Add an Ephemeral Storage volume, name it as you wish, and click "Add"

  • Then click "Save as a new revision"

(10) Set up Health Probes and Volume Mount

Navigate to the "Application > Revisions and replicas" section.

Click on "Create new revision"

Click on Application > Revisions and replicas and observe that a new revision is being activated

  • Next, click on the "Container" tab

  • Click on the container image name link, in this case "keeperautomator" at the bottom

Navigate to Health Probes and enter the following under each section:

Under "Liveness probes":

  • Enable liveness probes

  • Transport: HTTP

  • Path: /health

  • Port: 8089

  • Initial delay seconds: 5

  • Period seconds: 30

Liveness probes

Under "Startup probes":

  • Enable startup probes

  • Transport: HTTP

  • Path: /health

  • Port: 8089

  • Initial delay seconds: 5

  • Period seconds: 30

Under "Volume Mounts" tab:

  • Select "+ Add"

  • Select the volume you created in the previous step and set the Mount Path to /usr/mybin/config

Finish the configuration

  • Click on Save

  • Then click on Create to build the new configuration

  • After a few minutes, the new containers should start up

(11) Retrieve the Application URL

Wait until the revision is done activating.

From the Overview section of the Container App, on the right side is the "Application URL" that was assigned. Copy this and use this Application URL in the next step.

For example, https://craigautomator1.xyx-1234.azurecontainerapps.io

Retrieve the Application URL

(12) Login to Keeper Commander

Keeper Commander is required to perform the final step of Automator configuration. It can be run from anywhere; it does not need to be installed on the server.

On your workstation or server, install the Keeper Commander CLI. The installation instructions, including binary installers, are here: https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup

After Commander is installed, launch Keeper Commander, or from an existing terminal you can type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator, or an Admin with the ability to manage the SSO node.

$ keeper shell

My Vault> login [email protected]

  _  __  
 | |/ /___ ___ _ __  ___ _ _ 
 | ' </ -_) -_) '_ \/ -_) '_|
 |_|\_\___\___| .__/\___|_|
 v16.x.xxx    |_|

 password manager & digital vault

Logging in to Keeper Commander
Enter password for [email protected]
Password: ********************
Successfully authenticated with Master Password
Syncing...
Decrypted [58] record(s)

My Vault>

(13) Create the Automator

Create the Automator using a series of commands, starting with automator create with your node name.

My Vault> automator create --name "My Automator" --node "Azure Cloud"

The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.

Automator Create

The output of the command will display the Automator settings, including metadata from the identity provider.

                    Automator ID: 1477468749950
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval

Note that the "URL" is not populated yet. Set it to the Application URL retrieved in Step 11.

Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).

automator edit --url https://<application URL> --skill=team --skill=team_for_user --skill=device "My Automator"

Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:

automator setup "My Automator"

Initialize the Automator with the new configuration

automator init "My Automator"

Enable the service

automator enable "My Automator"

At this point, the configuration is complete.

For external health checks, you can use the below URL:

https://<server>/health

Example curl command:

$ curl https://craigautomator1.xyz.azurecontainerapps.io/health
OK

Testing the User Experience

Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.

The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.

Advanced

Azure Container Apps have many advanced capabilities that are beyond the scope of this documentation. A few of the capabilities are provided below.

Scaling with Multiple Containers

If you would like to have multiple containers running the Keeper Automator service:

  • Click on "Scale and replicas"

  • Click "Edit and deploy"

  • Click on the "Scale" tab

  • Select the min and max number of containers. The minimum should be at least 1.

  • Click Create

  • After a minute, the new version will deploy

  • Run automator setup xxx multiple times (one for each container)

  • Run automator init xxx multiple times (one for each container)

Logging

The Keeper Automator logs can be viewed and monitored using the "Console" or "Log stream" section.

For example, to tail the log file of a running Automator service:

  • Click on Console

  • Select "/bin/sh"

  • Click Connect

  • At the prompt, type: tail -f logs/keeper-automator.log

Advanced Settings

Environment variables can be passed into the Container to turn on/off features of the runtime environment. The variables with their description can be found at the Advanced Settings page.

Azure App Gateway (Advanced)

Deploy Keeper Automator to Azure Container Instances using the Azure App Gateway Service

Overview

This guide provides step-by-step instructions to publish Keeper Automator in a secure VNet with Azure Application Gateway. This method is more advanced than the Azure Container App configuration. If you don't require the use of Azure App Gateway or encrypted SAML requests, it would be best to use the Azure Container App method.

For this method, make sure you already have your SSL Certificate. If not, please follow the steps in the Custom SSL Certificate page.

Instructions

(1) Open the Azure Cloud Shell

Login to portal.azure.com and click on the Cloud Shell icon.

Launch the Azure Cloud Shell

(2) Create a resource group in your preferred region

If the resource group in Azure does not exist yet, create it. The example here uses the eastus region, but make sure to use your region.

az group create --name keeper_automator_rg --location eastus

(3) Create a Storage Account

If the storage account does not exist yet, create it and ensure to use the correct region (eastus) and the name of the resource group above. Note: The name you choose (to replace keeperautomatorstorage) needs to be globally unique to Azure.

az storage account create -n keeperautomatorstorage -g keeper_automator_rg -l eastus --sku Standard_LRS

(4) Create a File Share

If the file share does not exist yet, create it.

az storage share create --account-name keeperautomatorstorage --name keeperautomatorfileshare

List the current shares:

az storage share list --account-name keeperautomatorstorage

(5) Create a Virtual Network (VNet) and one Subnet for the container

az network vnet create --address-prefixes 10.100.0.0/16 --name keeper_automator_vnet --resource-group keeper_automator_rg --subnet-name keeper_automator_subnet --subnet-prefixes 10.100.2.0/24

(6) Update the Virtual Network with the Service Endpoints

az network vnet subnet update -g keeper_automator_rg -n keeper_automator_subnet --vnet-name keeper_automator_vnet --service-endpoints Microsoft.Storage --delegations Microsoft.ContainerInstance/containerGroups

(7) Retrieve Storage Key

To find a storage key for the account, use the command below. Replace the name of the storage account with your specific name.

az storage account keys list --resource-group keeper_automator_rg --account-name keeperautomatorstorage

Copy the key1 value which will look like this:

"value": "zuVgm9xnQNnxCQzY=5n4Ec6kxhDn2xMZSfpwZnTeqsyGaHd5Abn584mpAP3xamg3rGns4=Fd7FeFsaR6AgtnqW=="

(8) Retrieve Subnet ID

Run the below command to find the Subnet ID:

az network vnet subnet list --resource-group keeper_automator_rg --vnet-name keeper_automator_vnet | grep "id"

Copy the full subnet ID path for keeper_automator_subnet. It will look like this:

"id": "/subscriptions/abc123-abc123-abc-123/resourceGroups/keeper_automator_rg/providers/Microsoft.Network/virtualNetworks/keeper_automator_vnet/subnets/keeper_automator_subnet"

(9) Create YAML Container File

In your local filesystem, create a folder such as automator.

In that folder, create a file called automator.yml with your favorite editor that has the below contents.

automator.yml
apiVersion: '2021-07-01'
location: eastus
name: keeperautomatorcontainer
properties:
  containers:
  - name: keeperautomatorcontainer
    properties:
      image: keeper/automator:latest
      ports:
      - port: 443
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
        - name: automatorvolume
          mountPath: /usr/mybin/config
  osType: Linux
  restartPolicy: Always
  sku: Standard
  volumes:
  - name: automatorvolume
    azureFile:
      shareName: keeperautomatorfileshare
      readOnly: false
      storageAccountName: keeperautomatorstorage
      storageAccountKey: XXX-YOUR-KEY-XXX
  subnetids:
    - id: /subscriptions/XXX-YOUR-SUBNET/path/to/subnets/keeper_automator_subnet
      name: keeper_automator_subnet
tags: null
type: Microsoft.ContainerInstance/containerGroups

Note there are several places where the string value needs to be changed based on your configuration in the prior steps.

  • subnet ID needs to match the full path of the ID retrieved from step 8

  • storageAccountName needs to match the value from Step 3

  • storageAccountKey needs to match the value from Step 7

(10) Upload the SSL Certificate and SSL Password Files

From the Azure interface, navigate to the Resource Group > Storage Account > File Share > into the Automator file share created. From here, upload the automator.yml file, SSL certificate file and SSL certificate password file.

Make sure your files are named automator.yml, ssl-certificate.pfx, and ssl-certificate-password.txt

Upload Files

(11) Copy the 3 files to your local CLI workspace

az storage copy -s https://keeperautomatorstorage.file.core.windows.net/keeperautomatorfileshare/automator.yml -d .

az storage copy -s https://keeperautomatorstorage.file.core.windows.net/keeperautomatorfileshare/ssl-certificate.pfx -d .

az storage copy -s https://keeperautomatorstorage.file.core.windows.net/keeperautomatorfileshare/ssl-certificate-password.txt -d .

(12) Create the Container Instance

Create the container using the configuration in automator.yml.

az container create -g keeper_automator_rg -f automator.yml

Obtain the Internal IP of the container in the response.

az container show --name keeperautomatorcontainer --resource-group keeper_automator_rg --query ipAddress.ip --output tsv

For later, set a variable of this IP, for example:

$aciPrivateIp=10.100.2.4
aciPrivateIp=10.100.2.4

(13) Create Application Gateway Subnet

az network vnet subnet create --name keeperautomator_appgw_subnet --resource-group keeper_automator_rg --vnet-name keeper_automator_vnet --address-prefix 10.100.1.0/24

(14) Create an Application Gateway

az network application-gateway create --name KeeperAutomatorAppGateway --location eastus --resource-group keeper_automator_rg --sku Standard_v2 --public-ip-address AGPublicIPAddress --cert-file ssl-certificate.pfx --cert-password XXXXXX --vnet-name keeper_automator_vnet --subnet keeperautomator_appgw_subnet --frontend-port 443 --http-settings-port 443 --http-settings-protocol Https --servers 10.100.2.4 --priority 100

Ensure that the SSL certificate password is replaced in the XXXXXX section.

(15) Locate the Public IP

In the Azure portal interface, navigate to the Resource Group > App Gateway and make note of the public IP address.

Retrieve the Public IP

(16) Route DNS

Ensure that the DNS for your Automator service (e.g. automator.company.com) is pointed to the public IP address located in Step 15 (the App Gateway public IP).

The DNS name must match the SSL certificate subject name or else requests will fail.

(17) Create a Health Probe

A health probe will inform the App Gateway that the Automator service is running. From the Azure portal interface, open the Automator App Gateway and then click on "Health probes" from the left menu.

Health Probes

Now create a new Health Probe with the settings as seen in the below screenshot. Make sure to replace the Host with the FQDN set up in Step 16.

Health Probe Settings

Click on "Test" and then add the probe. The test will succeed if the container IP is properly addressed to the host name.
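If you prefer the CLI, a comparable probe can be created with a command along the lines of the sketch below (replace the host with the FQDN from Step 16). The probe still needs to be associated with the gateway's backend HTTP settings, which the portal does for you when you add the probe there.

az network application-gateway probe create \
    --gateway-name KeeperAutomatorAppGateway \
    --resource-group keeper_automator_rg \
    --name automator-health-probe \
    --protocol Https \
    --host automator.company.com \
    --path /health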

(18) Configure the Web Application Firewall

From the Azure portal interface, open the Automator App Gateway and then click on "Web application firewall" on the left side. Enable the WAF V2 and configure the screen exactly as seen below.

Web application firewall configuration

Click on the "Rules" tab then select the Rule set to "OWASP 3.2" and then click on "Enabled" and "Save". This is a critical step.

Firewall Rules Configuration

🎉Your installation in Azure is complete.

The final step is to configure Automator using Keeper Commander.

(19) Install Keeper Commander

At this point, the service is running but it is not able to communicate with Keeper yet.

On your workstation, server or any computer, install the Keeper Commander CLI. This is just used for initial setup. The installation instructions, including binary installers, are here: Installing Keeper Commander. After Commander is opened, login using the login command. In order to set up Automator, you must login as a Keeper Administrator, or an Admin with the ability to manage the SSO node.

My Vault> login [email protected]

(20) Initialize with Commander

Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create

automator create --name="My Automator" --node="Azure Cloud"

The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.

Automator Create

The output of the command will display the Automator settings, including metadata from the identity provider.

                    Automator ID: 1477468749950
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval

Note that the "URL" is not populated yet. Edit the URL with the FQDN you selected.

automator edit --url=https://automator.lurey.com --skill=team --skill=team_for_user --skill=device "My Automator"

Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:

automator setup "My Automator"

Initialize the Automator with the new configuration

automator init "My Automator"

Enable the service

automator enable "My Automator"

At this point, the configuration is complete.

For automated health checks, you can use the below URL:

https://<server>/health

Example curl command:

$ curl https://automator.lurey.com/health
OK

Note this URL will not open in a web browser.

(21) For environments using AD FS ...

When activating Keeper Automator with AD FS as the identity provider, users will not be able to login until you update the Keeper certificate using the instructions below:

  • Login to the Keeper Admin Console

  • Go to Admin > SSO Node > Provisioning and then view the SSO Cloud configuration.

  • Click on "Export SP Cert".

  • In the AD FS Management Console select the Keeper Cloud SSO Relying Party Trust properties.

  • On the "Encryption" tab, replace the old certificate with this new cert.

  • On the "Signature" tab, Add/Replace the new SP certificate with this new cert.

Setup Complete!

That's it, your Automator service should now be running.

Azure Portal

In the Azure Portal in the "Container Instances" system, you can see the container running. You can also connect to the container (using /bin/sh) and view running logs.

Containers
Logs
Connect using /bin/sh

Updating the IP on Container Restart

Based on this configuration, it is possible that restarting the container will assign a new IP address from the /24 subnet. To quickly locate the new IP and update the Application Gateway backend pool with the correct IP, the below script can be run from the Azure CLI.

# change these 3 variables according to your setup
RESOURCE_GROUP="keeper_automator_rg"
GATEWAY_NAME="KeeperAutomatorAppGateway"
CONTAINER_NAME="keeperautomatorcontainer"

BACKEND_POOL_NAME="appGatewayBackendPool"

CONTAINER_IP=$(az container show --resource-group $RESOURCE_GROUP --name $CONTAINER_NAME --query 'ipAddress.ip' --output tsv)

az network application-gateway address-pool update --resource-group $RESOURCE_GROUP --gateway-name $GATEWAY_NAME --name $BACKEND_POOL_NAME --servers $CONTAINER_IP

Testing the Automator Service

Now that Keeper Automator is deployed, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.

The easiest way to test is to open an incognito mode window or guest mode window, go to the Keeper Web Vault and login with SSO Cloud. If you are not prompted for device approval, the Automator is functioning properly.

Vault Login
SSO Login
Automated Approval
Vault Decrypted

AWS Elastic Container Service

Running Keeper Automator with the AWS ECS (Fargate) service

Overview

This example demonstrates how to launch the Keeper Automator service in Amazon ECS in the simplest way possible, with as few dependencies as needed.

Requirements:

  • A managed SSL certificate via the AWS Certificate Manager

(1) Create an Automator Config key

Generate a 256-bit AES key in URL-encoded format using one of the methods below, depending on your operating system:

Mac/Linux

openssl rand -base64 32

Windows

[Byte[]]$key = New-Object Byte[] 32; [System.Security.Cryptography.RNGCryptoServiceProvider]::Create().GetBytes($key); [System.Convert]::ToBase64String($key)

Save this value for the environmental variables set in the task definition.

(2) Create a VPC

If your VPC does not exist, a basic VPC with multiple subnets, a route table and internet gateway must be set up. In this example, there are 3 subnets in the VPC with an internet gateway as seen in the resource map below:

VPC Setup

(3) Create CloudWatch Log Group

If you would like to capture logs (recommended), go to CloudWatch > Create log group

Name the log group "automator-logs".

Create CloudWatch Log Group

(4) Create an Execution IAM Role

Go to IAM > Create role

Select AWS service

Then search for Elastic Container Service and select it.

Select "Elastic Container Service Task" and click Next

Add the "AmazonECSTaskExecutionRolePolicy" policy to the role and click Next

Assign the name "ECSTaskWritetoLogs" and then create the role.

Make note of the ARN for this Role, as it will be used in the next steps.

In this example, it is arn:aws:iam::373699066757:role/ECSTaskWritetoLogs

Make note of ARN

(5) Create a Security Group for ECS

Go to EC2 > Security Groups and click on "Create security group"

Depending on which region your Keeper tenant is hosted in, you need to create inbound rules that allow HTTPS (port 443) from the Keeper cloud. The list of IPs for each tenant location is shown below. In the example below, this is the US data center.

  • We also recommend adding your workstation's external IP address for testing and troubleshooting.

Assign a name like "MyAutomatorService" and then click "Create".

Create ECS Security Group
Keeper Tenant Region: IP1, IP2

US: 54.208.20.102/32, 34.203.159.189/32

US GovCloud: 18.252.135.74/32, 18.253.212.59/32

EU: 52.210.163.45/32, 54.246.185.95/32

AU: 3.106.40.41/32, 54.206.208.132/32

CA: 35.182.216.11/32, 15.223.136.134/32

JP: 54.150.11.204/32, 52.68.53.105/32

Remember to add your own IP which you can find from this URL:

https://checkip.amazonaws.com

(6) Add the Security Group to the list of inbound rules

After saving the security group, edit the inbound rules again. This time, make the following additions:

  • Add HTTP port 8089 with the Source set to the security group. This allows traffic from the ALB to the container inside the network and for processing health checks.

Custom TCP port 8089

(7) Create Elastic Container Service Cluster

Navigate to the Amazon Elastic Container Service.

Select "Create cluster" and assign the cluster name and VPC. In this example we are using the default "AWS Fargate (serverless)" infrastructure option.

  • The Default namespace can be called "automator"

  • The "Infrastructure" is set to AWS Fargate (serverless)

  • Click Create

Create ECS Cluster with AWS Fargate

(8) Create ECS Task Definition

In your favorite text editor, copy the below JSON task definition file and save it.

Important: Make the following changes to the JSON file:

  • Change the XXX (REPLACE THIS) XXX on line 24 to the secret key created in Step 1 above.

  • Replace lines 37-39 with the name and location of the log group from Step 3

  • Change the XXX on line 44 for the role ID as specific to your AWS role (from Step 4, e.g. 373699066757)

{
    "family": "automator",
    "containerDefinitions": [
        {
            "name": "automator",
            "image": "keeper/automator:latest",
            "cpu": 1024,
            "portMappings": [
                {
                    "containerPort": 8089,
                    "hostPort": 8089,
                    "protocol": "tcp",
                    "appProtocol": "http"
                }
            ],
            "essential": true,
            "environment": [
                {
                    "name": "SSL_MODE",
                    "value": "none"
                },
                {
                    "name": "AUTOMATOR_CONFIG_KEY",
                    "value": "XXX (REPLACE THIS) XXX"
                },
                {
                    "name": "AUTOMATOR_PORT",
                    "value": "8089"
                }
            ],
            "mountPoints": [],
            "volumesFrom": [],
            "readonlyRootFilesystem": false,
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "automator-logs",
                    "awslogs-region": "eu-west-1",
                    "awslogs-stream-prefix": "container-2"
                }
            }
        }
    ],
    "executionRoleArn": "arn:aws:iam::XXX:role/ecsTaskExecutionRole",
    "networkMode": "awsvpc",
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "1024",
    "memory": "3072",
    "runtimePlatform": {
        "cpuArchitecture": "X86_64",
        "operatingSystemFamily": "LINUX"
    }
}

Next, go to Elastic Container Service > Task definitions > Create Task from JSON

Create task definition with JSON

Remove the existing JSON and copy-paste the modified JSON file into the box, then click Create.

This task definition can be modified according to your instance CPU/Memory requirements.

(9) Upload SSL Cert to AWS Certificate Manager

In order for an application load balancer in AWS to serve requests for Automator, the SSL certificate must be managed by the AWS Certificate Manager. You can either import or create a certificate that is managed by AWS.

  • From AWS console, open the "Certificate Manager"

  • Click on Request

  • Request a public certificate and click Next

  • Enter the domain name for the automator service, e.g. automator.lurey.com

  • Select your preferred validation method and key algorithm

  • Click Request

  • Click on the certificate request from the list of certs

If you use Route53 to manage the domain, you can click on the certificate and then select "Create Records in Route53" to instantly validate the domain and create the certificate. If you use a different DNS provider, you need to create the CNAME record as indicated on the screen.

Once the CNAME record is created, the domain will validate within a few minutes.

This certificate will be referenced in Step 11 when creating the application load balancer.

(10) Create a Target Group

Go to EC2 > Target Groups and click Create target group

  • Select "IP Addresses" as the target type

  • Enter the Target group name of "automatortargetgroup" or whatever you prefer

  • Select HTTP Protocol with Port 8089

  • Select the VPC which contains the ECS cluster

  • Select HTTP1

  • Under Health checks, select the Health check protocol "HTTP"

  • Type /health as the Health check path

  • Expand the "Advanced health check settings"

  • Select Override and then enter port 8089

  • Click Next

  • Don't select any targets yet, just click Create target group
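For reference, the same target group can be created with the AWS CLI; a minimal sketch, assuming you substitute your own VPC ID:

aws elbv2 create-target-group \
    --name automatortargetgroup \
    --protocol HTTP \
    --port 8089 \
    --vpc-id vpc-XXXXXXXX \
    --target-type ip \
    --health-check-protocol HTTP \
    --health-check-path /health \
    --health-check-port 8089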

(11) Create Application Load Balancer (ALB)

Go to EC2 > Load balancers > Create load balancer

Select Application Load Balancer > Create

  • Assign a name such as "automatoralb" or whatever you prefer

  • Scheme is "Internet-facing"

  • IP address type: IPv4

  • In the Network Mapping section, select the VPC and the subnets which will host the ECS service.

  • In the Security groups, select "MyAutomatorService" as created in Step 5.

  • In the Listeners and routing section, select HTTPS port 443 and in the target group select the Target group as created in the prior step (automatortargetgroup).

  • In the Secure listener settings, select the SSL certificate "from ACM" that was uploaded to the AWS Certificate Manager in Step 9.

  • Click Create load balancer

(12) Create ECS Service

Go to Elastic Container Service > Task definitions > Select the task created in Step 8.

From this Task definition, click on Deploy > Create Service

Create service

  • Select Existing cluster of "automator"

  • Assign Service name of "automatorservice" or whatever name you prefer

  • For the number of Desired tasks, set this to 1 for now. After configuration is complete, you can increase to the number of tasks you would like to have running.

  • Under Networking, select the VPC, subnets and replace the existing security group with the ECS security group created in Step 5. In this case, it is called "MyAutomatorService".

  • For Public IP, turn this ON.

  • Under Load balancing, select Load balancer type "Application Load Balancer"

  • Use an existing load balancer and select "automatoralb" created in Step 11.

  • Use an existing listener, and select the 443:HTTPS listener

  • Use an existing target group, and select the Target Group from Step 10

  • Set the Health check path to "/health"

  • Set the Health check protocol to "HTTP"

  • Click Create

After a few minutes, the service should start up.
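You can confirm from the CLI that the service has reached a steady state; a minimal sketch, assuming the cluster and service names used above:

aws ecs describe-services \
    --cluster automator \
    --services automatorservice \
    --query 'services[0].{status:status,running:runningCount,desired:desiredCount}'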

(13) Update DNS

Assuming that the DNS name is hosted and managed by Route53:

Go to Route53 > Create or Edit record

  • Create an A-record

  • Set as "Alias"

  • Route traffic to "Alias to Application and Classic Load Balancer"

  • Select AWS Region

  • Select the "automatoralb" Application Load Balancer

  • Select "Simple Routing"

  • Select "Save"

The next step is to configure Automator using Keeper Commander while only having one task running.

(14) Install Keeper Commander

At this point, the service is running but it is not able to communicate with Keeper yet.

On your workstation, server or any computer, install the Keeper Commander CLI. This is just used for initial setup. The installation instructions, including binary installers, are here: Installing Keeper Commander. After Commander is installed, you can type keeper shell to open the session, then login using the login command. In order to set up Automator, you must login as a Keeper Administrator, or an Admin with the ability to manage the SSO node.

$ keeper shell

My Vault> login [email protected]
.
.
My Vault>

(15) Initialize with Commander

Login to Keeper Commander and activate the Automator using a series of commands, starting with automator create

My Vault> automator create --name="My Automator" --node="Azure Cloud"

The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.

Automator Create

The output of the command will display the Automator settings, including metadata from the identity provider.

                    Automator ID: 1477468749950
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval

Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).

automator edit --url https://<application URL> --skill=team --skill=team_for_user --skill=device "My Automator"

Next we exchange keys: The enterprise private key encrypted with the Automator public key is provided to Automator:

automator setup "My Automator"

Initialize the Automator with the new configuration

automator init "My Automator"

Enable the service

automator enable "My Automator"

At this point, the configuration is complete.

For automated health checks, you can use the below URL:

https://<server>/health

Example curl command:

$ curl https://automator.lurey.com/health
OK

In this example setup, the load balancer will be forwarding /health checks to the target instances over HTTP port 8089.

(16) Testing the User Experience

Now that Keeper Automator is deployed with a single task running, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.

The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.

Vault Login
SSO Login
Automated Approval
Vault Decrypted

Assuming that the approval worked, you can now increase the number of tasks running.

(17) Update Task Definition (Optional)

The Keeper Automator service can run quite well on a single container without any issue due to the low number of requests that it processes. However if you would like to have multiple containers running, please follow the below steps:

  • Click on the "automatorservice" from the ECS services screen

  • Click on "Update service"

  • Select the checkbox on "Force new deployment"

  • Ensure that the latest task revision is selected

  • Set the Desired Tasks to the number of containers you would like running

  • Click "Update"

  • After a few minutes, the new containers will deploy. Wait until all containers are active.

  • Launch Keeper Commander (or it might still be open)

For every container, the automator setup, automator init and automator enable commands must be executed.

For example, if there are 3 containers running:

automator setup "My Automator"
automator setup "My Automator"
automator setup "My Automator"

automator init "My Automator"
automator init "My Automator"
automator init "My Automator"

automator enable "My Automator"
automator enable "My Automator"
automator enable "My Automator"

Logging and Monitoring

The Automator logs can be searched and monitored in the "Logs" tab of the ECS service, or in CloudWatch.

Logging and Monitoring

AWS Elastic Container Service with KSM (Advanced)

Running Keeper Automator with the AWS ECS (Fargate) service and Keeper Secrets Manager for secret storage

Overview

This example demonstrates how to launch the Keeper Automator service in Amazon ECS with Fargate, using Keeper Secrets Manager to retrieve the secret configuration for the published Docker container.

Keeper Setup

Since this deployment requires the use of Keeper Secrets Manager, this section reviews the steps needed to set up your Keeper vault and the SSL certificate for publishing the Automator service.

(1) SSL Certificate

Create an SSL Certificate as described on this page.

When this step is completed, you will have two files: ssl-certificate.pfx and ssl-certificate-password.txt
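Optionally, you can sanity-check the .pfx bundle locally before continuing; openssl will prompt for the password stored in ssl-certificate-password.txt:

# Display the contents of the PKCS#12 bundle without writing any keys to disk
openssl pkcs12 -info -in ssl-certificate.pfx -noout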

(2) Create Shared Folder

Create a Shared Folder in your vault. This folder will not be shared with anyone except the Secrets Manager application.

(3) Add File Attachments

Create a record in the Shared Folder and make note of the Record UID. Upload the SSL certificate and SSL certificate password files to this record.

(4) Add Automator Property File

To the same record, upload a new file called keeper.properties containing the following content:

ssl_mode=certificate
ssl_certificate_file=/config/ssl-certificate.pfx
ssl_certificate_file_password=
ssl_certificate_key_password=
automator_host=localhost
automator_port=443
disable_sni_check=true
persist_state=true

The notable line here is disable_sni_check=true, which is necessary when running the Automator service behind a managed load balancer.

Your shared folder and record should look something like this:

Shared Folder containing a record with 3 files

(5) Create KSM Application

Create a Keeper Secrets Manager ("KSM") application in your vault. If you are not familiar with Secrets Manager, follow this guide. In this example the application is named "Automator", but the name does not matter.

KSM Application

(6) Attach the KSM application to the shared folder

Edit the Shared Folder and add the Automator application to this folder.

Assign Application to Shared Folder

(7) Create a KSM Device Configuration

Open the Secrets Manager application, click on the "Devices" tab and click "Add Device". Select the Base64 configuration method. Download and save this configuration for use in the ECS task definition.

Create base64 configuration
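Optionally, you can verify that the device configuration can reach the shared folder before wiring it into ECS. This is a sketch assuming the Keeper Secrets Manager CLI (ksm) is installed on your workstation and that your version supports the KSM_CONFIG environment variable; the base64 string is the one you just downloaded:

# Point the KSM CLI at the base64 device configuration and list the accessible records
export KSM_CONFIG='<base64 configuration>'
ksm secret list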

AWS Setup

(1) Create a VPC

If you do not already have a VPC, set up a basic VPC with multiple subnets, a route table and an internet gateway. In this example, there are 3 subnets in the VPC with an internet gateway, as seen in the resource map below:

VPC Setup

(2) Create CloudWatch Log Group

Go to CloudWatch > Create log group

Name the log group "automator-logs".

Create CloudWatch Log Group
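Equivalently, the log group can be created from the AWS CLI; the region shown matches the example region used later in the task definition:

# Create the CloudWatch log group used by the Automator task definition
aws logs create-log-group --log-group-name automator-logs --region eu-west-1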

(3) Create an Execution IAM Role

Go to IAM > Create role

Select AWS service

Then search for Elastic Container Service and select it.

Select "Elastic Container Service Task" and click Next

Add the "AmazonECSTaskExecution" policy to the role and click Next

Assign the name "ECSTaskWritetoLogs" and then create the role.

Make note of the ARN for this Role, as it will be used in the next steps.

In this example, it is arn:aws:iam::373699066757:role/ECSTaskWritetoLogs

Make note of ARN

(4) Create a Security Group for ECS

Go to EC2 > Security Groups and click on "Create security group"

Depending on which region your Keeper tenant is hosted in, you need to create inbound rules that allow HTTPS port 443 from Keeper's IP addresses. The list of IPs for each tenant location is on this page. In the example below, this is the US data center.

  • We also recommend adding your workstation's external IP address for testing and troubleshooting.

Assign a name like "MyAutomatorService" and then click "Create".

Create ECS Security Group

After saving the security group, edit the inbound rules again. This time, add HTTPS port 443 and select this same security group as the source in the drop-down. This allows the load balancer to monitor health status and distribute traffic.
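For reference, an equivalent inbound rule can be added with the AWS CLI; the security group ID and the source CIDR below are hypothetical placeholders for your group and one of Keeper's published IP ranges:

# Allow inbound HTTPS (443) from a Keeper IP range to the Automator security group
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 \
    --cidr 203.0.113.0/24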

(5) Create Security Group for EFS

We'll create another security group that controls NFS access to EFS from the cluster.

Go to EC2 > Security Groups and click on "Create security group"

Set a name such as "MyAutomatorEFS".

Select a Type of "NFS", set the source to Custom, and then select the security group that was created in the prior step for the ECS cluster. Click "Create security group".

Note the security group ID for the next step. In this case, it is sgr-089fea5e4738f3898

(6) Create two Elastic File System volumes

At present, the Automator service needs access to two different folders. In this example setup, one volume stores the SSL certificate and SSL passphrase files, and the second volume stores the property file for the Automator service. These are the 3 files stored in your Keeper record.

Go to AWS > Elastic File System and click Create file system

Call it "automator_config" and click Create

Again, go to Elastic File System and click Create file system. Call this one automator_settings and click Create.
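The same two file systems can also be created from the AWS CLI, if you prefer; the names match the ones used above:

# Create the two EFS volumes used by the Automator task definition
aws efs create-file-system --tags Key=Name,Value=automator_config
aws efs create-file-system --tags Key=Name,Value=automator_settings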

Note the File system IDs displayed. These IDs (e.g. fs-xxx) will be used in the ECS task definition.

After a minute, the 2 filesystems will be available. Click on each one, select the "Network" tab, and then click "Manage".

Manage the network security group on EFS

Change the security group for each subnet to the one created in the above step (e.g. "MyAutomatorEFS") and click Save. Make this network change to both filesystems that were created.

Change the security group on EFS

(7) Create Elastic Container Service Cluster

Navigate to the Amazon Elastic Container Service.

Select "Create cluster" and assign the cluster name and VPC. In this example we are using the default "AWS Fargate (serverless)" infrastructure option.

  • The Default namespace can be called "automator"

  • The "Infrastructure" is set to AWS Fargate (serverless)

  • Click Create

Create ECS Cluster with AWS Fargate

(8) Create ECS Task Definition

Copy the JSON task definition below into your favorite text editor and save it.

Make the following changes to the JSON file:

  • Change the XXXCONFIGXXX value to the base64 configuration from Keeper Secrets Manager created at the beginning of this guide

  • Change the 3 instances of "XXXXX" to the Record UID of the record in your vault shared folder (which KSM is accessing) that contains the SSL certificate, SSL certificate password and settings file.

  • Change the two File system IDs (fs-XXX) to yours from the above steps

  • Change the XXX in the executionRoleArn to your AWS account ID so that it matches the role ARN from Step 3

  • Change the eu-west-1 value in two spots to the region of your ECS service

{
    "family": "automator",
    "containerDefinitions": [
        {
            "name": "init",
            "image": "keeper/keeper-secrets-manager-writer:latest",
            "cpu": 1024,
            "memory": 1024,
            "portMappings": [],
            "essential": false,
            "environment": [
                {
                    "name": "KSM_CONFIG",
                    "value": "XXXCONFIGXXX"
                },
                {
                    "name": "SECRETS",
                    "value": "XXXXX/file/ssl-certificate.pfx > file:/usr/mybin/config/ssl-certificate.pfx\nXXXXX/file/ssl-certificate-password.txt > file:/usr/mybin/config/ssl-certificate-password.txt\nXXXXX/file/keeper.properties > file:/usr/mybin/settings/keeper.properties"
                }
            ],
            "mountPoints": [
                {
                    "sourceVolume": "automatorconfig",
                    "containerPath": "/usr/mybin/config"
                },
                {
                    "sourceVolume": "automatorsettings",
                    "containerPath": "/usr/mybin/settings"
                }
            ],
            "volumesFrom": [],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "automator-logs",
                    "awslogs-region": "eu-west-1",
                    "awslogs-stream-prefix": "container-1"
                }
            }
        },
        {
            "name": "main",
            "image": "keeper/automator:latest",
            "cpu": 1024,
            "memory": 4096,
            "portMappings": [
                {
                    "containerPort": 443,
                    "hostPort": 443,
                    "protocol": "tcp"
                }
            ],
            "essential": true,
            "environment": [],
            "mountPoints": [
                {
                    "sourceVolume": "automatorconfig",
                    "containerPath": "/usr/mybin/config"
                },
                {
                    "sourceVolume": "automatorsettings",
                    "containerPath": "/usr/mybin/settings"
                }
            ],
            "volumesFrom": [],
            "dependsOn": [
                {
                    "containerName": "init",
                    "condition": "SUCCESS"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "automator-logs",
                    "awslogs-region": "eu-west-1",
                    "awslogs-stream-prefix": "container-2"
                }
            }
        }
    ],
    "executionRoleArn": "arn:aws:iam::XXX:role/ECSTaskWritetoLogs",
    "networkMode": "awsvpc",
    "volumes": [
        {
            "name": "automatorconfig",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-XXX",
                "rootDirectory": "/",
                "transitEncryption": "ENABLED"
            }
        },
        {
            "name": "automatorsettings",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-XXX",
                "rootDirectory": "/",
                "transitEncryption": "ENABLED"
            }
        }
    ],
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "cpu": "2048",
    "memory": "5120"
}

Next, go to Elastic Container Service > Task definitions > Create Task from JSON

Create task definition with JSON

Remove the existing JSON and copy-paste the contents of the JSON file above into the box, then click Create.

Task definition

This task definition can be modified according to your instance CPU/Memory requirements.

(9) Upload SSL Cert to AWS Certificate Manager

In order for an application load balancer in AWS to serve requests for Automator, the SSL certificate must be managed by the AWS Certificate Manager.

Go to AWS Certificate Manager and click on Import

On your workstation, convert the SSL certificate (.pfx) file into a PEM-encoded certificate body, a PEM-encoded private key and a PEM-encoded certificate chain.

Since you already have the .pfx file, this can be done with the openssl commands below. Download the ssl-certificate.pfx file and the certificate password locally to your workstation and run the following:

  • Generate the PEM-encoded certificate body

openssl pkcs12 -in ssl-certificate.pfx -out automator-certificate.pem -nodes
openssl x509 -in automator-certificate.pem -out certificate_body.crt

  • Generate the PEM-encoded private key

openssl pkey -in automator-certificate.pem -out private_key.key

  • Generate the PEM-encoded certificate chain

openssl crl2pkcs7 -nocrl -certfile automator-certificate.pem | openssl pkcs7 -print_certs -out certificate_chain.crt
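Optionally, before importing, you can confirm that the certificate body and private key belong together by comparing the hashes of their public keys; the two digests should match:

# Both commands should print the same SHA-256 digest if the certificate and key match
openssl x509 -in certificate_body.crt -noout -pubkey | openssl sha256
openssl pkey -in private_key.key -pubout | openssl sha256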

Copy the contents of the 3 files into the corresponding fields of the import screen. For example, on macOS:

cat certificate_body.crt | pbcopy
   (paste into Certificate body section)

cat private_key.key | pbcopy
   (paste into Certificate private key section)

cat certificate_chain.crt | pbcopy
   (paste into Certificate chain section)

Import Certificate for Automator Service

(10) Create a Target Group

Go to EC2 > Target Groups and click Create target group

  • Select "IP Addresses" as the target type

  • Enter the Target group name of "automatortargetgroup" or whatever you prefer

  • Select HTTPS Protocol with Port 443

  • Select the VPC which contains the ECS cluster

  • Select HTTP1

  • Under Health checks, select the Health check protocol "HTTPS"

  • Type /health as the Health check path

  • Click Next

  • Don't select any targets yet, just click Create target group
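If you prefer the AWS CLI, the same target group can be created with a single command; the VPC ID below is a hypothetical placeholder:

# Create an IP-based HTTPS target group with a /health health check
aws elbv2 create-target-group \
    --name automatortargetgroup \
    --protocol HTTPS --port 443 \
    --vpc-id vpc-0123456789abcdef0 \
    --target-type ip \
    --health-check-protocol HTTPS \
    --health-check-path /health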

(11) Create Application Load Balancer (ALB)

Go to EC2 > Load balancers > Create load balancer

Select Application Load Balancer > Create

  • Assign a name such as "automatoralb" or whatever you prefer

  • Scheme is "Internet-facing"

  • IP address type: IPv4

  • In the Network Mapping section, select the VPC and the subnets which will host the ECS service.

  • In the Security groups, select "MyAutomatorService" as created in Step 4.

  • In the Listeners and routing section, select HTTPS port 443 and in the target group select the Target group as created in the prior step (automatortargetgroup).

  • In the Secure listener settings, select the SSL certificate "from ACM" that was uploaded to the AWS Certificate Manager in Step 9.

  • Click Create load balancer

(12) Create ECS Service

Go to Elastic Container Service > Task definitions > Select the task created in Step 8.

From this Task definition, click on Deploy > Create Service

Create service

  • Select Existing cluster of "automator"

  • Assign Service name of "automatorservice" or whatever name you prefer

  • Important: For the number of Desired tasks, set this to 1 for now. After configuration is complete, we will increase it to the number of tasks you would like to have running.

  • Under Networking, select the VPC, subnets and replace the existing security group with the ECS security group created in Step 4. In this case, it is called "MyAutomatorService".

  • For Public IP, turn this ON.

  • Under Load balancing, select Load balancer type "Application Load Balancer"

  • Use an existing load balancer and select "automatoralb" created in Step 11.

  • Use an existing listener, and select the 443:HTTPS listener

  • Use an existing target group, and select the Target Group from Step 10

  • Click Create

Environment Section of ECS Service
Deployment configuration
Networking section of ECS Service

After a few minutes, the service should start up.

(13) Update DNS

Assuming that the DNS name is hosted and managed by Route53:

Go to Route53 > Create or Edit record

  • Create an A-record

  • Set as "Alias"

  • Route traffic to "Alias to Application and Classic Load Balancer"

  • Select AWS Region

  • Select the "automatoralb" Application Load Balancer

  • Select "Simple Routing"

  • Select "Save"

The next step is to configure Automator using Keeper Commander while only one task is running.

(14) Install Keeper Commander

At this point, the service is running but it is not able to communicate with Keeper yet.

On your workstation, server or any computer, install the Keeper Commander CLI. This is only used for the initial setup. The installation instructions, including binary installers, are here: Installing Keeper Commander. After Commander is installed, type keeper shell to open a session, then log in using the login command. In order to set up Automator, you must log in as a Keeper Administrator, or an Admin with the ability to manage the SSO node.

$ keeper shell

My Vault> login [email protected]
.
.
My Vault>

(15) Initialize with Commander

Log in to Keeper Commander and activate the Automator using a series of commands, starting with automator create:

My Vault> automator create --name="My Automator" --node="Azure Cloud"

The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.

Automator Create

The output of the command will display the Automator settings, including metadata from the identity provider.

                    Automator ID: 1477468749950
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval

Run the "automator edit" command as displayed below, which sets the URL and also sets up the skills (team, team_for_user and device).

automator edit --url https://<application URL> --skill=team --skill=team_for_user --skill=device "My Automator"

Next, exchange keys. The enterprise private key, encrypted with the Automator public key, is provided to the Automator:

automator setup "My Automator"

Initialize the Automator with the new configuration

automator init "My Automator"

Enable the service

automator enable "My Automator"

At this point, the configuration is complete.

For automated health checks, you can use the below URL:

https://<server>/health

Example curl command:

$ curl https://automator.lurey.com/health
OK

In this example setup, the load balancer will be sending health checks to the target instances.

(16) Testing the User Experience

Now that Keeper Automator is deployed with a single task running, you can test the end-user experience. No prompts for approval will be required after the user authenticates with the SSO identity provider.

The easiest way to test is to open an incognito mode window to the Keeper Web Vault and login with SSO Cloud. You will not be prompted for device approval.

Vault Login
SSO Login
Automated Approval
Vault Decrypted

Assuming that the approval worked, you can now increase the number of tasks running.

(17) Update Task Definition

From the ECS screen, open the automator service and click "Update Service". Then set the number of tasks that you would like to have running.

Update service
Set the desired tasks
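The same change can be made with the AWS CLI; the desired count of 3 below is just an example:

# Scale the Automator service to 3 tasks and force a fresh deployment
aws ecs update-service \
    --cluster automator \
    --service automatorservice \
    --desired-count 3 \
    --force-new-deployment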

Logging and Monitoring

The Automator logs can be searched and monitored in the "Logs" tab of the ECS service, or in CloudWatch.

Logging and Monitoring
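To follow the logs from a terminal instead of the console, the log group can also be tailed with the AWS CLI (v2); the region matches the example task definition:

# Stream the Automator container logs from CloudWatch
aws logs tail automator-logs --follow --region eu-west-1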