Kubernetes Service
Installation of Keeper Automator as a Kubernetes service

This guide provides step-by-step instructions to publish Keeper Automator as a Kubernetes service.
Make sure you already have your SSL Certificate! If not, please follow the steps in the Create SSL Certificate page.
Installation and deployment of Kubernetes itself is beyond the scope of this guide; however, a very basic environment using two EC2 instances (a master and a worker) without any platform dependencies is documented here for demonstration purposes. If you already have a Kubernetes environment running, skip to Step 2.
Kubernetes requires a container runtime, and we will use Docker.
sudo yum update -y
sudo yum install -y docker
sudo systemctl enable docker
sudo systemctl start docker
These packages must be installed on both the master and worker nodes. The example here uses AWS Amazon Linux 2 instance types.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
On the machine you want to use as the master node, run:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
The --pod-network-cidr argument is required for certain network providers. Substitute the IP range you want your pods to have.
After kubeadm init completes, it prints a kubeadm join command that you can use to join worker nodes to the master. Make a note of this command and its token for the next step.
Set up the local kubeconfig:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You need to install a Pod network before the cluster will be functional. For simplicity, you can use flannel:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
On each machine you want to add as a worker node, run the command below with the initialization code:
sudo kubeadm join [your code from kubeadm init command]
Note that port 6443 must be open between the worker and master node in your security group.
After the worker has joined, the Kubernetes cluster should be up and running. You can check the status of your nodes by running
kubectl get nodes
on the master.
The SSL certificate for the Keeper Automator is provided to the Kubernetes service as a secret. To store the SSL certificate and SSL certificate password (created in the SSL Certificate guide), run the command below:
kubectl create secret generic certificate-secret --from-file=ssl-certificate.pfx --from-file=ssl-certificate-password.txt
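If you prefer to manage the secret as version-controlled configuration, the same secret can be expressed declaratively. A minimal sketch of the equivalent manifest, assuming the two files from the SSL Certificate guide (the data values must be the base64-encoded file contents; the placeholders below are illustrative, not real values):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: certificate-secret
type: Opaque
data:
  # base64-encoded file contents, e.g. from: base64 -w0 ssl-certificate.pfx
  ssl-certificate.pfx: <base64-encoded certificate>
  ssl-certificate-password.txt: <base64-encoded password>
```

Applying this manifest with kubectl apply produces the same secret as the kubectl create secret command above.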
Below is a manifest file that can be saved as automator-deployment.yaml. This file contains configurations for both a Deployment resource and a Service resource.
- The Deployment resource runs the Keeper Automator Docker container
- The SSL certificate and certificate password files are referenced as a mounted secret
- The secrets are copied into the pod by an initialization container
- The Automator service listens on port 30000 and routes to port 443 on the container
- In this step, we deploy only a single container (replicas: 1) so that we can configure it; we will increase the number of replicas in the last step
apiVersion: apps/v1
kind: Deployment
metadata:
  name: automator-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: automator
  template:
    metadata:
      labels:
        app: automator
    spec:
      initContainers:
      - name: init-container
        image: busybox
        command: ['sh', '-c', 'cp /secrets/* /usr/mybin/config']
        volumeMounts:
        - name: secret-volume
          mountPath: /secrets
        - name: config-volume
          mountPath: /usr/mybin/config
      containers:
      - name: automator
        image: keeper/automator:latest
        ports:
        - containerPort: 443
        volumeMounts:
        - name: config-volume
          mountPath: /usr/mybin/config
      volumes:
      - name: config-volume
        emptyDir: {}
      - name: secret-volume
        secret:
          secretName: certificate-secret
          items:
          - key: ssl-certificate.pfx
            path: ssl-certificate.pfx
          - key: ssl-certificate-password.txt
            path: ssl-certificate-password.txt
---
apiVersion: v1
kind: Service
metadata:
  name: automator-service
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
    nodePort: 30000
  selector:
    app: automator
Apply the manifest:
kubectl apply -f automator-deployment.yaml
The service should start up within 30 seconds.
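You can also watch the rollout from the command line; a quick sketch, assuming the deployment, label, and service names from the manifest above:

```shell
# Wait until the deployment reports all replicas available
kubectl rollout status deployment/automator-deployment

# The pod should be Running and the NodePort service exposed on 30000
kubectl get pods -l app=automator
kubectl get service automator-service
```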
Confirm the service is running through a web browser (note that port 30000 must be reachable from whatever device you are testing from).
In this case, the URL is: https://automator2.lurey.com:30000/api/rest/status
For automated health checks, you can also use the below URL:
https://<server>/health
Example:
$ curl https://automator2.lurey.com:30000/health
OK
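The /health endpoint can also drive Kubernetes health checks so that failed pods are restarted automatically. A hedged sketch of probes that could be added under the automator container in the Deployment above (the timings are assumptions; the probe targets the container's port 443 over HTTPS, and Kubernetes does not verify the certificate for HTTPS probes):

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 443
    scheme: HTTPS
  initialDelaySeconds: 30
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /health
    port: 443
    scheme: HTTPS
  initialDelaySeconds: 10
  periodSeconds: 10
```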
Now that the service with a single pod is running, you need to integrate the Automator into your environment using Keeper Commander.
Keeper Commander is required to configure the pod to perform automator functions. This can be run from anywhere.
On your workstation, install Keeper Commander CLI. The installation instructions including binary installers are here:
https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup
After Commander is installed, you can type
keeper shell
to open the session, then log in using the login command. To set up Automator, you must log in as a Keeper Administrator, or an Admin with the ability to manage the SSO node.
$ keeper shell
My Vault> login [email protected]
_ __
| |/ /___ ___ _ __ ___ _ _
| ' </ -_) -_) '_ \/ -_) '_|
|_|\_\___\___| .__/\___|_|
vxx.x.xx |_|
Logging in to Keeper Commander
SSO user detected. Attempting to authenticate with a master password.
(Note: SSO users can create a Master Password in Web Vault > Settings)
Enter password for [email protected]
Password:
Successfully authenticated with Master Password
Syncing...
Decrypted [58] record(s)
My Vault>
Log in to Keeper Commander and activate the Automator using a series of commands, starting with
automator create
My Vault> automator create --name="My Automator" --node="Azure Cloud"
The Node Name (in this case "Azure Cloud") comes from the Admin Console UI as seen below.

Automator Create
The output of the command will display the Automator settings, including metadata from the identity provider.
Automator ID: 1477468749950
Name: My Automator
URL:
Enabled: No
Initialized: No
Skills: Device Approval
Note that the "URL" is not populated yet, so let's set that next:
automator edit --url=https://automator2.lurey.com:30000 --skill=team_for_user --skill=device "My Automator"
Next, we exchange keys: the enterprise private key, encrypted with the Automator public key, is provided to the Automator:
automator setup "My Automator"
Next, send other IdP metadata to the Automator:
automator init "My Automator"
Enable the Automator service:
automator enable "My Automator"
At this point, the configuration is complete.
We recommend limiting network access to the service from Keeper's servers and your own workstation. Please see the Ingress Requirements section for a list of Keeper IP addresses to allow.
To ensure that the Automator service is working properly with a single pod, follow the below steps:
- Open a web browser in an incognito window
- Log in to the Keeper web vault using an SSO user account
- Ensure that no device approvals are required after successful SSO login
At this point, we are running a single pod configuration. Now that the first pod is set up with the Automator service and configured with the Keeper cloud, we can increase the number of pods.
Update the "replicas" statement in the YAML file with the number of pods you would like to run. For example:
replicas: 3
Then apply the change:
kubectl apply -f automator-deployment.yaml
With more than one pod running, the containers will be load balanced in a round-robin type of setup. The Automator pods will automatically and securely load their configuration settings from the Keeper cloud upon the first request for approval.
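As an alternative to editing the YAML, the deployment can be scaled directly from the command line; a sketch (the replica count is an example):

```shell
# Scale the deployment without editing the manifest
kubectl scale deployment automator-deployment --replicas=3

# Confirm all replicas come up
kubectl rollout status deployment/automator-deployment
kubectl get pods -l app=automator
```

Note that a subsequent kubectl apply of the manifest will reset the replica count to the value in the file, so keep the YAML in sync with the live setting.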
The log files of the running Automator service can be monitored for errors. To get a list of pods:
kubectl get pods
Connect via terminal to the Automator container using the below command:
kubectl exec -it automator-deployment-<POD> --container automator -- /bin/sh
The log files are located in the logs/ folder. Instead of connecting to the terminal, you can also tail the log file of the container with this command:
kubectl exec -it automator-deployment-<POD> --container automator -- tail -f /usr/mybin/logs/keeper-automator.log
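With multiple replicas, it can help to check the log on every pod at once; a sketch assuming the app=automator label and the log path from the manifest above:

```shell
# Print the last lines of the Automator log from each pod in the deployment
for POD in $(kubectl get pods -l app=automator -o name); do
  echo "=== $POD ==="
  kubectl exec "$POD" --container automator -- tail -n 20 /usr/mybin/logs/keeper-automator.log
done
```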