Kubernetes Service

Installation of Keeper Automator as a Kubernetes service

This guide provides step-by-step instructions to publish Keeper Automator as a Kubernetes service.

Make sure you already have your SSL Certificate! If not, please follow the steps in the Create SSL Certificate page.

(1) Set up Kubernetes

Installation and deployment of Kubernetes is beyond the scope of this guide; however, a basic two-node environment using two EC2 instances (one master and one worker) without any platform dependencies is documented here for demonstration purposes. Skip to Step 2 if you already have a Kubernetes environment running.

Set up Docker

Kubernetes requires a container runtime, and we will use Docker.

sudo yum update -y
sudo yum install -y docker
sudo systemctl enable docker
sudo systemctl start docker
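Before continuing, it is worth confirming that the Docker daemon is actually running. A quick check (the hello-world run assumes the node has outbound internet access to pull the image):

```shell
# Confirm the Docker daemon is enabled and active
sudo systemctl is-active docker

# Optional: run a throwaway container to verify the runtime end to end
# (assumes outbound internet access to pull the hello-world image)
sudo docker run --rm hello-world
```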

Install kubeadm, kubelet, and kubectl

These packages need to be installed on both the master and worker nodes. The examples here use AWS Amazon Linux 2 instances.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet

Initialize the Master Node

On the machine you want to use as the master node, run:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

The --pod-network-cidr argument is required for certain network providers. Substitute the IP range you want your pods to have.

After kubeadm init completes, it prints a kubeadm join command that you can use to join worker nodes to the cluster. Make a note of this command and its token for the next step.
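The join command in the kubeadm init output generally has the shape shown below; the address, token, and hash here are placeholders, not real values:

```shell
# Illustrative shape of the join command printed by kubeadm init;
# <master-ip>, <token>, and <hash> are placeholders for your actual values
kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```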

Set up the local kubeconfig:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install a Pod Network

You need to install a Pod network before the cluster will be functional. For simplicity, you can use flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Join Your Worker Nodes

On each machine you want to add as a worker node, run the command below with the join token from the kubeadm init output.

sudo kubeadm join [your code from kubeadm init command]

Note that port 6443 must be open between the worker and master nodes in your security group.

After the worker has been joined, the Kubernetes cluster should be up and running. You can check the status of your nodes by running kubectl get nodes on the master.
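Rather than polling kubectl get nodes manually, you can block until every node reports Ready:

```shell
# Wait up to 5 minutes for all nodes to reach the Ready condition
kubectl wait --for=condition=Ready nodes --all --timeout=300s

# Then list the nodes and their status
kubectl get nodes -o wide
```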

(2) Create a Kubernetes Secret

The SSL certificate for Keeper Automator is provided to the Kubernetes service as a secret. To store the SSL certificate and its password (created in the SSL Certificate guide), run the following command:

kubectl create secret generic certificate-secret --from-file=ssl-certificate.pfx --from-file=ssl-certificate-password.txt
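You can confirm the secret was created and contains both keys; the describe output lists key names and sizes without revealing the values:

```shell
# List the keys stored in the secret without revealing their contents
kubectl describe secret certificate-secret
```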

(3) Create a Manifest

Below is a manifest file that can be saved as automator-deployment.yaml. This file contains configurations for both a Deployment resource and a Service resource.

  • The Deployment resource runs the Keeper Automator Docker container

  • The SSL certificate and certificate password files are referenced as a mounted secret

  • The secrets are copied to the pod by an init container

  • The Automator service listens on NodePort 30000 and routes traffic to port 443 on the container

  • In this step, we deploy only a single container (replicas: 1) so that we can configure it; we will increase the number of replicas in the last step

apiVersion: apps/v1
kind: Deployment
metadata:
  name: automator-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: automator
  template:
    metadata:
      labels:
        app: automator
    spec:
      initContainers:
        - name: init-container
          image: busybox
          command: ['sh', '-c', 'cp /secrets/* /usr/mybin/config']
          volumeMounts:
            - name: secret-volume
              mountPath: /secrets
            - name: config-volume
              mountPath: /usr/mybin/config
      containers:
        - name: automator
          image: keeper/automator:latest
          ports:
            - containerPort: 443
          volumeMounts:
            - name: config-volume
              mountPath: /usr/mybin/config
      volumes:
        - name: config-volume
          emptyDir: {}
        - name: secret-volume
          secret:
            secretName: certificate-secret
            items:
              - key: ssl-certificate.pfx
                path: ssl-certificate.pfx
              - key: ssl-certificate-password.txt
                path: ssl-certificate-password.txt
---
apiVersion: v1
kind: Service
metadata:
  name: automator-service
spec:
  type: NodePort
  ports:
  - port: 443
    targetPort: 443
    protocol: TCP
    nodePort: 30000
  selector:
    app: automator

(4) Deploy the Service

kubectl apply -f automator-deployment.yaml

The service should start up within 30 seconds.
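To confirm the Deployment rolled out and the pod is running:

```shell
# Block until the deployment finishes rolling out
kubectl rollout status deployment/automator-deployment

# Verify the pod and the NodePort service
kubectl get pods -l app=automator
kubectl get service automator-service
```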

(5) Check Service Status

Confirm the service is running through a web browser (note that port 30000 must be open to whatever device you are testing from). In this case, the URL is: https://automator2.lurey.com:30000/api/rest/status

For automated health checks, you can also use the URL below:

https://<server>:30000/health

Example:

$ curl https://automator2.lurey.com:30000/health
OK
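For a simple scripted monitor, a minimal sketch is shown below; the URL is the example hostname from this guide, so substitute your own Automator address:

```shell
#!/bin/sh
# Minimal health probe sketch; replace the URL with your Automator address.
# Exits non-zero if the endpoint does not answer "OK" within 10 seconds.
URL="https://automator2.lurey.com:30000/health"

STATUS=$(curl -s --max-time 10 "$URL")
if [ "$STATUS" = "OK" ]; then
    echo "Automator healthy"
else
    echo "Automator health check failed: ${STATUS:-no response}" >&2
    exit 1
fi
```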

Now that the service with a single pod is running, you need to integrate the Automator into your environment using Keeper Commander.

(6) Configure the Pod with Commander

Keeper Commander is required to configure the pod to perform automator functions. This can be run from anywhere.

On your workstation, install the Keeper Commander CLI. The installation instructions, including binary installers, are here: https://docs.keeper.io/secrets-manager/commander-cli/commander-installation-setup. After Commander is installed, type keeper shell to open a session, then log in using the login command. To set up Automator, you must log in as a Keeper Administrator or an admin with permission to manage the SSO node.

$ keeper shell

My Vault> login admin@company.com

  _  __  
 | |/ /___ ___ _ __  ___ _ _ 
 | ' </ -_) -_) '_ \/ -_) '_|
 |_|\_\___\___| .__/\___|_|
 vxx.x.xx     |_|

Logging in to Keeper Commander

SSO user detected. Attempting to authenticate with a master password.
(Note: SSO users can create a Master Password in Web Vault > Settings)

Enter password for admin@company.com
Password: 
Successfully authenticated with Master Password
Syncing...
Decrypted [58] record(s)

My Vault>

Once logged in, activate the Automator using a series of commands, starting with automator create:

My Vault> automator create --name="My Automator" --node="Azure Cloud"

The Node Name (in this case "Azure Cloud") comes from the node structure in the Admin Console UI.

The output of the command will display the Automator settings, including metadata from the identity provider.

                    Automator ID: 1477468749950
                            Name: My Automator
                             URL: 
                         Enabled: No
                     Initialized: No
                          Skills: Device Approval

Note that the "URL" field is not populated yet, so let's set it next.

Run the "automator edit" command as shown below, which sets the URL and enables the skills (team, team_for_user, and device).

automator edit --url https://automator2.lurey.com:30000 --skill=team --skill=team_for_user --skill=device "My Automator"

Next, exchange keys. The enterprise private key, encrypted with the Automator public key, is provided to the Automator:

automator setup "My Automator"

Next, send other IdP metadata to the Automator:

automator init "My Automator"

Enable the Automator service:

automator enable "My Automator"

At this point, the configuration is complete.

(7) Securing the Service

We recommend limiting network access to the service from Keeper's servers and your own workstation. Please see the Ingress Requirements section for a list of Keeper IP addresses to allow.
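Within the cluster itself, one option is a Kubernetes NetworkPolicy; the sketch below follows the heredoc style used earlier in this guide. The CIDR is a placeholder, so substitute Keeper's published IP ranges and your workstation address. Note that a NetworkPolicy only takes effect if your CNI plugin enforces it (flannel by itself does not), and traffic arriving via a NodePort may be source-NATed by kube-proxy, so verify the behavior in your environment.

```shell
# Sketch of a NetworkPolicy restricting ingress to the Automator pods.
# 203.0.113.0/24 is a documentation placeholder CIDR; replace it with
# the Keeper IP ranges from the Ingress Requirements section.
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: automator-ingress
spec:
  podSelector:
    matchLabels:
      app: automator
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 203.0.113.0/24   # placeholder: replace with allowed CIDR
      ports:
        - protocol: TCP
          port: 443
EOF
```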

(8) Test the Automator Service

To ensure that the Automator service is working properly with a single pod, follow the steps below:

  • Open a web browser in an incognito window

  • Login to the Keeper web vault using an SSO user account

  • Ensure that no device approvals are required after successful SSO login

(9) Update the Pod configuration

At this point, we are running a single pod configuration. Now that the first pod is set up with the Automator service and configured with the Keeper cloud, we can increase the number of pods.

Update the "replicas" statement in the YAML file with the number of pods you would like to run. For example:

replicas: 3

Then apply the change:

kubectl apply -f automator-deployment.yaml
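Alternatively, you can scale without editing the manifest:

```shell
# Scale the deployment directly; this is equivalent to editing replicas
# in the YAML and re-applying (note that a later apply of the file will
# reset the count to whatever the file says)
kubectl scale deployment/automator-deployment --replicas=3

# Confirm all pods come up
kubectl get pods -l app=automator
```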

With more than one pod running, requests to the containers are load balanced in a round-robin fashion. The Automator pods automatically and securely load their configuration settings from the Keeper cloud upon the first approval request.

Troubleshooting the Automator Service

The log files of the running Automator service can be monitored for errors. To get a list of pods:

kubectl get pods

Connect to the Automator container with a terminal using the command below:

kubectl exec -it automator-deployment-<POD> --container automator -- /bin/sh

The log files are located in the logs/ folder. Instead of opening a shell, you can also tail the container's log file directly with this command:

kubectl exec -it automator-deployment-<POD> --container automator -- tail -f /usr/mybin/logs/keeper-automator.log
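If you prefer to pull a log file off the pod for offline inspection, kubectl cp works as well (substitute your actual pod name):

```shell
# Copy the Automator log file from the pod to the current directory;
# replace automator-deployment-<POD> with your actual pod name
kubectl cp automator-deployment-<POD>:/usr/mybin/logs/keeper-automator.log \
    ./keeper-automator.log -c automator
```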
