Kubernetes
Keeper Secrets Manager integration into Kubernetes for dynamic secrets retrieval

Prerequisites

This page documents the Secrets Manager Kubernetes integration. In order to utilize this integration, you will need:
  • Keeper Secrets Manager access (See the Quick Start Guide for more details)
    • Secrets Manager addon enabled for your Keeper account
    • Membership in a Role with the Secrets Manager enforcement policy enabled
  • A Keeper Secrets Manager Application with secrets shared to it
  • An initialized Keeper Secrets Manager Configuration
    • The Kubernetes integration accepts JSON and Base64 format configurations

About

Keeper Secrets Manager can be integrated into your K8s cluster for accessing Keeper secrets in real-time across all pods.

Setup

Create a Secrets Manager Client Device

Using Commander, create a Secrets Manager device configuration for Kubernetes. Note that this configuration is not IP locked and is pre-initialized.

```shell
My Vault> sm client add --app MyApp --unlock-ip --config-init k8s

Successfully generated Client Device
====================================

Initialized Config:

apiVersion: v1
data:
  config: ewog2N...ICIxMCIKfQ==
kind: Secret
metadata:
  name: ksm-config
  namespace: default
type: Opaque

IP Lock: Disabled
Token Expires On: 2021-10-13 12:45:45
App Access Expires on: Never
```
In the example above, copy the YAML portion of the output (from `apiVersion: v1` through `type: Opaque`) into a file called secret.yaml. Then, if you are using a machine with kubectl installed and you have access to your cluster, add the KSM SDK config to your Kubernetes secrets:

```shell
$ kubectl apply -f secret.yaml
```
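The value stored under `data.config` in the Secret is simply the Base64 encoding of the JSON configuration the SDK consumes. The sketch below illustrates that round trip with a made-up configuration value (the keys shown are illustrative, not a working config):

```python
import base64
import json

# Hypothetical KSM JSON configuration (a real one contains keys and tokens).
ksm_json = json.dumps({"hostname": "keepersecurity.com", "serverPublicKeyId": "10"})

# Encoding it produces the kind of string that appears under data.config.
encoded = base64.b64encode(ksm_json.encode()).decode()

# Kubernetes (and the SDK) reverse the encoding to recover the JSON.
decoded = json.loads(base64.b64decode(encoded))
print(decoded["hostname"])  # -> keepersecurity.com
```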

Alternative Method using One Time Access Token

Alternatively, you can create a configuration by generating a One-Time Access Token with Commander (or the Vault UI) and then using the Keeper Secrets Manager CLI to create a configuration, as demonstrated below (replace XX:XXX with your One-Time Access Token).
```shell
$ ksm init k8s XX:XXX

apiVersion: v1
data:
  config: ewog2N[...]ICIxMCIKfQ==
kind: Secret
metadata:
  name: ksm-config
  namespace: default
type: Opaque
```
If you are using a machine with kubectl installed and you have access to your cluster, the `--apply` parameter can be set to automatically add the KSM SDK config to your Kubernetes secrets. Alternatively, the output of redeeming the token can be piped to a file and then applied via kubectl. For example:
```shell
$ ksm init k8s XX:XXX > secret.yaml
$ kubectl apply -f secret.yaml
secret/ksm-config created
```

Using the KSM Config

The KSM config can be pulled into your K8s containers using secrets.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: my_container:XXXXX
      env:
        - name: KSM_CONFIG
          valueFrom:
            secretKeyRef:
              name: ksm-config
              key: config
  restartPolicy: Never
```
At runtime, the Keeper Developer SDKs running in the K8s cluster will use the environment variable KSM_CONFIG to retrieve the device configuration and communicate with the Keeper Vault.
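As a quick sanity check, an application can confirm the variable was injected before constructing the SDK client. This is a sketch only; the placeholder value below is not a working configuration, and in the cluster the variable comes from the ksm-config Secret rather than being set in code:

```python
import os

# In the cluster, Kubernetes injects KSM_CONFIG from the ksm-config Secret's
# "config" key; here we set a placeholder so the sketch runs anywhere.
os.environ.setdefault("KSM_CONFIG", "cGxhY2Vob2xkZXI=")

config = os.environ.get("KSM_CONFIG")
if not config:
    raise RuntimeError("KSM_CONFIG is not set; check the pod's secretKeyRef")
print("KSM_CONFIG is present")
```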

Example

Below is a simple example that generates a deployment and service displaying database secrets via a web application. This example uses the Keeper Python Developer SDK for the web application. The SDK gets its configuration from a Kubernetes secret and then retrieves PostgreSQL database record information from the Keeper vault.
In the Keeper Vault, a "Database" record type is created containing the connection details (type, host, port, login, and password) that the application will display.
Now let's create the web application. The web page can be created using any of the Developer SDKs; this example uses the Python SDK. A simple Flask application with one endpoint will display HTML containing the Vault record secrets. The secrets are retrieved using the Keeper Notation syntax.
```python
from flask import Flask
from keeper_secrets_manager_core import SecretsManager
import os

app = Flask(__name__)


@app.route("/")
def hello_world():
    sm = SecretsManager()
    return """
    <h1>Database</h1>
    <ul>
        <li>Type: {}</li>
        <li>Host: {}</li>
        <li>Port: {}</li>
        <li>Login: {}</li>
        <li>Password: {}</li>
    </ul>
    """.format(
        sm.get_notation(os.environ.get("DB_TYPE")),
        sm.get_notation(os.environ.get("DB_HOST")),
        sm.get_notation(os.environ.get("DB_PORT")),
        sm.get_notation(os.environ.get("DB_LOGIN")),
        sm.get_notation(os.environ.get("DB_PASS")))
```
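To exercise the endpoint's formatting logic without vault access, the notation lookups can be stubbed out. `FakeSecretsManager` below is a hypothetical stand-in (not part of the SDK) that mimics the `get_notation` call, and the notation string is made up:

```python
import os


class FakeSecretsManager:
    """Hypothetical stub mimicking SecretsManager.get_notation() for local testing."""

    def __init__(self, values):
        self.values = values

    def get_notation(self, notation):
        # Return the canned value for a notation string, or "" if unknown.
        return self.values.get(notation, "")


# Simulate the env var the deployment would inject.
os.environ["DB_TYPE"] = "keeper://XXXX/field/Type"

sm = FakeSecretsManager({"keeper://XXXX/field/Type": "PostgreSQL"})
print(sm.get_notation(os.environ["DB_TYPE"]))  # -> PostgreSQL
```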
The next part is creating a Dockerfile. The Dockerfile below is based on the Python Debian images from Docker Hub.
The Python SDK uses the cryptography module, which requires Rust to build. Other Docker Hub images may provide Rust pre-installed.
```docker
FROM python:3.10.0-slim-bullseye
RUN apt-get update \
    && apt-get install -y gcc make libffi-dev curl libssl-dev \
    && apt-get clean
RUN pip3 install --upgrade pip wheel

# cryptography requires Rust to build
RUN curl https://sh.rustup.rs -sSf > /tmp/rust.sh \
    && chmod a+x /tmp/rust.sh \
    && /tmp/rust.sh -y
ENV PATH $PATH:/root/.cargo/bin

RUN pip3 install \
    flask \
    keeper-secrets-manager-core

RUN groupadd -g 5000 demo
RUN useradd -g demo demo

# Copy our application into the image
COPY demo.py /demo.py

USER demo

ENV FLASK_APP demo

EXPOSE 5000
# Bind to all interfaces so the container port is reachable from the pod network
CMD ["flask", "run", "--host=0.0.0.0"]
```
Next we will build a Docker image named ksm_demo.
```shell
$ docker build -t ksm_demo .
```
Assuming there is access to the Kubernetes cluster, the config can be generated from the One-Time Access Token and automatically applied.

```shell
$ ksm init k8s --apply XX:XXXXXXXXXXX
secret/ksm-config created
Created secret for KSM config.
```
You can see the secret entry when you enter kubectl get secret.

```shell
$ kubectl get secret ksm-config
NAME         TYPE     DATA   AGE
ksm-config   Opaque   1      55s
```
You can then create a deployment and service for the ksm_demo Docker image. This example names the file ksm_demo.yaml.
Defining which secrets you need, and the config for the SDK, happens in the env section of the container list. There, the KSM_CONFIG environment variable is defined to get its value from the ksm-config Kubernetes secret, specifically the config key of that secret.
The other environment variables are a simple list of name/value pairs. Each value is Keeper Notation, which the web application passes to the SDK's notation retrieval method.
The second document in the ksm_demo.yaml file is the Service definition. This can be changed to whatever works with your Kubernetes cluster. This example uses an external IP address; it's safe to use one of your Kubernetes node IP addresses (10.0.1.18 in this example).
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ksm-demo-deployment
  labels:
    app: ksm-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ksm-demo
  template:
    metadata:
      labels:
        app: ksm-demo
    spec:
      nodeSelector:
        kubernetes.io/hostname: work
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
          - 10.0.1.207
          - 1.1.1.1
      containers:
        - name: ksm-demo
          image: ksm_demo:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
              protocol: TCP
          env:
            - name: KSM_CONFIG
              valueFrom:
                secretKeyRef:
                  name: ksm-config
                  key: config
            - name: DB_TYPE
              value: "keeper://IUCvqyWcx7sG-BGIK1R9-g/field/Type"
            - name: DB_HOST
              value: "keeper://IUCvqyWcx7sG-BGIK1R9-g/field/host[hostName]"
            - name: DB_PORT
              value: "keeper://IUCvqyWcx7sG-BGIK1R9-g/field/host[port]"
            - name: DB_LOGIN
              value: "keeper://IUCvqyWcx7sG-BGIK1R9-g/field/login"
            - name: DB_PASS
              value: "keeper://IUCvqyWcx7sG-BGIK1R9-g/field/password"
---
apiVersion: v1
kind: Service
metadata:
  name: ksm-demo-service
spec:
  ports:
    - name: http
      port: 5000
      targetPort: 5000
      protocol: TCP
  selector:
    app: ksm-demo
  externalIPs:
    - 10.0.1.18
```
At this point the deployment and service are ready to be applied.
```shell
$ kubectl apply -f ksm_demo.yaml
deployment/ksm-demo-deployment created
service/ksm-demo-service created
```
Wait until your deployment is ready. Monitor it via the command line or the Kubernetes Dashboard.

```shell
$ kubectl get deployment ksm-demo-deployment
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
ksm-demo-deployment   1/1     1            1           46m

$ kubectl get svc ksm-demo-service
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
ksm-demo-service   ClusterIP   10.107.91.89   10.0.1.18     5000/TCP   56m
```
Finally, point a web browser at the external IP address on port 5000, and you will see the Keeper vault record's database secrets.
Example Web Application displaying Keeper secrets

Next Steps

At this point, you can now integrate Keeper Secrets Manager into your K8s deployments using any of the Secrets Manager SDKs.