Docker Runtime

Retrieve secrets from Keeper Secrets Manager at Docker runtime

Features

  • Dynamically retrieve secrets from the Keeper Vault when Docker containers execute

For a complete list of Keeper Secrets Manager features, see the Overview.

Prerequisites

This page documents the Secrets Manager Docker Runtime integration. To utilize this integration, you will need:

About

Keeper Secrets Manager integrates with the Docker Runtime so that you can dynamically retrieve a secret from the vault when the container executes.

The ksm command is used to set environment variables when the container starts, instead of hard-coding them into a deployment script. A real-world example of this implementation is demonstrated below.
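As a minimal sketch of the mechanism (the record UID, field, and variable name here are placeholders), an environment variable set to Keeper Notation is resolved to its secret value by ksm exec before the wrapped command runs:

# Set an environment variable to Keeper Notation (placeholder record UID).
export DB_PASSWORD="keeper://<RECORD UID>/field/password"

# ksm exec replaces the notation with the secret value, then runs the command.
ksm exec -- printenv DB_PASSWORD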

Example: Provision MySQL network user account

The official MySQL Docker image allows a user to set the MySQL root password and create a network-accessible user via environment variables. The MySQL instance is then provisioned when a container is run.

The official MySQL Dockerfile is shown below (abbreviated):

FROM debian:buster-slim

...
... INSTALL MySQL 8.0 SERVER
...

ENTRYPOINT ["docker-entrypoint.sh"]

EXPOSE 3306 33060
CMD ["mysqld"]

In the standard implementation, the ENTRYPOINT provisions the container and uses environment variables that are passed in to set up MySQL (a conventional invocation is shown after the list below). The environment variables referenced are the following:

  • MYSQL_ROOT_PASSWORD

  • MYSQL_USER

  • MYSQL_PASSWORD

  • MYSQL_DATABASE
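For reference, a conventional invocation of the official image hard-codes these values on the command line (all values below are placeholders); the rest of this example replaces them with secrets retrieved from the Keeper Vault when the container starts.

docker run -d --name mysql \
  -e MYSQL_ROOT_PASSWORD='<root password>' \
  -e MYSQL_USER='<user>' \
  -e MYSQL_PASSWORD='<user password>' \
  -e MYSQL_DATABASE='<database>' \
  -p 3306:3306 \
  mysql:8.0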

The steps below show how to initialize the MySQL database with secrets that are stored in the Keeper Vault.

Step 1: Create 2 Vault Records with Secrets

Create two records in the Vault that are managed by the Secrets Manager application. One record contains the root password. The other record contains the regular user, password and database values.

Make sure to copy the Record UID of each vault record. These UIDs are used in Step 3 below when referencing the vault secrets.
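Optionally, if you already have the Secrets Manager CLI installed and a profile initialized, you can confirm that both records are visible to the application (and copy their UIDs from the output) with the following command:

ksm secret list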

Step 2: Create a Dockerfile that builds on the default MySQL Dockerfile

We'll create a Dockerfile that installs the Keeper Secrets Manager CLI (ksm) and then wraps the ENTRYPOINT with ksm exec.

In the Dockerfile below, the four environment variables are set to Keeper Notation references that ksm exec resolves at runtime. We also pass in the Secrets Manager profile that points to the vault where the secrets are stored.

FROM mysql:debian

ARG BUILD_KSM_INI_CONFIG
ARG BUILD_ROOT_UID
ARG BUILD_USER_UID

RUN apt-get update && \
  apt-get install -y python3 python3-pip python3-venv && \
  apt-get clean

# Avoid system-installed modules that might interfere.
ENV VIRTUAL_ENV /venv
RUN python3 -m pip install --upgrade pip && \
  python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

# Upgrade pip since the distro's Python might be old enough that it doesn't like to install newer modules.
RUN pip3 install --upgrade pip

# Install Keeper Secrets Manager CLI
RUN pip3 install keeper-secrets-manager-cli

# Import our configuration, decode it, and store it in a place where ksm can find it.
RUN ksm profile import $(printenv --null BUILD_KSM_INI_CONFIG)

ENV MYSQL_ROOT_PASSWORD keeper://${BUILD_ROOT_UID}/field/password
ENV MYSQL_USER          keeper://${BUILD_USER_UID}/field/login
ENV MYSQL_PASSWORD      keeper://${BUILD_USER_UID}/field/password
ENV MYSQL_DATABASE      keeper://${BUILD_USER_UID}/custom_field/database

ENTRYPOINT ["ksm", "exec", "--", "docker-entrypoint.sh"]

Step 3: Create a shell script to execute the docker build

To execute the docker build, the script below passes in the Secrets Manager device configuration along with the Record UIDs of the root user record and the network user record that contain the secrets.

#!/bin/sh

export CF=$(ksm profile export)

docker build \
  --build-arg "BUILD_KSM_INI_CONFIG=${CF}" \
  --build-arg "BUILD_ROOT_UID=DvpMcO4xV5nZF6jqLGF1fQ" \
  --build-arg "BUILD_USER_UID=VNxZvvNAZ8j2mL4WIjEzjg" \
  -t mysql_custom \
  .
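Once the build completes, the image runs like any other MySQL image; because the ENTRYPOINT is wrapped with ksm exec, the Keeper Notation in the environment variables is resolved against the vault each time a container starts. A minimal run command (the container name is arbitrary) might look like this:

docker run -d --name mysql_custom -p 3306:3306 mysql_custom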
  

Example: Using KSM CLI Docker Image

The KSM CLI Docker image includes a /cli volume containing both GLIBC (most Linux distributions) and MUSL (Alpine Linux) builds of the CLI binary. This directory can be mounted into another container using volumes_from in docker-compose or --volumes-from on the docker command line. The ksm executables live in directories named for the C library your Linux distribution uses:

  • /cli/glibc/ksm - For standard GLIBC distributions like Ubuntu, Debian, Fedora, and CentOS.

  • /cli/musl/ksm - For Alpine Linux.

For example, the following is a simple docker-compose framework showing how to access the CLI binary.

---
version: "2"
services:
  init:
    image: keeper/keeper-secrets-manager-cli:latest
  main:
    image: ubuntu:latest
    volumes_from:
      - init:ro
    command: [ '/cli/glibc/ksm', 'exec', 'printenv', 'MY_LOGIN' ]
    environment:
      KSM_CONFIG: ewog ... M09IemdQMnc9Igp9
      MY_LOGIN: keeper://bf18xLR3aVut5eYy7oIZZZ/field/login
      LC_ALL: C.UTF-8
      LANG: C.UTF-8
    depends_on:
      init:
        condition: service_completed_successfully

The init service loads the KSM CLI Docker image. The container will start, display a CLI splash screen, and then exit. Even though the container has stopped, the /cli volume is still accessible from other containers.

The main service mounts the CLI container's volume under the directory /cli using volumes_from. The command is overridden to run the GLIBC version of the KSM CLI with its exec function, which replaces any environment variables that use Keeper Notation with their secret values. In this case, exec runs the printenv application, which prints the environment variable MY_LOGIN after its Keeper Notation value has been replaced with the secret.

$ example : docker-compose up
[+] Running 2/0
 ⠿ Container example-init-1  Created                                                                                                                      0.0s
 ⠿ Container example-main-1  Recreated                                                                                                                    0.1s
Attaching to example-init-1, example-main-1
example-init-1  |
example-init-1  | ██╗  ██╗███████╗███╗   ███╗     ██████╗██╗     ██╗
example-init-1  | ██║ ██╔╝██╔════╝████╗ ████║    ██╔════╝██║     ██║
example-init-1  | █████╔╝ ███████╗██╔████╔██║    ██║     ██║     ██║
example-init-1  | ██╔═██╗ ╚════██║██║╚██╔╝██║    ██║     ██║     ██║
example-init-1  | ██║  ██╗███████║██║ ╚═╝ ██║    ╚██████╗███████╗██║
example-init-1  | ╚═╝  ╚═╝╚══════╝╚═╝     ╚═╝     ╚═════╝╚══════╝╚═╝
example-init-1  |
example-init-1  | Current Version: 1.0.13
example-init-1  |
example-init-1  | Running in shell mode. Type 'quit' to exit.
example-init-1  |
example-init-1 exited with code 0
example-main-1  | john.smith@localhost
example-main-1 exited with code 0
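The same pattern can also be reproduced with plain docker commands rather than docker-compose. The sketch below is an equivalent under a few assumptions: the KSM_CONFIG value and record UID are placeholders you must replace, and --volumes-from picks up the /cli volume from the stopped CLI container.

# Run the KSM CLI container once so its /cli volume exists (it exits on its own).
docker run --name ksm-init keeper/keeper-secrets-manager-cli:latest

# Mount the /cli volume read-only and run the GLIBC binary via exec.
docker run --rm --volumes-from ksm-init:ro \
  -e KSM_CONFIG='<base64 config>' \
  -e MY_LOGIN='keeper://<RECORD UID>/field/login' \
  ubuntu:latest /cli/glibc/ksm exec printenv MY_LOGIN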

Example: Using KSM CLI Docker With Other Vendor Docker Images

Similar to the examples above, the KSM CLI Docker image can be used to override the entrypoint and command of another vendor's Docker image without creating a custom Docker image.

This example combines the first two examples.

For this example, it is assumed that the Docker image is served by the Docker Hub repository and the image's Dockerfile is stored on GitHub.

Operating System Distribution

The first step is to determine what operating system distribution the vendor's Docker image is built upon. Often this can be determined by the tag name. For example, if "alpine" appears in the image tag name, you'll know it's the Alpine Linux distribution.

If the image tag name does not indicate the distribution, then on the Docker Hub web page for the image, click on the tag name in the "Supported tags" section. This will display the content of the Dockerfile. The FROM statement will indicate the distribution the vendor has built their image upon. If it is not apparent from the FROM statement, you may need to check the Dockerfile of the FROM image, due to inheritance.

MySQL 8.0.31 doesn't indicate the operating system distribution in the tag name. On the MySQL Docker Hub page, the 8.0.31 tag links to their GitHub repo. From the Dockerfile we can see the distribution is Oracle Linux.

The purpose of checking the distribution is to determine which version of the libc library is being used. Most distributions use GLIBC, but some, mainly Alpine Linux, use MUSL. This is needed to select the correct binary from the KSM CLI Docker image. If you select the wrong one, you will get an error like exec /cli/musl/ksm: no such file or directory or exec /cli/glibc/ksm: no such file or directory. For our example, Oracle Linux is a GLIBC distribution.
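If the Dockerfile is hard to track down, a quick alternative (assuming the image contains cat and the standard os-release file) is to ask the image itself:

# Print the distribution the vendor image is built on.
docker run --rm --entrypoint cat mysql:8.0 /etc/os-release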

Entrypoint and Command

The next step is to determine the vendor's Docker image ENTRYPOINT and CMD. The Dockerfile will list the ENTRYPOINT and/or CMD.

From the MySQL Dockerfile, the ENTRYPOINT is ["docker-entrypoint.sh"] and the CMD is ["mysqld"]. The CMD is appended to the ENTRYPOINT, so when the container starts, docker-entrypoint.sh mysqld is executed.
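If you would rather not read through the Dockerfile, the same information is stored in the image metadata and can be read with docker image inspect, for example:

# Show the ENTRYPOINT and CMD baked into the image.
docker image inspect mysql:8.0 \
  --format 'Entrypoint: {{.Config.Entrypoint}}  Cmd: {{.Config.Cmd}}'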

docker-compose.yml

The docker-compose file uses two services.

The init service loads the keeper/keeper-secrets-manager-cli Docker image and its volumes. This container will start and exit; however, the volumes will still be accessible after it exits.

The main service runs after the init service. This is done using docker-compose's depends_on directive. This service contains environment variables, set to Keeper Notation, that will be replaced by the KSM CLI exec command, and it also includes the Base64-encoded configuration needed by the KSM CLI. The MYSQL_ environment variables are used by the MySQL Docker image to provision the database.

The main service also mounts the volumes from the init service using volumes_from. The KSM CLI Docker image defines which volumes are exported and where they are mounted in the main service container. The binaries are mounted under /cli, followed by the libc variant and the ksm binary name.

version: '3.0'
services:
  init:
    image: keeper/keeper-secrets-manager-cli:latest
  main:
    image: mysql:8.0
    environment:
      KSM_CONFIG: "ewog .... RQ3pQMnc9Igp9"
      MYSQL_USER: "keeper://KOJLz4Wzbqfi9xUO-VMViA/field/login"
      MYSQL_PASSWORD: "keeper://KOJLz4Wzbqfi9xUO-VMViA/field/password"
      MYSQL_ROOT_PASSWORD: "keeper://KOJLz4Wzbqfi9xUO-VMViA/custom_field/Root Password"
      MYSQL_DATABASE: "keeper://KOJLz4Wzbqfi9xUO-VMViA/custom_field/Database"
    depends_on:
      init:
        condition: service_completed_successfully
    entrypoint: ["/cli/glibc/ksm", "exec", "docker-entrypoint.sh"]
    command: ["mysqld"]
    ports:
      - "3306:3306"
    volumes_from:
      - init:ro

Since the MySQL image is built on Oracle Linux, a GLIBC distribution, the main service uses the /cli/glibc/ksm binary.

The main service overrides the ENTRYPOINT and CMD of the MySQL image. This is done using entrypoint and command. The entrypoint uses the KSM CLI exec command to run the original ENTRYPOINT, docker-entrypoint.sh. The command is the same as the image's original CMD; however, it needs to be set explicitly in docker-compose.yml because overriding the entrypoint clears the image's default command, and without it the service would just exit.

Depending on the Docker image you are using, you may need to override either ENTRYPOINT or CMD, or both.

Results

When the services are brought up, the init service runs first and exits with code 0, which means it executed successfully. Then the main service starts, executes the KSM CLI exec command, and runs docker-entrypoint.sh with mysqld. At this point the environment variables have been replaced with secrets, MySQL has been provisioned, and mysqld is running.

$ my_mysql : docker-compose up
[+] Running 3/3
 ⠿ Network my_mysql_default   Created                                                                                                          0.0s
 ⠿ Container my_mysql-init-1  Created                                                                                                          0.1s
 ⠿ Container my_mysql-main-1  Created                                                                                                          0.0s
Attaching to my_mysql-init-1, my_mysql-main-1
my_mysql-init-1  |
my_mysql-init-1  | ██╗  ██╗███████╗███╗   ███╗     ██████╗██╗     ██╗
my_mysql-init-1  | ██║ ██╔╝██╔════╝████╗ ████║    ██╔════╝██║     ██║
my_mysql-init-1  | █████╔╝ ███████╗██╔████╔██║    ██║     ██║     ██║
my_mysql-init-1  | ██╔═██╗ ╚════██║██║╚██╔╝██║    ██║     ██║     ██║
my_mysql-init-1  | ██║  ██╗███████║██║ ╚═╝ ██║    ╚██████╗███████╗██║
my_mysql-init-1  | ╚═╝  ╚═╝╚══════╝╚═╝     ╚═╝     ╚═════╝╚══════╝╚═╝
my_mysql-init-1  |
my_mysql-init-1  | Current Version: 1.0.14
my_mysql-init-1  |
my_mysql-init-1  | Running in shell mode. Type 'quit' to exit.
my_mysql-init-1  |
my_mysql-init-1 exited with code 0
my_mysql-main-1  | 2022-10-31 21:35:26+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.31-1.el8 started.
my_mysql-main-1  | 2022-10-31 21:35:26+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
my_mysql-main-1  | 2022-10-31 21:35:26+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.31-1.el8 started.
my_mysql-main-1  | 2022-10-31 21:35:26+00:00 [Note] [Entrypoint]: Initializing database files
my_mysql-main-1  | 2022-10-31T21:35:26.830527Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead.
my_mysql-main-1  | 2022-10-31T21:35:26.830594Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.31) initializing of server in progress as process 83
...
my_mysql-main-1  | 2022-10-31T21:35:35.611063Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.31'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server - GPL.
my_mysql-main-1  | 2022-10-31T21:35:35.611015Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
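As a rough, optional sanity check that the database was provisioned with the vault values, you can let ksm exec resolve the same notation on the host and connect to the published port. This assumes a local mysql client is installed and a KSM profile (or KSM_CONFIG) is available on the host.

# The notation below reuses the record UID from the docker-compose.yml above.
export MYSQL_USER="keeper://KOJLz4Wzbqfi9xUO-VMViA/field/login"
export MYSQL_PASSWORD="keeper://KOJLz4Wzbqfi9xUO-VMViA/field/password"

# ksm exec resolves the notation, then the shell passes the values to mysql.
ksm exec -- sh -c 'mysql -h 127.0.0.1 -P 3306 -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" -e "SELECT 1"'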

Contribute to the Docker Runtime Examples

If you have some great examples to contribute to this page, please ping us on Slack or email sm@keepersecurity.com.
