Import Connections from KCM

Import KCM Connections and Connection Groups into an existing PAM Project.

Conversion Process

This process involves a Python script to automatically connect to a KCM database, export connections and connection groups, and convert the data to a valid JSON template for the PAM Project Extend command.

  1. The Python script can be found here - download both the kcm_export.py and KCM_mappings.json files.

  2. The kcm_export.py script will connect to the KCM database (locally or remotely), and pull connection and connection group data.

  3. The data will be converted into parameters compliant with the PAM Project JSON template, using a dictionary of mappings in KCM_mappings.json.

  4. Connection groups will be mapped as shared folders or nested folders - see Connection Group Modelling for more information.

  5. After completing the conversion, shared folders can be added to a Gateway Application in Keeper, before running the PAM Project Extend command. The final JSON file will contain a list of shared folders to create, as well as any connection names that have been found to include Dynamic Tokens.

Allowing Connections to the KCM Database

Your KCM database must have an open port for the Python script to successfully connect to it:

On a KCM Auto-Docker Install

With the Auto-Docker installation, your docker-compose.yml file is found in /etc/kcm-setup. You can edit this file as follows:

sudo nano /etc/kcm-setup/docker-compose.yml

Add a ports mapping to the db service so the database is reachable on port 3306:

db:
        image: keeper/guacamole-db-mysql:2
        restart: unless-stopped
        environment:
            ACCEPT_EULA: "Y"
            GUACAMOLE_DATABASE: "guacamole_db"
            GUACAMOLE_USERNAME: "guacamole_user"
            GUACAMOLE_PASSWORD: "XXXXXXXXXXXXXXXXXXXXXXXXX"
            GUACAMOLE_ADMIN_PASSWORD: "XXXXXXXXXXXXXXXXXXXXXXXXX"
            MYSQL_ROOT_PASSWORD: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
        ports:
            - "3306:3306"

Next, apply the changes:
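A minimal way to apply the edit, assuming the standard docker-compose binary manages the auto-docker containers (newer Docker installs may use "docker compose" instead):

```shell
# Re-create the db container so the new port mapping takes effect
sudo docker-compose -f /etc/kcm-setup/docker-compose.yml up -d db
```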

Optionally, if you're running the Python script as a user other than root, you will need to grant read permissions on the docker-compose.yml file so that Python can access it:
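One way to do this, assuming the default auto-docker file location:

```shell
# Allow non-root users to read the compose file
sudo chmod o+r /etc/kcm-setup/docker-compose.yml
```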

On a Docker-Compose Install

With a Docker-Compose installation, you can make the same update to your docker-compose.yml file as shown above, then restart the Docker containers:
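For example, from the directory containing your docker-compose.yml file (the path is environment-specific):

```shell
# Stop and re-create the containers so the new port mapping takes effect
sudo docker-compose down
sudo docker-compose up -d
```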

For an External PostgreSQL Database

If your KCM host is connected to a remote database, the necessary ports should already be open. With PostgreSQL, however, the database only accepts connections allowed in the pg_hba.conf file - make sure whichever location the Python script runs from is allowed.
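An illustrative pg_hba.conf entry - the IP address is a placeholder for the machine running the script, and the auth method should match your server's configuration:

```
# Allow the host running kcm_export.py to reach the KCM database
host    guacamole_db    guacamole_user    203.0.113.10/32    scram-sha-256
```

After editing pg_hba.conf, reload the PostgreSQL configuration for the change to take effect.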

Installing Modules and Running the Script

From the same directory as the kcm_export.py and KCM_mappings.json files, create and activate a Python virtual environment:
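For example, on Linux or macOS:

```shell
# Create and activate a virtual environment next to the script files
python3 -m venv venv
source venv/bin/activate
```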

Next, install Python modules necessary for the script:

  • The pyYAML module will be used to read the docker-compose.yml file.

  • The mysql-connector / psycopg2-binary modules will be used to connect to the database.

  • The rich module is only used for console styling and readability - it is optional, but if you wish to exclude it you will need to delete its import statements from the kcm_export.py file.
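The install step might look like the following - note that the official MySQL connector is published on PyPI as mysql-connector-python; if the script expects the older mysql-connector package, install that name instead:

```shell
# Install the modules used by kcm_export.py
pip install pyyaml mysql-connector-python psycopg2-binary rich
```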

Finally, run the script:
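With the virtual environment still active:

```shell
# Start the export / conversion CLI
python kcm_export.py
```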

Script CLI Process

The script will take you through a series of CLI prompts:

  • On the first prompt, enter your target database type:

PAM KCM import - database prompt
  • On the second prompt, select 1 if your docker-compose.yml file is stored locally - which will then prompt for the file location. If your database is remote and the docker-compose.yml file isn't on the local device, you can hardcode the connection details in the kcm_export.py file before running the script and select 2.
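If you hardcode the connection details, the result might look something like the sketch below - the variable name and keys are illustrative, not the actual identifiers in kcm_export.py; check the head of the script for the real settings:

```python
# Hypothetical sketch of hardcoded connection details (names are illustrative)
DB_SETTINGS = {
    "host": "kcm.example.com",   # remote database host
    "port": 3306,                # 3306 for MySQL, 5432 for PostgreSQL
    "database": "guacamole_db",
    "user": "guacamole_user",
    "password": "XXXXXXXXXXXXXXXXXXXXXXXXX",
}
```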

PAM KCM Import - Database connection prompt
  • On the third prompt, choose how connection groups should be parsed into folder paths - see Folder Structure below for the available options.

PAM KCM Import - Connection group structure prompt
  • (Optional) Finally, you will be prompted to choose whether you want to use a template JSON file along with your conversion. This file is only relevant if you want to set blanket parameters on all your resources / users.

PAM KCM Import - template prompt

This template file contains:

  • A pamDirectory resource and its user(s) - these are hardcoded in the template.

  • A pamMachine resource and its user - these will be used as the template resource and user for the conversion script. Any field added to those objects that isn't directly overridden by KCM parameters will be carried over to the converted object.

Importing the File with Commander

After this last prompt, a pam_import.json file should be created in your directory. This file can be imported using the PAM Project Extend command in Commander (see guide).


Don't forget to add the necessary shared folders generated from your connection groups.

A list of them is printed in the CLI, and added to the pam_import.json file.

If any KCM connection includes Dynamic Tokens, they will be included in the file's logged_records. See KCM Mappings and Logged Records for more details.

Connection Group Modelling

Folder Path Generation

Connection groups will be modelled as folder paths on the JSON template.

In KCM, connections include the user credentials, but in PAM these are typically stored in separate shared folders. The script splits resources and users into distinct folder paths, prefixing the root folder of each path with KCM Resources - and KCM Users - respectively.

For instance, if you have the following folder path for a connection in KCM:

  • Infra / Site 01 / Windows RDP

The script will assign the following folder paths for the resource and user records respectively:

  • KCM Resources - Infra / Site 01 / KCM Resource - Windows RDP

  • KCM Users - Infra / Site 01 / KCM User - Windows RDP

The root folder and the titles of the resource and user records are prefixed; any nested folder names are preserved.
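The prefixing rule above can be sketched as follows - this is an illustrative re-implementation, not the script's actual code:

```python
def pam_folder_paths(kcm_path: str) -> tuple[str, str]:
    """Prefix the root folder and leaf title; preserve nested folder names."""
    parts = [p.strip() for p in kcm_path.split("/")]
    root, middle, leaf = parts[0], parts[1:-1], parts[-1]
    resource = " / ".join([f"KCM Resources - {root}", *middle, f"KCM Resource - {leaf}"])
    user = " / ".join([f"KCM Users - {root}", *middle, f"KCM User - {leaf}"])
    return resource, user

res, usr = pam_folder_paths("Infra / Site 01 / Windows RDP")
# res == "KCM Resources - Infra / Site 01 / KCM Resource - Windows RDP"
# usr == "KCM Users - Infra / Site 01 / KCM User - Windows RDP"
```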

Folder Structure

As shown in the multiple-choice prompt above, the script offers three ways of parsing KCM connection groups into folder paths:

  • KSM Configuration Based

The folder paths will largely follow the structure of KCM's connection groups, with the root folder becoming a shared folder and nested folders becoming personal folders. However, if a connection group includes a KSM service configuration, it will be set as a root folder.

This structure assumes that connection groups with KSM service configurations should be segmented, and therefore modelled as root shared folders to allow for granular sharing.

  • Keep Exact Nesting

The folder paths will follow the structure of KCM's connection groups exactly.

  • Flat

Nesting will be discarded - every connection group from KCM will be modelled as a root shared folder.

KCM Mappings and Logged Records

The output JSON file generated by the script may include a list of logged_records. Connections found to include Dynamic Tokens will appear in this list. However, you can also log other records based on specific parameters.

The script runs alongside a KCM_mappings.json file, which is a dictionary of mappings between KCM parameters and PAM Project parameters. Unsupported parameters are mapped as null, while parameters that don't apply to PAM are mapped as ignore. By mapping a parameter as log, a connection found with said parameters would be added to your logged_records output.

This could be useful if you need to track specific parameters for rework after the import.
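As an illustrative sketch - the parameter names below are hypothetical, and the real dictionary ships in the KCM_mappings.json file - a set of mapping entries might look like:

```json
{
    "hostname": "host",
    "guacd-encryption": null,
    "color-scheme": "ignore",
    "sftp-hostname": "log"
}
```

Here "hostname" maps to a PAM parameter, null marks an unsupported parameter, "ignore" marks one that doesn't apply to PAM, and "log" sends matching connections to logged_records.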

SFTP Handling

In KCM, SFTP parameters include the host and credentials. In PAM, this is handled differently, where you can point to a pamMachine / pamUser record.

If a connection is found to include SFTP parameters, the script will automatically create a new pamMachine and pamUser with port 22, and store it in a folder labelled SFTP one level down from where the connection is.

Debugging

You can enable debugging by changing the DEBUG constant to True at the head of the kcm_export.py file.
