KeeperAI

AI-powered threat detection for KeeperPAM privileged sessions

Overview

KeeperAI is an agentic AI-powered threat detection system that automatically monitors and analyzes KeeperPAM privileged sessions to identify suspicious or malicious behavior. Built on a Sovereign AI framework, the system performs all analysis at the gateway level, analyzing each command as it is entered and creating an encrypted session summary for later review. This helps security teams quickly detect potential threats during active privileged sessions.

Key Features

  • Automated Session Analysis: Analyze session metadata, keystroke logs, and command execution logs to detect unusual behavior

  • Session Search: Search across privileged sessions to locate specific keywords or activity

  • Threat Classification: Automatically categorize detected threats and assign risk levels

  • Flexible Deployment: Support for both third-party cloud-based and on-premises (self-hosted) LLM inference

  • Customizable Configuration: Adjust risk parameters and detection rules to your environment

Supported Protocols

Current Support

  • SSH

Coming Soon

  • Database protocols (MySQL/PostgreSQL)

  • RDP

  • VNC

  • RBI

Setup Steps

  1. Install Keeper Gateway version 1.7.0 or newer

  2. Ensure access to an LLM inference service (either cloud-based or self-hosted)

  3. Activate an LLM provider in your Keeper Gateway deployment

  4. Set up the PAM Configuration to allow KeeperAI

  5. Activate KeeperAI on the resource

Follow the instructions below to set up your Keeper Gateway with your preferred LLM provider.


PAM Configuration Settings

  1. Go to PAM Configuration under the Keeper Secrets Manager tab.

  2. Select your resource and scroll to the KeeperAI Features section.

  3. Toggle the setting to enable KeeperAI.


Activating Threat Detection on a Resource

  1. Edit PAM Settings for your selected resource.

  2. Go to the Connections tab.

  3. Enable all options under Session Recording.

  4. Navigate to the KeeperAI tab and switch on the Enable KeeperAI toggle.

By default, KeeperAI makes a best effort to automatically classify commands into the appropriate Risk Level categories.

To enforce stronger controls, you can enable Terminate Session for a given risk level. When active, any command classified at that level will immediately end the session.

KeeperAI Exceptions & Custom Rules

Use the Exceptions pop-up to customize how specific keywords or patterns are classified. Add from the provided dropdown examples, or enter your own plain text or regex strings.
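
For example, a hypothetical rule might pin any recursive force-delete against the filesystem root to the Critical level, regardless of how the AI would otherwise classify it (the regex below is illustrative, not a built-in default):

rm\s+-rf\s+/\S*

Regex entries are useful when a single rule needs to cover variations in flags, spacing, or paths that a plain-text keyword would miss.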


Reviewing Session Summaries

KeeperAI generates AI-powered summaries for each recorded session, helping security teams quickly review and understand user activity. To view a summary, open the options menu for the monitored resource and select Session Activity.

  • Open Analysis: Click on a session row to launch the Session Analysis popup, showing detailed summaries of each command executed.

  • Playback: Click the Play button to watch the full session recording in real time.

  • Download: Use the Download button to save session recording files locally for offline review.



Risk Classifications

KeeperAI will categorize commands into risk levels for threat detection:

  • Critical: Severe security threats requiring immediate action

  • High: Significant security concerns that should be addressed promptly

  • Medium: Potential security issues requiring monitoring

  • Low: Normal or benign behavior that does not require monitoring


LLM Integration

Overview

KeeperAI leverages Large Language Models (LLMs) to power its threat detection capabilities. The Keeper Gateway communicates with any LLM of your choice to analyze session data and generate intelligent security insights. This integration is fundamental to KeeperAI's ability to detect suspicious patterns and provide detailed session summaries.

LLM Provider Setup Instructions

KeeperAI is designed to work with multiple LLM providers, giving you flexibility in your deployment. Both self-hosted and cloud-based LLMs are compatible. If you have any questions or would like to know more about an LLM provider, please email us at [email protected] and we'll be happy to assist.

Docker Installation Method

OpenAI-Compatible API

Supports any API provider that implements OpenAI's request and response formats for the /chat/completions endpoint.

Configuration

  1. Ensure your Gateway has the appropriate permissions to access the LLM service

  2. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_AI_LLM_PROVIDER: "openai-generic"
  KEEPER_GATEWAY_AI_BASE_URL: "<your-base-url>"
  KEEPER_GATEWAY_AI_API_KEY: "<your-api-key>"
  KEEPER_GATEWAY_AI_MODEL: "<your-model-id>"

The KEEPER_GATEWAY_AI_BASE_URL must include a valid protocol prefix (http:// or https://). If the protocol is missing, Keeper Gateway will throw a configuration error during startup.

For example:

https://your-llm-provider.com/v1
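
Before wiring the endpoint into the Gateway, you can sanity-check that it speaks the OpenAI format. A minimal sketch with curl, assuming your provider accepts a standard Bearer token (the URL and model ID are placeholders):

curl https://your-llm-provider.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <your-api-key>" \
  -d '{"model": "<your-model-id>", "messages": [{"role": "user", "content": "ping"}]}'

A JSON response containing a choices array indicates the endpoint is OpenAI-compatible.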

A non-exhaustive list of providers you can use, with the supported infrastructure for each:

  • Ask Sage (SaaS, Self-Hosted)
  • Azure AI Foundry (SaaS)
  • Cohere (SaaS, Self-Hosted)
  • Cerebras (SaaS)
  • Fireworks AI (SaaS)
  • Featherless AI (SaaS)
  • Groq (SaaS)
  • Grok (SaaS)
  • Hyperbolic (SaaS)
  • Hugging Face (SaaS)
  • Keywords AI (SaaS, Self-Hosted)
  • LiteLLM (SaaS, Self-Hosted)
  • LM Studio (Self-Hosted)
  • Nebius (SaaS)
  • Novita (SaaS)
  • NScale (SaaS)
  • Ollama (SaaS, Self-Hosted)
  • OpenRouter (SaaS)
  • SambaNova (SaaS)
  • Tinfoil (SaaS)
  • TogetherAI (SaaS)
  • Unify AI (SaaS, Self-Hosted)
  • Vercel AI Gateway (SaaS)
  • vLLM (Self-Hosted)

AWS Bedrock

Amazon Bedrock lets you get started quickly, privately customize foundation models with your own data, and securely integrate and deploy them into your applications using AWS tools, without having to manage any infrastructure.

Configuration

  1. Ensure that the IAM role for the Gateway has the AmazonBedrockFullAccess policy attached

  2. Request access to an Amazon Bedrock foundation model through the AWS Console

  3. Select a model from the supported list and note the corresponding model ID.

  4. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_AI_LLM_PROVIDER: "aws-bedrock"
  KEEPER_GATEWAY_AI_MODEL: "<your-model-id>"
  AWS_REGION: "<your-aws-region>"
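
For illustration only, here is a hypothetical filled-in configuration using an Anthropic Claude model ID on Bedrock in us-east-1 (substitute the model ID and region you were granted access to; aws bedrock list-foundation-models lists the IDs available to your account):

environment:
  KEEPER_GATEWAY_AI_LLM_PROVIDER: "aws-bedrock"
  # example model ID; use the ID of the model you were granted access to
  KEEPER_GATEWAY_AI_MODEL: "anthropic.claude-3-5-sonnet-20240620-v1:0"
  AWS_REGION: "us-east-1"
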
Anthropic

Configuration

Before you begin, create an API key in the Anthropic Console.

  1. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_AI_LLM_PROVIDER: "anthropic"
  KEEPER_GATEWAY_AI_API_KEY: "<your-api-key>"
  KEEPER_GATEWAY_AI_MODEL: "<your-model-id>"
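
To confirm the key works before restarting the Gateway, you can call the Anthropic Messages API directly. A quick sketch; the model ID is only an example:

curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: <your-api-key>" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-3-5-sonnet-20241022", "max_tokens": 32, "messages": [{"role": "user", "content": "ping"}]}'
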
Google AI: Gemini

Configuration

Before you begin, create an API key in the Google AI dashboard.

  1. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_AI_LLM_PROVIDER: "google-ai"
  KEEPER_GATEWAY_AI_API_KEY: "<your-api-key>"
  KEEPER_GATEWAY_AI_MODEL: "<your-model-id>"
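
Likewise, you can verify the key against the Gemini API before configuring the Gateway. A sketch; gemini-1.5-flash is an example model ID:

curl "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=<your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{"contents": [{"parts": [{"text": "ping"}]}]}'
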
Google: Vertex

You need to use an account with a Google Cloud project ID that has been authorized to use Vertex AI. When administering your Google Cloud account, be sure to enable Vertex AI, and specify your project ID when authenticating with gcloud auth:

gcloud auth application-default login --project MY_PROJECT_ID
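
If the Vertex AI API is not yet enabled for the project, one way to enable it is with the gcloud CLI (MY_PROJECT_ID as above):

gcloud services enable aiplatform.googleapis.com --project MY_PROJECT_ID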

Configuration

  1. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_AI_LLM_PROVIDER: "vertex-ai"
  KEEPER_GATEWAY_AI_MODEL: "<your-model-id>"
  KEEPER_GATEWAY_AI_LOCATION: "<your-location>"
OpenAI

Configuration

Before you begin, create an API key in the OpenAI Platform dashboard.

  1. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_AI_LLM_PROVIDER: "openai"
  KEEPER_GATEWAY_AI_API_KEY: "<your-api-key>"
  KEEPER_GATEWAY_AI_MODEL: "<your-model-id>"
Azure OpenAI

Configuration

  1. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_AI_LLM_PROVIDER: "azure-openai"
  KEEPER_GATEWAY_AI_RESOURCE_NAME: "<your-resource-name>"
  KEEPER_GATEWAY_AI_DEPLOYMENT_ID: "<your-deployment-id>"
  KEEPER_GATEWAY_AI_API_VERSION: "<your-api-version>"
  KEEPER_GATEWAY_AI_API_KEY: "<your-api-key>"
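
For reference, these values map onto the endpoint the Gateway will call; Azure OpenAI chat completion requests go to a URL of the form:

https://<your-resource-name>.openai.azure.com/openai/deployments/<your-deployment-id>/chat/completions?api-version=<your-api-version>
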

Native Installation Method

Windows Installation Instructions

To configure the environment variables for the Keeper Gateway service on Windows, follow these steps:

Open PowerShell as Administrator and set the variables at the machine scope:

setx KEEPER_GATEWAY_AI_LLM_PROVIDER "<your_provider_name>" /M
setx KEEPER_GATEWAY_AI_BASE_URL "<your-base-url>" /M
setx KEEPER_GATEWAY_AI_API_KEY "<your-api-key>" /M
setx KEEPER_GATEWAY_AI_MODEL "<your-model-id>" /M
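
Optionally, you can confirm a machine-scoped value was written (standard PowerShell, shown here for one variable):

[System.Environment]::GetEnvironmentVariable("KEEPER_GATEWAY_AI_MODEL", "Machine")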

Restart the Gateway service so it picks up the new environment:

Restart-Service -DisplayName "Keeper Gateway Service"
Linux Installation Instructions

To configure the environment variables for the Keeper Gateway service on Linux, follow these steps:

Edit the systemd service file:

sudo vi /etc/systemd/system/keeper-gateway.service

Add Environment= lines for the environment variables required by your LLM provider, based on the supported LLM Providers above.

Environment=KEEPER_GATEWAY_AI_LLM_PROVIDER="<your_provider_name>"
Environment=KEEPER_GATEWAY_AI_BASE_URL="<your-base-url>"
Environment=KEEPER_GATEWAY_AI_API_KEY="<your-api-key>"
Environment=KEEPER_GATEWAY_AI_MODEL="<your-model-id>"

Reload the daemon and restart the Gateway service:

# reload the daemon
sudo systemctl daemon-reload

# optionally, validate that the environment variables are set up properly
sudo systemctl show keeper-gateway.service | grep Environment=

# restart the keeper gateway service
sudo systemctl restart keeper-gateway.service
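
Alternatively, a drop-in override keeps your changes out of the packaged unit file; this is standard systemd practice and assumes the same unit name as above:

# open an editor for a drop-in override; systemd reloads automatically on save
sudo systemctl edit keeper-gateway.service

# add the following to the override, then restart the service:
# [Service]
# Environment="KEEPER_GATEWAY_AI_LLM_PROVIDER=<your_provider_name>"
# Environment="KEEPER_GATEWAY_AI_API_KEY=<your-api-key>"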

Reviewing Session Summaries

Access Session Recordings

Each analyzed session receives an AI-generated summary:

  1. Access the Session Recordings section in the Vault UI: right-click on the record, or click the options icon, and select "Session Activity"

  2. Click on a session row with KeeperAI analysis to open the Session Analysis popup for detailed summaries of each command executed during the session

  3. Click on the play button to watch the session playback in real time

  4. Click on the download button to save the session recording files locally


Integration with ARAM Events

KeeperAI automatically generates ARAM events for detected threats and resource configurations, enabling integration with your existing security workflow.


Troubleshooting

Common Issues

  • Missed Detections: Adjust sensitivity thresholds in the risk level settings or add custom keyword patterns through the Exceptions popup

  • False Positives: Refine pattern matching rules or lower risk thresholds for specific commands using custom exceptions

  • Performance Issues: Check resource allocation for on-premises LLM deployments and verify network connectivity to your LLM provider

  • Session Analysis Not Appearing: Ensure KeeperAI is enabled on both the Keeper Gateway configuration and the individual resource

    • Download session recording files to check for summary.json

      • No summary.json file means KeeperAI was not enabled for that session

      • Corrupted or incomplete summary.json indicates an error may have occurred during final processing; contact support

    • Sessions recorded before KeeperAI activation will not have analysis data

  • LLM Connection Errors: Verify your LLM provider credentials and endpoint configuration in the gateway settings

Support Resources

For additional assistance with KeeperAI, email [email protected].


FAQ

Q: Can I use my own LLM model with KeeperAI? A: Yes, KeeperAI supports any provider that implements the OpenAI /chat/completions API endpoint.

Q: Does KeeperAI work in real-time? A: Yes, KeeperAI analyzes privileged sessions in real-time after each user entry and saves completed session recordings and analysis in encrypted files for later review.

Q: How does KeeperAI handle sensitive information? A: KeeperAI stores session recordings and analysis in encrypted files. In a future release, KeeperAI will include enhanced Personally Identifiable Information (PII) detection with options to remove PII before sending to the LLM or remove PII from LLM responses.

Q: How does data flow between the Gateway, LLM provider, and Keeper's systems? A: KeeperAI uses a secure, multi-step communication flow to ensure data privacy and security:

  1. Gateway ↔ LLM Provider: The Keeper Gateway communicates directly with your configured LLM provider via encrypted HTTPS to analyze session commands in real time.

  2. Gateway → Keeper: After receiving the LLM analysis, the Gateway encrypts all session data and analysis results using a unique record key before transmitting to Keeper's endpoint for storage.

Q: Can I run KeeperAI in air-gapped environments? A: Yes, using on-premises LLM deployment, you can interact with a local service instead of third-party or internet-accessible services.

Q: What's the expected cost per session analysis? A: To help calculate costs: the risk analysis prompt used for each command is approximately 550 tokens, and the final summary prompt that summarizes all commands is around 400 tokens, excluding the user's command text. Additional tokens will be used depending on the context and length of the input commands.
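
As a rough illustration using those figures, a session with 100 commands would consume on the order of 100 × 550 + 400 = 55,400 prompt tokens, before adding tokens for the command text itself and for the model's responses.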

Q: What data is sent to third-party LLM providers, and how is it protected? A: Command text is sent via encrypted HTTPS to your configured LLM provider. The LLM response is then encrypted before being saved to S3. All traffic occurs directly from the Gateway to the LLM provider. To maintain zero-knowledge and zero-trust, no traffic is ever sent to Keeper without first being encrypted by your private key.

Q: Can I export threat detection data for compliance reporting? A: Yes, session analysis data can be exported in JSON format from the Session Analysis popup for compliance reporting purposes.
