KeeperAI
AI-powered threat detection for KeeperPAM privileged sessions

Overview
KeeperAI is an Agentic AI-powered threat detection system that automatically monitors and analyzes KeeperPAM privileged sessions to identify suspicious or malicious behavior. Built on a Sovereign AI framework, the system performs all analysis at the gateway level, analyzing each command as it is entered and creating an encrypted session summary for later review. This helps security teams quickly detect potential threats during active privileged sessions.
Key Features
Automated Session Analysis: Analyzes session metadata, keystroke logs, and command execution logs to detect unusual behavior
Session Search: Searches across privileged sessions to locate specific keywords or activity
Threat Classification: Automatically categorizes detected threats and assigns risk levels
Flexible Deployment: Supports both third-party cloud-based and on-premises (self-hosted) LLM inference
Customizable Configuration: Lets you adjust risk parameters and detection rules to fit your environment
Supported Protocols
Currently Supported:
SSH
Coming Soon:
Database protocols (MySQL/PostgreSQL)
RDP
VNC
RBI
Setup Steps
Install Keeper Gateway version 1.7.0 or newer
Ensure access to an LLM inference service (either cloud or self-hosted)
Activate your LLM provider in your Keeper Gateway deployment
Set up the PAM Configuration to allow KeeperAI
Activate KeeperAI on the resource
Follow the instructions below to set up your Keeper Gateway with your preferred LLM provider.
PAM Configuration Settings
Go to PAM Configuration under the Keeper Secrets Manager tab.
Select your resource and scroll to the KeeperAI Features section.
Toggle the setting to enable it.

Activating Threat Detection on a Resource
Edit PAM Settings for your selected resource.
Go to the Connections tab.
Enable all options under Session Recording.

Navigate to the KeeperAI tab and switch on the Enable KeeperAI toggle.

By default, KeeperAI automatically classifies commands into the appropriate Risk Level categories.
To enforce stronger controls, you can enable Terminate Session for a given risk level. When active, any command classified at that level will immediately end the session.

KeeperAI Exceptions & Custom Rules
Use the Exceptions pop-up to customize how specific keywords or patterns are classified. Add from the provided dropdown examples, or enter your own plain text or regex strings (for example, a regex such as rm\s+-rf\s+/ to always flag recursive filesystem deletions).

Reviewing Session Summaries
KeeperAI generates AI-powered summaries for each recorded session, helping security teams quickly review and understand user activity. To view a summary, open the Session Recordings section in the Vault UI, right-click the monitored resource (or click the options icon ⋮), and select Session Activity.

Open Analysis: Click on a session row to launch the Session Analysis popup, showing detailed summaries of each command executed.
Playback: Click the Play button to watch the full session recording in real time.
Download: Use the Download button to save session recording files locally for offline review.
When downloading session recording files locally, please note that these files will be unencrypted and may contain sensitive information. Ensure you store and handle these files securely according to your organization's data protection policies.



Risk Classifications
KeeperAI will categorize commands into risk levels for threat detection:
Critical: Severe security threats requiring immediate action
High: Significant security concerns that should be addressed promptly
Medium: Potential security issues requiring monitoring
Low: Normal or benign behavior that does not require monitoring
LLM Integration
Overview
KeeperAI leverages Large Language Models (LLMs) to power its threat detection capabilities. The Keeper Gateway communicates with any LLM of your choice to analyze session data and generate intelligent security insights. This integration is fundamental to KeeperAI's ability to detect suspicious patterns and provide detailed session summaries.
Disclaimer: AI predictions are inherently probabilistic and may not always be accurate. The selection of LLM providers and models is made at the user's discretion, and KeeperAI cannot guarantee that the AI will fully understand or correctly interpret tasks. Users are encouraged to exercise caution and validate AI outputs as part of their decision-making processes.
LLM Provider Setup Instructions
KeeperAI is designed to work with multiple LLM providers, giving you flexibility in your deployment. Both self-hosted and cloud-based LLMs are compatible. If you have any questions or would like to know more about an LLM provider, please email us at [email protected] and we'll be happy to assist.
Docker Installation Method
OpenAI-Compatible API
Support for any API provider that implements OpenAI's request and response formats for the /chat/completions endpoint.
Configuration
Ensure your Gateway has the appropriate permissions to access the LLM service
Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:
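A minimal sketch of the gateway service entry is shown below. KEEPER_GATEWAY_AI_BASE_URL is the variable documented here; the remaining variable names are illustrative placeholders, so confirm the exact names for your Gateway version:

```yaml
services:
  gateway:
    environment:
      # Must include the http:// or https:// protocol prefix (see the note below)
      KEEPER_GATEWAY_AI_BASE_URL: "https://your-llm-provider.com/v1"
      # Illustrative placeholders -- confirm the exact variable names for your Gateway version
      KEEPER_GATEWAY_AI_API_KEY: "<your-api-key>"
      KEEPER_GATEWAY_AI_MODEL: "<model-id>"
```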
The KEEPER_GATEWAY_AI_BASE_URL must include a valid protocol prefix (http:// or https://). If the protocol is missing, Keeper Gateway will throw a configuration error during startup.
For example:
✅ https://your-llm-provider.com/v1
❌ your-llm-provider.com/v1
Non-exhaustive examples of OpenAI-compatible providers include self-hosted inference servers such as Ollama and vLLM, as well as hosted services like OpenRouter and Together AI.
AWS Bedrock
Amazon Bedrock lets you get started quickly, privately customize foundation models with your own data, and securely integrate and deploy them into your applications using AWS tools, without having to manage any infrastructure.
Configuration
Ensure that the IAM role for the Gateway has the AmazonBedrockFullAccess policy attached.
Request access to an Amazon Bedrock foundation model through the AWS Console.
Select a model from the supported list and note the corresponding model ID.
Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:
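A hedged sketch follows; the provider selector and model variable names are illustrative, while AWS_REGION is the standard AWS SDK variable (authentication comes from the attached IAM role):

```yaml
services:
  gateway:
    environment:
      KEEPER_GATEWAY_AI_PROVIDER: "bedrock"    # illustrative name
      # The model ID noted in the previous step, e.g. an Anthropic Claude model on Bedrock
      KEEPER_GATEWAY_AI_MODEL: "anthropic.claude-3-5-sonnet-20240620-v1:0"
      AWS_REGION: "us-east-1"                  # standard AWS SDK variable
```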
Anthropic
Configuration
Before you begin, create an API key in the Anthropic Console.
Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:
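For example (variable names are illustrative, as above):

```yaml
services:
  gateway:
    environment:
      KEEPER_GATEWAY_AI_PROVIDER: "anthropic"              # illustrative name
      KEEPER_GATEWAY_AI_API_KEY: "sk-ant-..."              # API key from the Anthropic Console
      KEEPER_GATEWAY_AI_MODEL: "claude-3-5-sonnet-latest"
```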
Google AI: Gemini
Configuration
Before you begin, create an API key in the Google AI dashboard.
Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:
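For example (variable names are illustrative, as above):

```yaml
services:
  gateway:
    environment:
      KEEPER_GATEWAY_AI_PROVIDER: "gemini"                     # illustrative name
      KEEPER_GATEWAY_AI_API_KEY: "<key from the Google AI dashboard>"
      KEEPER_GATEWAY_AI_MODEL: "gemini-1.5-pro"
```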
Google: Vertex
You need to use an account with a project ID that has been authorized to use Vertex AI. When administering your Google Cloud account, be sure to enable Vertex AI, and specify your project ID when authenticating with gcloud auth:
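For example, using standard gcloud commands (a sketch; substitute your own project ID):

```bash
# Point gcloud at the project that has Vertex AI enabled
gcloud config set project YOUR_PROJECT_ID
# Create application default credentials on the Gateway host
gcloud auth application-default login
```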
If you’re using Google Cloud application default credentials, you can expect authentication to work out of the box.
Setting options.credentials will take precedence and force vertex-ai to load service account credentials from that file path.
Configuration
Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:
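A sketch using the standard Google Cloud environment variables (the provider selector name is illustrative; omit GOOGLE_APPLICATION_CREDENTIALS if you rely on application default credentials):

```yaml
services:
  gateway:
    environment:
      KEEPER_GATEWAY_AI_PROVIDER: "vertex-ai"      # illustrative name
      GOOGLE_CLOUD_PROJECT: "<your-project-id>"    # standard Google Cloud variable
      GOOGLE_APPLICATION_CREDENTIALS: "/path/to/service-account.json"
```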
OpenAI
Configuration
Before you begin, create an API key in the OpenAI Platform dashboard.
Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:
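For example (variable names are illustrative, as above):

```yaml
services:
  gateway:
    environment:
      KEEPER_GATEWAY_AI_PROVIDER: "openai"     # illustrative name
      KEEPER_GATEWAY_AI_API_KEY: "sk-..."      # API key from the OpenAI Platform dashboard
      KEEPER_GATEWAY_AI_MODEL: "gpt-4o"
```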
Azure OpenAI
Configuration
Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:
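For example (variable names are illustrative, as above; note that Azure OpenAI addresses models by deployment name rather than raw model ID):

```yaml
services:
  gateway:
    environment:
      KEEPER_GATEWAY_AI_PROVIDER: "azure-openai"    # illustrative name
      KEEPER_GATEWAY_AI_BASE_URL: "https://<resource-name>.openai.azure.com"
      KEEPER_GATEWAY_AI_API_KEY: "<azure-openai-key>"
      KEEPER_GATEWAY_AI_MODEL: "<deployment-name>"
```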
Native Installation Method
Windows Installation Instructions
To configure the environment variables for the Keeper Gateway service on Windows, follow these steps:
Open PowerShell as Administrator and set the variables at the machine scope:
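A sketch using the built-in .NET environment API (variable names other than KEEPER_GATEWAY_AI_BASE_URL are illustrative):

```powershell
# Machine scope makes the variables visible to the Gateway service
[System.Environment]::SetEnvironmentVariable('KEEPER_GATEWAY_AI_BASE_URL', 'https://your-llm-provider.com/v1', 'Machine')
[System.Environment]::SetEnvironmentVariable('KEEPER_GATEWAY_AI_API_KEY', '<your-api-key>', 'Machine')
```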
Restart the Gateway service so it picks up the new environment:
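For example (the service name below is an assumption; confirm it first with Get-Service):

```powershell
Get-Service *keeper*                    # confirm the exact service name
Restart-Service -Name 'KeeperGateway'   # assumed name -- replace with yours
```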
Linux Installation Instructions
To configure the environment variables for the Keeper Gateway service on Linux, follow these steps:
Edit the systemd service file:
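For example (the unit name is an assumption; adjust it to match your deployment):

```bash
sudo systemctl edit --full keeper-gateway
```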
Extend the Environment= line with your required environment variables based on the supported LLM Providers above.
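For instance, for an OpenAI-compatible provider (variable names other than KEEPER_GATEWAY_AI_BASE_URL are illustrative):

```ini
[Service]
Environment="KEEPER_GATEWAY_AI_BASE_URL=https://your-llm-provider.com/v1" "KEEPER_GATEWAY_AI_API_KEY=<your-api-key>"
```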
Reload the daemon and restart the Gateway service:
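For example (again assuming the unit name):

```bash
sudo systemctl daemon-reload
sudo systemctl restart keeper-gateway   # unit name is an assumption
```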
Integration with ARAM Events
KeeperAI automatically generates ARAM events for detected threats and resource configurations, enabling integration with your existing security workflow.
Troubleshooting
Common Issues
Missed Detections: Adjust sensitivity thresholds in the risk level settings or add custom keyword patterns through the Exceptions popup
False Positives: Refine pattern matching rules or lower risk thresholds for specific commands using custom exceptions
Performance Issues: Check resource allocation for on-premises LLM deployments and verify network connectivity to your LLM provider
Session Analysis Not Appearing: Ensure KeeperAI is enabled on both the Keeper Gateway configuration and the individual resource
Download the session recording files and check for a summary.json file:
No summary.json file means KeeperAI was not enabled for that session
A corrupted or incomplete summary.json indicates an error may have occurred during final processing - contact support
Sessions recorded before KeeperAI activation will not have analysis data
LLM Connection Errors: Verify your LLM provider credentials and endpoint configuration in the gateway settings
Support Resources
For additional assistance with KeeperAI, email [email protected].
FAQ
Q: Can I use my own LLM model with KeeperAI?
A: Yes, KeeperAI supports any provider implementing the OpenAI /chat/completions API endpoint.
Q: Does KeeperAI work in real-time? A: Yes, KeeperAI analyzes privileged sessions in real-time after each user entry and saves completed session recordings and analysis in encrypted files for later review.
Q: How does KeeperAI handle sensitive information? A: KeeperAI stores session recordings and analysis in encrypted files. In a future release, KeeperAI will include enhanced Personally Identifiable Information (PII) detection with options to remove PII before sending to the LLM or remove PII from LLM responses.
Q: How does data flow between the Gateway, LLM provider, and Keeper's systems? A: KeeperAI uses a secure, multi-step communication flow to ensure data privacy and security: 1. Gateway ↔ LLM Provider: The Keeper Gateway communicates directly with your configured LLM provider via encrypted HTTPS to analyze session commands in real-time 2. Gateway → Keeper: After receiving the LLM analysis, the Gateway encrypts all session data and analysis results using a unique record key before transmitting to Keeper's endpoint for storage.
Q: Can I run KeeperAI in air-gapped environments? A: Yes, using on-premises LLM deployment, you can interact with a local service instead of third-party or internet-accessible services.
Q: What's the expected cost per session analysis? A: To help calculate costs, our risk analysis prompts used for each command are approximately 550 tokens and final summary prompts that summarize all commands are around 400 tokens, excluding user input command context. Additional tokens will be used depending on the context and length of the input commands.
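As a rough illustration, a session containing 100 commands would use on the order of 100 × 550 + 400 ≈ 55,400 prompt tokens, plus the tokens consumed by the command text itself and the model's responses.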
Q: What data is sent to third-party LLM providers, and how is it protected? A: Command text is sent via encrypted HTTPS to your configured LLM provider. The LLM response is then encrypted before being saved to S3. All traffic occurs directly from the Gateway to the LLM provider. To maintain zero knowledge and zero trust, no traffic is ever sent to Keeper without first being encrypted by your private key.
Q: Can I export threat detection data for compliance reporting? A: Yes, session analysis data can be exported in JSON format from the Session Analysis popup for compliance reporting purposes.