KeeperAI

AI-powered threat detection for KeeperPAM privileged sessions

Overview

KeeperAI is an Agentic AI-powered threat detection system that automatically monitors and analyzes user sessions to identify suspicious or malicious behavior. The system, which is built using a Sovereign AI framework, works at the gateway level to generate real-time risk analyses from session recordings, helping security teams quickly detect potential threats.

Key Features

  • Automated Session Analysis: Analyze session metadata, keystroke logs, and command execution logs to detect unusual behavior

  • Search: Search across sessions to locate specific keywords or activity

  • Threat Classification: Automatically categorize detected threats and assign risk levels

  • Flexible Deployment: Support for both cloud-based and on-premises LLM inference

  • Customizable Configuration: Adjust risk parameters and detection rules to fit your environment

Supported Protocols

Current Support

  • SSH

Coming Soon

  • Database protocols

  • RDP

  • VNC

  • RBI

Getting Started

Prerequisites

  • PAM Gateway version 1.5.4 or newer

  • Docker environment for on-premises deployments

  • Access to LLM inference services (See supported LLM provider options below)

Activation

Activating KeeperAI on a Resource

  1. Log in to the Vault UI as an administrator

  2. Navigate to the resource management section

  3. Select the SSH-based resource you want to protect

  4. Find the "KeeperAI" section and toggle the activation switch to "On"

  5. Save your changes

KeeperAI in PAM Settings

Note: For protocols not yet supported, the UI will indicate that classification models for these protocols are coming soon.

LLM Integration

Overview

KeeperAI leverages Large Language Models (LLMs) to power its threat detection capabilities. The PAM Gateway communicates with any LLM of your choice to analyze session data and generate intelligent security insights. This integration is fundamental to KeeperAI's ability to detect suspicious patterns and provide detailed session summaries.

Supported LLM Providers

KeeperAI is designed to work with multiple LLM providers, giving you flexibility in your deployment:

OpenAI-Compatible API

Support for any API provider that implements OpenAI’s request and response formats for the /chat/completions endpoint. A non-exhaustive list of providers you can use:

  • Azure AI Foundry

  • Cerebras

  • Groq

  • Hugging Face

  • Keywords AI

  • LiteLLM

  • LM Studio

  • Ollama

  • OpenRouter

  • Tinfoil

  • TogetherAI

  • Unify AI

  • vLLM

Configuration

  1. Ensure your Gateway has the appropriate permissions to access the LLM service

  2. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "openai-generic"
  KEEPER_GATEWAY_SENTRY_BASE_URL: "<your-base-url>"
  KEEPER_GATEWAY_SENTRY_API_KEY: "<your-api-key>"
  KEEPER_GATEWAY_SENTRY_MODEL: "<your-model-id>"
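
For example, a minimal sketch pointing the Gateway at a local Ollama instance (the base URL, API key, and model ID below are illustrative; substitute your provider's values):

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "openai-generic"
  # Ollama serves an OpenAI-compatible API under /v1; from inside a container,
  # host.docker.internal may be needed in place of localhost
  KEEPER_GATEWAY_SENTRY_BASE_URL: "http://localhost:11434/v1"
  # Ollama does not validate the key, but a non-empty placeholder is typically required
  KEEPER_GATEWAY_SENTRY_API_KEY: "ollama"
  KEEPER_GATEWAY_SENTRY_MODEL: "llama3.1"
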
AWS Bedrock

Amazon Bedrock lets you get started quickly, privately customize foundation models with your own data, and securely integrate and deploy them into your applications using AWS tools, without having to manage any infrastructure.

Configuration

  1. Ensure that the IAM role for the Gateway has the AmazonBedrockFullAccess policy attached

  2. Request access through AWS Console to an Amazon Bedrock foundation model

  3. Select a model from the supported list and note the corresponding model ID.

  4. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "aws-bedrock"
  KEEPER_GATEWAY_SENTRY_MODEL: "<your-model-id>"
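
For example, to use Claude 3.5 Sonnet via Bedrock (the model ID below is illustrative; use the ID you noted in step 3):

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "aws-bedrock"
  # Illustrative Bedrock model ID; your account must have been granted access to it
  KEEPER_GATEWAY_SENTRY_MODEL: "anthropic.claude-3-5-sonnet-20240620-v1:0"
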
Anthropic

Configuration

Before you begin, create an API key in the Anthropic Console.

  1. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "anthropic"
  KEEPER_GATEWAY_SENTRY_API_KEY: "<your-api-key>"
  KEEPER_GATEWAY_SENTRY_MODEL: "<your-model-id>"
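
For example, with a Claude model (the model ID below is illustrative):

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "anthropic"
  KEEPER_GATEWAY_SENTRY_API_KEY: "<your-api-key>"
  # Illustrative Anthropic model ID
  KEEPER_GATEWAY_SENTRY_MODEL: "claude-3-5-sonnet-20241022"
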
Google AI: Gemini

Configuration

Before you begin, create an API key in the Google AI dashboard.

  1. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "google-ai"
  KEEPER_GATEWAY_SENTRY_API_KEY: "<your-api-key>"
  KEEPER_GATEWAY_SENTRY_MODEL: "<your-model-id>"
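
For example, with a Gemini model (the model ID below is illustrative):

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "google-ai"
  KEEPER_GATEWAY_SENTRY_API_KEY: "<your-api-key>"
  # Illustrative Google AI model ID
  KEEPER_GATEWAY_SENTRY_MODEL: "gemini-1.5-flash"
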
Google: Vertex

You need an account with a project ID that has been authorized to use Vertex AI. When administering your Google Cloud account, be sure to enable the Vertex AI API, and specify your project’s ID when authenticating with gcloud auth:

gcloud auth application-default login --project MY_PROJECT_ID

  • If you’re using Google Cloud application default credentials, you can expect authentication to work out of the box.

  • Setting options.credentials will take precedence and force vertex-ai to load service account credentials from that file path.

Configuration

  1. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "vertex-ai"
  KEEPER_GATEWAY_SENTRY_MODEL: "<your-model-id>"
  KEEPER_GATEWAY_SENTRY_LOCATION: "<your-location>"
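
For example, using a Gemini model served from us-central1 (both the model ID and the location below are illustrative):

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "vertex-ai"
  # Illustrative Vertex AI model ID and region
  KEEPER_GATEWAY_SENTRY_MODEL: "gemini-1.5-pro"
  KEEPER_GATEWAY_SENTRY_LOCATION: "us-central1"
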
OpenAI

Configuration

Before you begin, create an API key in the OpenAI Platform dashboard.

  1. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "openai"
  KEEPER_GATEWAY_SENTRY_API_KEY: "<your-api-key>"
  KEEPER_GATEWAY_SENTRY_MODEL: "<your-model-id>"
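
For example, with a GPT model (the model ID below is illustrative):

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "openai"
  KEEPER_GATEWAY_SENTRY_API_KEY: "<your-api-key>"
  # Illustrative OpenAI model ID
  KEEPER_GATEWAY_SENTRY_MODEL: "gpt-4o"
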
Azure OpenAI

Configuration

  1. Configure the Gateway with the following environment variables for the gateway service in your Docker Compose file:

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "azure-openai"
  KEEPER_GATEWAY_SENTRY_RESOURCE_NAME: "<your-resource-name>"
  KEEPER_GATEWAY_SENTRY_DEPLOYMENT_ID: "<your-deployment-id>"
  KEEPER_GATEWAY_SENTRY_API_VERSION: "<your-api-version>"
  KEEPER_GATEWAY_SENTRY_API_KEY: "<your-api-key>"
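
For example, for a deployment named gpt-4o in a resource named my-openai-resource (all values below are illustrative; the API version must be one supported by your Azure OpenAI resource):

environment:
  KEEPER_GATEWAY_SENTRY_LLM_PROVIDER: "azure-openai"
  # Illustrative resource name, deployment ID, and API version
  KEEPER_GATEWAY_SENTRY_RESOURCE_NAME: "my-openai-resource"
  KEEPER_GATEWAY_SENTRY_DEPLOYMENT_ID: "gpt-4o"
  KEEPER_GATEWAY_SENTRY_API_VERSION: "2024-02-01"
  KEEPER_GATEWAY_SENTRY_API_KEY: "<your-api-key>"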

Threat Detection and Response

Risk Classification

KeeperAI uses a proprietary classifier to categorize threats into risk levels:

  • Critical: Severe security threats requiring immediate action

  • High: Significant security concerns that should be addressed promptly

  • Medium: Potential security issues requiring monitoring

Automatic Response Actions

You can configure automatic responses based on detected threat levels:

  1. Navigate to the KeeperAI configuration section

  2. Define pattern-matching keywords using regex (see the example patterns below)

  3. Assign these patterns to Critical, High, or Medium threat levels

  4. Optionally enable automatic session termination for specific threat levels
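
As a sketch of the kinds of patterns you might assign to each level (the regex values and the YAML grouping below are purely illustrative; enter patterns in whatever format the KeeperAI configuration UI presents):

critical:
  - 'rm\s+-rf\s+/'            # destructive recursive delete from the filesystem root
  - 'curl[^|]*\|\s*(ba)?sh'   # piping a downloaded script straight into a shell
high:
  - 'chmod\s+777'             # world-writable permission changes
medium:
  - 'wget\s+http'             # file downloads worth monitoring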

Reviewing Session Summaries

Each analyzed session receives an AI-generated summary:

  1. Access the Session Recordings section in the Vault UI

  2. Select a session with KeeperAI analysis

  3. View the risk assessment, including:

    • Overall risk level

    • Detected threat categories

    • Detailed session summary

    • Timeline of suspicious activities

AI Session Summary and Risk Assessment

Advanced Configuration

Customizing Detection Parameters

Adjust the sensitivity and specifics of threat detection:

  1. Access the KeeperAI configuration page

  2. Modify the threshold settings for different threat categories

  3. Update keyword patterns for specific threats

  4. Save your configuration changes

Integration with ARAM Events

KeeperAI automatically generates ARAM events for detected threats, enabling integration with your existing security workflow.

Troubleshooting

Common Issues

  • Missed Detections: Adjust sensitivity thresholds or add custom keyword patterns

  • False Positives: Refine pattern matching rules or adjust risk thresholds

  • Performance Issues: Check resource allocation for on-premises LLM deployments

Support Resources

For additional assistance with KeeperAI, email pam@keepersecurity.com.

FAQ

Q: Can I use my own LLM model with KeeperAI? A: Yes, KeeperAI supports any provider that implements the OpenAI /chat/completions API endpoint.

Q: Does KeeperAI work in real-time? A: Yes, KeeperAI can analyze both real-time sessions and completed session recordings using the same analysis logic.

Q: How does KeeperAI handle sensitive information? A: In a later release, KeeperAI will include Personally Identifiable Information (PII) detection and removal from session summaries.