Reporting Commands

Commands for audit logging and reporting capabilities


Keeper Command Reference

Whether you use the interactive shell, the CLI, or a JSON config file, Keeper supports the following commands. Each command supports additional parameters and options.

To get help on a particular command, run:

help <command>

Reporting Commands

Command                 Explanation
action-report           Show users that haven't performed a specific action in a given number of days
aging-report            Determine which record passwords have NOT been changed in X days
audit-log               Export the enterprise audit and event logs
audit-report            Show a customized report of audit events
compliance-report       See information about records in vaults of users across the enterprise
external-shares-report  Display information about records shared with external accounts
msp-legacy-report       Display information on managed company plans and available licenses
security-audit-report   Show report of password security strength for each user in the enterprise
breachwatch report      Show report of Breachwatch security scores for each user in the enterprise
shared-records-report   Display information about shared records
share-report            Show a report of shared records
user-report             Show a report of user logins

audit-log command

Command: audit-log

Details: Download event data from the Advanced Reporting & Alerts Module ("ARAM") to your local Commander instance, and then push the events to a SIEM provider. For a fully automated process, we recommend the cloud-based SIEM export available in the Keeper Admin Console.

The audit-log command provides a SIEM push capability if the Keeper backend servers are not able to access the target endpoint. It can also be useful if you would like to just export events locally to a JSON file. Note that a Keeper record is used for storing the state of event exports, so that repeated use of the audit-log command will pick up where it left off.

Running audit-log for the first time can take a long time to process if there is a lot of Keeper usage history, since the export starts from the beginning of time.
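The incremental behavior can be pictured as a cursor over event timestamps. A minimal sketch of the idea (the real state lives in the Keeper record named by --record; the function and field names here are illustrative, not Commander's code):

```python
# Illustrative sketch of incremental export state (not Commander's actual code).
# A stored "last_event_time" cursor ensures each run exports only newer events.

def export_incremental(events, state):
    """Export events newer than the stored cursor and advance the cursor."""
    cursor = state.get("last_event_time", 0)
    new_events = [e for e in events if e["created"] > cursor]
    if new_events:
        state["last_event_time"] = max(e["created"] for e in new_events)
    return new_events

events = [
    {"id": 1, "created": 100},
    {"id": 2, "created": 200},
]
state = {}
first = export_incremental(events, state)   # exports both events
events.append({"id": 3, "created": 300})
second = export_incremental(events, state)  # exports only the new event
```

This is why repeated runs "pick up where they left off": only the cursor, not the full history, is consulted on each call.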

Switches:

--anonymize Anonymize the audit log by replacing each email and user name with the corresponding enterprise user ID. If a user was removed, or a user's email was changed, the audit report shows that entry as a deleted user.
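The substitution performed by --anonymize can be sketched roughly as follows (a hypothetical illustration; the field names and mapping are assumptions, not Commander's internals):

```python
# Hypothetical illustration of --anonymize: emails/usernames are replaced by
# the corresponding enterprise user id before events leave Commander.

def anonymize(event, user_ids):
    out = dict(event)
    for field in ("username", "to_username", "from_username"):
        if field in out:
            # Unknown users (removed, or email changed) become "deleted user".
            out[field] = user_ids.get(out[field], "deleted user")
    return out

user_ids = {"alice@example.com": 1001}
event = {"username": "alice@example.com", "to_username": "gone@example.com"}
anon = anonymize(event, user_ids)
# anon["username"] == 1001; anon["to_username"] == "deleted user"
```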

--target <{splunk, sumo, syslog, syslog-port, azure-la, json}> Choose export target

  • splunk - Export events to Splunk HTTP Event Collector

  • sumo - Export events to Sumo Logic HTTP Event Collector

  • syslog - Export events to a local file in syslog format

  • syslog-port - Export events in syslog format to TCP port. Both plain and SSL connections are supported

  • azure-la - Export events to Azure Log Analytics to custom log named Keeper_CL

  • json - Export events to a local file in JSON format

--record <RECORD NAME OR UID> Select a Keeper record to store the current export state. This will also contain the secrets needed to connect to the target endpoint, such as Splunk parameters.

--shared-folder-uid <SHARED FOLDER UID> Filter: Shared Folder UID(s). Overrides existing setting in config record and sets new field value.

--node-id <NODE ID> Filter: Node ID(s). Overrides existing setting in config record and sets new field value.

--days <DAYS> Filter: Max event age in days. Overrides existing "last_event_time" value in config record.

Examples:

audit-log --target=splunk
audit-log --record BhRRhjeL4armInSMqv2_zQ --target=json
audit-log --record=audit-log-json --target=json --days=30 --node-id=368009977790518
audit-log --shared-folder-uid=8oGAJPplH2DFUQ0obwox7Q --target=splunk --record=Splunk-Log
  1. Download all event data and push to Splunk endpoint HTTP event collector. Will be prompted to enter Splunk HEC endpoint.

  2. Download all event data referencing the record UID and output in JSON format.

  3. Get event data within the past 30 days for node with ID = 368009977790518, using the vault record named audit-log-json to obtain additional query parameters and update them as necessary.

  4. Get event data for the shared-folder with UID = 8oGAJPplH2DFUQ0obwox7Q and export to Splunk HTTP Event Collector, using the custom field values set in the Splunk-Log record to both populate event-data query parameters and to provide info (e.g. HEC endpoint, credentials) needed to export to the appropriate Splunk account.

audit-report command

Running reports requires the ARAM add-on.

Command: audit-report

Details: Generate advanced ad-hoc customized audit event reports in raw and summarized formats.

Event Properties:

  id                event ID
  created           event time
  username          user that created audit event
  to_username       user that is audit event target
  from_username     user that is audit event source
  ip_address        IP address
  geo_location      location
  audit_event_type  audit event type
  keeper_version    Keeper application
  channel           2FA channel
  status            Keeper API result_code
  record_uid        Record UID
  shared_folder_uid Shared Folder UID
  node_id           Node ID (enterprise events only)
  team_uid          Team UID (enterprise events only)

Parameters:

Search string (optional): limit results to rows containing the specified string.

Switches:

--report-type <{raw, dim, hour, day, week, month, span}> type of report to show (Default raw)

  • raw - all audit events. Optionally use --report-format to change the format (Default list)

  • hour - show a report summarized by hour

  • day - show a report summarized by day

  • week - show a report summarized by week

  • month - show a report summarized by month

  • span - show a table of audit events with only number of occurrences shown by default. use --columns to add additional columns

  • dim - see list of all available values and details for a specified column. Include multiple columns to show detail lists one after the other. Must use --columns

--report-format <{message, fields}> choose output format (for "raw" reports only)

  • message - (default) show columns:

    • created

    • audit_event_type

    • username

    • ip_address

    • keeper_version

    • geo_location

    • message

  • fields - show columns:

    • created

    • audit_event_type

    • username

    • ip_address

    • keeper_version

    • geo_location

    • record_uid

--columns <COLUMN> decide what columns to show (ignored for "raw" reports)

available columns:

  • audit_event_type

  • username

  • ip_address

  • keeper_version

  • geo_location

  • message

  • created

  • record_uid

  • record_name (Requires Compliance Reports module *)

Usage: use the full switch for each column

audit-report --report-type day --columns username --columns ip_address

--aggregate <{occurrences, first_created, last_created}> aggregated value. Can be repeated. (ignored for raw reports)

  • occurrences - show number of times the event type took place

  • first_created - show the time the first occurrence of the event type took place

  • last_created - show the time the most recent occurrence of the event type took place

Usage: use the full switch for each aggregation you would like to show

audit-report --report-type day --aggregate occurrences --aggregate last_created

--timezone <TIMEZONE> return results for specific timezone

--limit <NUMBER OF ROWS> maximum number of returned rows (default 50, -1 for unlimited)

--order <{desc, asc}> sort order. Sorts based on the first returned column

--created <CREATED DATE> filter results by created date

Format:

use a predefined filter value from the list below:

  • today

  • yesterday

  • last_7_days

  • last_30_days

  • month_to_date

  • last_month

  • year_to_date

  • last_year

or use the following format for a custom date range:

"between %Y-%m-%dT%H:%M:%SZ and %Y-%m-%dT%H:%M:%SZ"

example: "between 2022-01-01 and 2022-06-01"
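Assuming the documented timestamp format, a helper that builds the custom range string might look like:

```python
from datetime import datetime

# Build the custom --created range string in the documented
# "between %Y-%m-%dT%H:%M:%SZ and %Y-%m-%dT%H:%M:%SZ" form.

def created_between(start, end):
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return "between {} and {}".format(start.strftime(fmt), end.strftime(fmt))

rng = created_between(datetime(2022, 1, 1), datetime(2022, 6, 1))
# rng == "between 2022-01-01T00:00:00Z and 2022-06-01T00:00:00Z"
```

As the example above shows, bare dates are also accepted in place of full timestamps.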

--event-type <EVENT CODE> filter by event type. See the list of all available event types in the Enterprise Guide (https://docs.keeper.io/enterprise-guide/event-reporting)

--username <USERNAME> filter by username

--to-username <TARGET'S USERNAME> filter by event's target

--record-uid <RECORD UID> filter by record

--shared-folder-uid <SHARED FOLDER UID> filter by shared folder

--geo-location <GEO LOCATION> filter by geo location. The filter has the format "[[City, ] State,] Country". Run the following command to get the list of available geo locations:

audit-report --report-type=dim --columns=geo_location

Example:

"El Dorado Hills, California, US"

"Bayern,DE" - Bavaria, Germany

"CA" - Canada

--ip-address <IP ADDRESS> filter by the user's IP address. For example, to find the last 5,000 events coming from a particular IP:

audit-report --report-type raw --format csv --limit 5000 --ip-address 11.22.33.44

--device-type <DEVICE TYPE> Keeper device/application and optional version

audit-report --report-type=dim --columns=device_type

Device type filter has format "DeviceType[, Version]"

Example:

"Commander" Keeper Commander all versions

"Web App, 16" The Web Vault application versions "16.x.x"

--format <{table, csv, json}> format of the output; table is default

--max-record-details allow retrieval of additional record-detail data if not found in local cache

Examples

audit-report --report-type raw --limit 5000
audit-report --report-type raw --report-format fields
audit-report --report-type dim --columns audit_event_type
audit-report --report-type day --columns username --columns audit_event_type
audit-report --report-type hour --aggregate occurrences --columns audit_event_type --created today
audit-report --report-type=span --event-type=record_add --event-type=record_update --username=user@mydomain.com --columns=record_uid --created=last_30_days
audit-report --format=table --report-type=day --event-type=open_record --columns username --columns ip_address --columns record_uid --columns record_title --created today
audit-report --report-type=span --event-type=open_record --columns=username --columns=record_title --columns=record_url --aggregate=last_created --max-record-details
  1. Display an audit report with all events, including messages for each event, showing the last 5000 events.

  2. Display an audit report with all events, including the record UID for each event

  3. Show all available audit event types

  4. Display an audit report that includes the event type and username, summarized by day

  5. Display an audit report of the number of each event type that was performed per hour today

  6. Display UIDs of all records created or updated by user@mydomain.com within the last 30 days.

  7. Display all occurrences of opened record events that occurred today, including the decrypted record title.

  8. Display the last time each user accessed each record (w/ its title and associated URL) and force additional retrieval -- a potentially time- and resource-intensive process -- of each record's details (e.g., title, URL) if that data is not available locally via cache.

user-report command

Command: user-report

Details: Generate ad-hoc user status report

Switches:

--format <{table,json,csv}> format of the report

--output <FILENAME> output to the given filename

--days <NUMBER OF DAYS> number of days to look back for last login date

--last-login show the last time each user logged in

Examples:

user-report
user-report --format csv --output logins.csv --days 30
user-report --format csv --output last-logins.csv 
  1. Show user login report for the past 365 days

  2. Create a user report of the last 30 days and output it in csv format to a file named "logins.csv"

  3. Create a report of the last time each user in the enterprise logged in and save it to the file "last-logins.csv"

security-audit-report command

Command: security-audit-report

Details: Generate a password security strength report for users of your enterprise

Report Columns:

My Vault> security-audit-report --syntax-help

Security Audit Report Command Syntax Description:

Column Name       Description
  username          user name
  email             e-mail address
  weak              number of records whose password strength is in the weak category
  medium            number of records whose password strength is in the medium category
  strong            number of records whose password strength is in the strong category
  reused            number of reused passwords
  unique            number of unique passwords
  securityScore     security score
  twoFactorChannel  2FA - ON/OFF
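As a post-processing illustration, the per-user password counts in a CSV export of this report can be aggregated locally. Note the ratio below is NOT Keeper's scoring formula, just a simple demonstration using the column names listed above:

```python
import csv
import io

# Illustrative post-processing of a security-audit-report CSV export.
# The strong-password ratio computed here is a demonstration metric,
# not Keeper's securityScore calculation.

REPORT_CSV = """username,weak,medium,strong
alice,1,2,7
bob,5,4,1
"""

def strong_ratio(row):
    total = sum(int(row[k]) for k in ("weak", "medium", "strong"))
    return int(row["strong"]) / total if total else 0.0

rows = list(csv.DictReader(io.StringIO(REPORT_CSV)))
ratios = {r["username"]: round(strong_ratio(r), 2) for r in rows}
# ratios == {"alice": 0.7, "bob": 0.1}
```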

Switches:

--format <{table,json,csv}> format of the report

--output <FILENAME> output to the given filename

--syntax-help display description of each column in the report

-n, --node name(s) or UID(s) of node(s) to filter results by

-b, --breachwatch display a Breachwatch security report (Commander v16.5.5+)

-su, --show-updated calculate current security audit scores for each vault and display locally (preview)

-s, --save similar to --show-updated above, but with an additional push of the updated scores to Keeper

-st, --score-type <{strong_passwords,default}> define how users' security scores are calculated (note: setting --score-type=strong_passwords causes the report's summary security scores to be based solely on the number of each user's passwords deemed strong, identical to the value calculated and shown on the corresponding vault's Security Audit page)

For backward compatibility, this command does not calculate updated security scores (based on recent vault changes) by default. Consequently, it also does not save updated security scores by default.

To change the above behavior, include either the --show-updated flag or the --save flag in your command call (note: if the latter flag is included, there is no need to also include the former).

Examples:

security-audit-report
security-audit-report --format json --output security_score.json
security-audit-report -b
security-audit-report -su
security-audit-report -s
sar -n Node1
security-audit-report --score-type=strong_passwords --save
  1. Show security audit report - password strength for each user in the enterprise

  2. Create a security audit report and output it in json format to a file named "security_score.json"

  3. Show a Breachwatch security report

  4. Recalculate security audit scores, then show security audit report. Do not push changes to Keeper.

  5. Recalculate security audit scores, show security audit report, and push changes to Keeper.

  6. Show security audit report (using the command abbreviation), limiting the results to accounts assigned to enterprise node Node1

  7. Show security audit scores based on the number of each user's passwords deemed to be strong and push the results to Keeper

breachwatch report command

Command: breachwatch report or bw report

Details: Run a Breachwatch security report for all users in your enterprise

Switches:

--format <{table,csv}> format of the report

--output <FILENAME> output to the given filename

Examples:

breachwatch report
bw report --format csv --output breachwatch_report.csv
  1. Show Breachwatch-specific security scores

  2. Export a Breachwatch security report to a CSV file breachwatch_report.csv located in Commander's current working directory (run the command v -v to get this folder's full path)

share-report command

Command: share-report

Details: Show a report of shared records (shared both with and by the caller)

Parameters:

Path or UID of folder containing the records to be shown in report. Optional (w/ multiple values allowed).

Switches:

--format <{table,csv}> format of the report

--output <FILENAME> output to the given filename

-r, --record <RECORD NAME OR UID> identify a specific record to show report for

-e, --email <USER'S EMAIL OR TEAM NAME> identify user or team to show shared record report for

-o, --owner display record's owner

--share-date display date when record was shared. Only used with owner report ( --owner switch). Only available to users with permission to execute reports for their company

-sf, --shared-folders display shared folder detail information. If omitted, record details are shown.

-v, --verbose show record UID with report

-f, --folders limit report to shared folders (excludes shared records)

-tu, --show-team-users expand team-share info to include individual members for each shared folder or record in the report

Examples:

share-report
share-report --record 5R7Ued8#JctulYbBLwM$
share-report --format csv --output share_report.csv
share-report -e john.doe@gmail.com --owner --share-date -v
share-report -o -v Team1_Folder Za2aspMQG9De5In28sc3KA
share-report --folders
  1. Display shared records report

  2. Display share report for the record with the given UID

  3. Output a shared records report in csv format

  4. Display a report of records shared with "john.doe@gmail.com" and show the original owner, as well as when it was shared

  5. Show a report that includes ownership information and detailed share permissions for each shared record contained within the folder named "Team1_Folder" and the folder w/ UID = Za2aspMQG9De5In28sc3KA

  6. Display a list of shared-folders in the current user's vault, along with the teams and other users who have access to each shared-folder

shared-records-report command

Command: shared-records-report or srr

Details: Display information about shared records (Note: the displayed information is limited to records shared by the caller with other users, i.e., this excludes records shared with the caller by other users. To include both types of records, see share-report command)

Parameters:

Path or UID of folder containing the records to be shown in report. Optional (w/ multiple values allowed).

Switches:

--format <{json,csv,table}> format of the report

--output <FILENAME> output to filename provided. Ignored for "table" format

--all-records include all records in the report. Only owned records are reported by default

--show-team-users or -tu show members of team for records shared via share team folders

Examples:

shared-records-report
shared-records-report --format csv
shared-records-report --show-team-users 
srr --format json -o shared_records.json 'My Shares' Team1/Shares _qsA0AA0XwJkeTVQdijmEg
  1. Display information about shared records in table format

  2. Display information about shared records in csv format

  3. Display information about records shared with individual members of all teams with whom those records are shared (via shared-folders)

  4. Export information about shared records found in folders "My Shares", "Team1/Shares" (folders identified by their paths) and "_qsA0AA0XwJkeTVQdijmEg" (folder identified by its UID) to a JSON-formatted file named shared_records.json. In this case -- because containing folders are explicitly provided in the command call -- the records included in the resulting report will be limited to only those found within the specified folders (either as a child of the folder, or as a child of a subfolder, etc.)

external-shares-report command

Command: external-shares-report

Details: Display folders and records shared with users that are outside of the enterprise. Optionally, delete all external shares.

Switches:

--format {table,json,csv} output format

--output FILENAME output to filename. Ignored when using table format

-a, --action {remove,none} action to apply to external shares; use 'remove' to delete the shares with confirmation, 'none' if omitted

-t, --share-type {direct,shared-folder,all} show only individually shared records (direct) or shared folders, 'all' if omitted

-f, --force apply action w/o confirmation

-r, --refresh-data Sync and fetch latest data before running report

Examples:

external-shares-report
external-shares-report --format csv --output share_report.csv
external-shares-report -a remove
external-shares-report -a remove -t direct
external-shares-report -a remove -t shared-folder
external-shares-report -a remove -f 
external-shares-report -r
  1. Display all externally shared folders and records

  2. Output all externally shared folders and records as share_report.csv

  3. Remove all externally shared folders and records (with confirmation prompt)

  4. Remove all externally shared individual records, but not shared folders

  5. Remove all externally shared folders, but not individual records

  6. Remove all externally shared folders and records without confirmation prompt

  7. Sync with Keeper to fetch latest data, then generate the share report

msp-legacy-report command

Command: msp-legacy-report

Details: Display information about available managed company licenses

Switches:

--format <{json, table, csv}> format of the report

--range <{today, yesterday, last_7_days, last_30_days, month_to_date, last_month, year_to_date, last_year}> timeframe of license usage

--from <FROM DATE> start date of time range to display license usage. Use with audit type ( --type audit ), and without --range flag

format: YYYY-mm-dd ex. 2021-07-08

--to <TO DATE> end date of time range to display license usage. Use with audit type ( --type audit ), and without --range flag

format: YYYY-mm-dd ex. 2021-07-08

--output <FILENAME> file to output the report to

Examples:

msp-legacy-report
msp-legacy-report --range last_30_days
msp-legacy-report --from 2021-02-01 --to 2021-03-01
msp-legacy-report --format csv --output licenses.csv
  1. Show a report of the currently allocated and remaining company licenses

  2. Show a report of license usage over the last 30 days

  3. Show a report of license usage from the first of February to the first of March 2021

  4. Output a report of current license usage to a file named "licenses.csv" in csv format

aging-report command

This advanced report requires the Advanced Reporting & Alerts module and the Compliance Reporting module. For more information see the dedicated Compliance Reports Page

Command: aging-report

Details: Determine which record passwords have NOT been changed in a specific amount of time. This report takes advantage of the advanced reporting capabilities of the Enterprise platform, as well as the compliance data, which allows record titles to be decrypted for privileged admins.

Switches:

-r, --rebuild rebuild the record database

--delete deletes the local database cache containing encrypted compliance record data

-nc, --no-cache remove any local non-memory storage of data upon command completion

-s, --sort <{owner, title, last_changed}> sort output by chosen field

--format <{table,json,csv}> format of the report

--output <FILENAME> output to the given filename

--period <TIME PERIOD> look for records that have a password that hasn't changed in this period

--username <USERNAME> report expired passwords for the given user

--exclude-deleted omit deleted records from the report (note: adding this flag may result in the need for additional data to be retrieved and subsequent longer command execution times)

--in-shared-folder Limit report to records in shared folders

Examples:

aging-report --rebuild
aging-report --period=1y
aging-report --period=3m --format=table --username user@company.com
aging-report --exclude-deleted
aging-report --in-shared-folder
  1. Rebuild the audit and compliance data locally

  2. Generate a password aging report for passwords not updated in over 1 year

  3. Generate a password aging report for a specific user's passwords not updated in over 3 months

  4. Generate a password aging report that ignores any records currently in each vault's trash bin

  5. List all records in shared folders whose passwords have not been changed within the report period
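The --period values seen in the examples above (1y, 3m) suggest a <count><unit> shape. A rough parser under that assumption (approximating months as 30 days and years as 365; this is an illustration, not Commander's own parsing):

```python
import re
from datetime import timedelta

# Rough parser for period strings like "30d", "3m", or "1y" as seen in the
# aging-report examples. Months/years are approximated in days; this is an
# illustration, not Commander's own parsing.

def parse_period(period):
    match = re.fullmatch(r"(\d+)([dmy])", period)
    if not match:
        raise ValueError("expected forms like 30d, 3m, 1y")
    count, unit = int(match.group(1)), match.group(2)
    days_per_unit = {"d": 1, "m": 30, "y": 365}[unit]
    return timedelta(days=count * days_per_unit)

assert parse_period("3m") == timedelta(days=90)
assert parse_period("1y") == timedelta(days=365)
```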

action-report command

Requires ARAM add-on

Command: action-report

Details: Generate a report and/or take action on users in a particular status or that have not performed an action within a given time period. The default timeframe is the last 30 days unless otherwise specified using --days-since.

Switches:

--format <{table,json,csv}> format of the report

--output <FILENAME> if you want to save the results to a file

-d <NUMBER OF DAYS> or --days-since <NUMBER OF DAYS> look back this many days for targeted action (default value of 30 days is used if omitted)

-t or --target <no-logon, no-update, locked, invited, no-security-question-update, blocked> user status that you want to report or act on

--columns comma-delimited list of columns to show on report. Supported columns: {2fa_enabled, team_count, status, transfer_status, alias, role_count, node, teams, name, roles}

-a or --apply-action <lock, delete, transfer, none> used after target, the action to take on each user account that matches the target user status

--target-user (used with transfer action above) the destination account to which vaults are transferred

-n or --dry-run show accounts that fall into the action-status category specified via the --target/-t switch and the corresponding administrative action to be applied (specified via the -a/--apply-action switch) without actually applying the action

-f or --force perform the administrative action specified via the -a/--apply-action option without being prompted for confirmation (applies only to "delete" and "transfer" administrative actions)

Example Target Reports:

  1. Show a list of users that haven't logged in within the last 45 days

action-report --target no-logon --days-since 45
  2. Show a list of user accounts that have remained in a locked status for 60+ days

action-report --target locked --days-since 60
  3. Show a list of users who have been in an invited status for 15+ days.

action-report -t invited -d 15
  4. Show a list of users that haven't updated any records in the last 35 days

action-report --target no-update --days-since 35
  5. Show users that have not set/changed their account security questions within the default timeframe (the last 30 days)

action-report -t no-security-question-update
  6. Generate a report of users who have not logged in to their account within the last 30 days (the default timeframe when none is specified) that includes the 2fa status, # of teams assigned, # of roles assigned, assigned node, and full name for each user in the report.

action-report -t=no-logon --columns=2fa_enabled,team_count,role_count,node,name

Example Target with Actions (add -n for dry run):

  1. Delete any users whose accounts have been in invited status for more than 90 days:

action-report -t invited -d 90 -a delete
  2. Transfer the vaults of all users that have been locked for 90 days to a designated vault.

action-report -t locked -d 90 -a transfer --target-user destination.vault@email.com
  3. Lock all users who haven't logged in within 180 days.

action-report -t no-logon -d 180 -a lock
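The no-logon target logic amounts to a cutoff-date comparison. A small sketch with illustrative field names (not Commander's internals):

```python
from datetime import datetime, timedelta

# Sketch of the "no-logon" target: flag users whose last login is older
# than --days-since. Field names here are illustrative.

def no_logon(users, days_since, now):
    cutoff = now - timedelta(days=days_since)
    return [u["username"] for u in users if u["last_login"] < cutoff]

now = datetime(2024, 1, 31)
users = [
    {"username": "alice", "last_login": datetime(2024, 1, 30)},
    {"username": "bob", "last_login": datetime(2023, 6, 1)},
]
stale = no_logon(users, 45, now)
# stale == ["bob"]
```

An action such as lock or transfer would then be applied to each flagged account (after a confirmation prompt, unless --force is given).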

compliance-report command

Requires Compliance Reporting add-on

For more information see the dedicated Compliance Reports Page

Command: compliance-report

Details: Generate a report of the sharing status of records across the enterprise.

This report relies on a cache which is built the first time the command is called. It may take some time for the first command in a session to complete depending on the size of your enterprise.

Switches:

--format <{table,json,csv}> format of the report

--output <FILENAME> output to the given filename

-u, --username <USERNAME> filter to records of the given user. Use multiple times for multiple users

-n, --node <NODE NAME or ID> filter to records in vaults in the given node

-jt, --job-title <JOB TITLE> filter to records in vaults owned by users with the given job title. Use multiple times for multiple titles

--record <RECORD NAME OR UID> show only the given record. Use multiple times for multiple records

--team <TEAM NAME> show only users in the given team. Use multiple times for multiple teams

--url <URL> show only records with the given URL. Use multiple times for multiple URLs

--shared show only shared records

--deleted-items show only deleted records (not valid with --active-items flag)

--active-items show only active records (not valid with --deleted-items flag)

-r, --rebuild refresh the cached records used for this report

-nr, --no-rebuild prevent stale cache refresh if cache exists (invalid with --rebuild flag)

-nc, --no-cache disable persistent local caching of underlying report data

Examples:

compliance-report
compliance-report -u "user@company.com"
compliance-report --node "Chicago" -jt "Manager"
compliance-report -u "user@company.com" --shared
compliance-report -nr --record="AWS MySQL Administrator"
compliance-report --active-items
  1. Show the sharing status of all records for all users in the enterprise

  2. Show the sharing status of records in the vault of user: "user@company.com"

  3. Show the sharing status of records in vaults owned by managers in Chicago

  4. Show the sharing status of only shared records owned by "user@company.com"

  5. Show the sharing status of record w/ title "AWS MySQL Administrator" based on cached data if present

  6. Show the sharing status of all active records in the enterprise

Compliance Team Report

Requires Compliance Reporting add-on

For more information see the dedicated Compliance Reports Page

Command: compliance team-report

Details: Generate a report of teams with access to each shared-folder containing at least 1 record, along with their corresponding edit and share permissions

Switches:

--format <{table,json,csv}> format of the report

--output <FILENAME> output to the given filename

-n, --node <NODE NAME or ID> filter to records in vaults in the given node

-r, --rebuild refresh the cached records used for this report

-nr, --no-rebuild prevent stale cache refresh if cache exists (invalid with --rebuild flag)

-nc, --no-cache disable persistent local caching of underlying report data

-tu, --show-team-users show individual members of each team in the report

Examples:

compliance team-report
compliance team-report --format csv --output "team-report.csv"
compliance team-report --show-team-users
  1. Show the compliance team report

  2. Save a CSV file output of the compliance team report

  3. Show individual users assigned to each team in the report

Compliance Record-Access Report

Requires Compliance Reporting and ARAM add-ons

For more information see the dedicated Compliance Reports Page

Command: compliance record-access-report or compliance rar

Details: Generate a report showing all records that each specified user has accessed or can currently access

Parameters:

Username(s) or ID(s). Set to "@all" to run the report for all users

Switches:

--format <{table,json,csv}> format of the report

--output <FILENAME> output to the given filename

-n, --node <NODE NAME or ID> filter to records in vaults in the given node

-r, --rebuild refresh the cached records used for this report

-nr, --no-rebuild prevent stale cache refresh if cache exists (invalid with --rebuild flag)

-nc, --no-cache disable persistent local caching of underlying report data

--report-type select the type of record-access data to include in the report (optional; defaults to "history" if omitted). Set to "vault" to view currently accessible records, or "history" to view only previously accessed records

--aging include aging data (e.g., last modified, created) for each record in the report

Examples:

compliance record-access-report user@company.com
compliance rar --report-type vault --format csv --output "all_vaults.csv" @all
compliance rar --report-type vault --aging user1@company.com user2@company.com
  1. Show list of all records that user@company.com has ever accessed

  2. Export a CSV file listing all records currently in each user's vault for all users

  3. Generate report of all records currently accessible by user1@company.com and user2@company.com, showing when each record was created, last modified, and last rotated

Event Logging to SIEM

Commander supports integration with popular SIEM solutions such as Splunk, Sumo Logic and general syslog format. For more general reporting of events, we recommend using the audit-report command. For pushes of event data into an on-prem SIEM, the audit-log command is a good choice because it automatically tracks the last event exported and only sends incremental updates. The list of over 100 event types is documented in our Enterprise Guide:

https://docs.keeper.io/enterprise-guide/event-reporting

Using Commander for SIEM integration works well in an on-prem environment where the HTTP event collector is only reachable within your network. The Keeper Admin Console (version 13.3+) can integrate our backend event data into your SIEM solution directly, but this requires a cloud-based SIEM. If you need assistance integrating Keeper into your SIEM solution without Commander, please contact our business support team at business.support@keepersecurity.com.

Export of Event Logs in Syslog Format

Commander can export all event logs to a local file in syslog format, either in full or as incremental files. A Keeper record in your vault is used to store a reference to the last event exported.

$ keeper shell

To export all events and start tracking the last event time exported:

My Vault> audit-log --target=syslog
Do you want to create a Keeper record to store audit log settings? [y/n]: y
Choose the title for audit log record [Default: Audit Log: Syslog]: 
Enter filename for syslog messages.
...              Syslog file name: all_events.log
...          Gzip messages? (y/N): n
Exported 3952 audit events
My Vault>

This creates a record in your vault (titled "Audit Log: Syslog" in this example) which tracks the timestamp of the last exported event and the output filename. Then the event data is exported to the file in either text or gzip format.
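Commander produces these lines itself, but for a rough sense of the format, an RFC 5424-style syslog message can be sketched as follows. This is only an illustration: the event field names ("created", "audit_event_type", "username") and the exact line layout are assumptions, not Commander's actual output.

```python
from datetime import datetime, timezone

def to_syslog_line(event):
    """Format an audit event as an RFC 5424-style syslog line.

    Sketch only: the event field names used here are illustrative
    assumptions, and Commander's real output layout may differ."""
    ts = datetime.fromtimestamp(event["created"], tz=timezone.utc)
    # <priority>version timestamp hostname app-name procid msgid sd msg
    return (f"<14>1 {ts.isoformat()} keepersecurity.com Commander - - - "
            f"{event['audit_event_type']} user={event['username']}")

line = to_syslog_line({
    "created": 1700000000,
    "audit_event_type": "login",
    "username": "user@company.com",
})
```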

Each subsequent audit log export can be performed with this command:

$ keeper audit-log --target=syslog --record=<your record UID>

or from the shell:

My Vault> audit-log --target=syslog --record=<your record UID>

To automate the syslog event export every 10 minutes (timedelay of 600 seconds), create a JSON configuration file such as this:

{
    "server":"https://keepersecurity.com",
    "user":"user@company.com",
    "password":"your_password_here",
    "mfa_token":"filled_in_by_commander",
    "mfa_type":"device_token",
    "debug":false,
    "plugins":[],
    "commands":["sync-down","audit-log --target=syslog"],
    "timedelay":600
}

Then run Commander using the config parameter. For example:

$ keeper --config=my_config_file.json

Splunk HTTP Event Collector Push

Keeper can post event logs directly to your on-prem or cloud Splunk instance. Follow the steps below:

  • Log in to Splunk Enterprise

  • Go to Settings -> Data Inputs -> HTTP Event Collector

  • Click on "New Token" then type in a name, select an index and finish.

  • At the last step, copy the "Token Value" and save it for the next step.

  • Log in to the Keeper Commander shell

$ keeper shell

Next, set up the Splunk integration with Commander. Commander will create a record in your vault that stores the provided token and the Splunk HTTP Event Collector endpoint. This record is also used to track the last event captured, so that subsequent executions pick up where they left off. Note that the default port for HEC is 8088.

$ keeper audit-log --target=splunk

Do you want to create a Keeper record to store audit log settings? [y/n]: y
Choose the title for audit log record [Default: Audit Log: Splunk]: <enter> 

Enter HTTP Event Collector (HEC) endpoint in format [host:port].
Example: splunk.company.com:8088
...           Splunk HEC endpoint: 192.168.51.41:8088
Testing 'https://192.168.51.41:8088/services/collector' ...Found.
...                  Splunk Token: e2449233-4hfe-4449-912c-4923kjf599de
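Under the hood, a Splunk HEC push is an HTTPS POST to /services/collector with an "Authorization: Splunk <token>" header. A minimal sketch of the request shape, using the placeholder endpoint and token from the transcript above (the event fields are illustrative, not Keeper's actual event schema):

```python
import json
import urllib.request

def build_hec_request(endpoint, token, event, epoch_time):
    """Build (but do not send) a POST to a Splunk HEC endpoint.

    `endpoint` is the host:port entered during setup; values used below
    are placeholders, and the event fields are illustrative."""
    url = f"https://{endpoint}/services/collector"
    payload = {"time": epoch_time, "event": event}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Splunk {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_hec_request(
    "192.168.51.41:8088",
    "e2449233-4hfe-4449-912c-4923kjf599de",
    {"audit_event_type": "login", "username": "user@company.com"},
    1700000000,
)
```

Sending the request would be a `urllib.request.urlopen(req)` call; Commander performs the equivalent POST for each batch of events.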

You can find the record UID of the Splunk record for subsequent audit log exports:

My Vault> search splunk

  #  Record UID              Title              Login    URL
---  ----------------------  -----------------  -------  -----
  1  schQd2fOWwNchuSsDEXfEg  Audit Log: Splunk

Each subsequent audit log export can be performed with this command:

$ keeper audit-log --target=splunk --record=<your record UID>

or from the shell:

My Vault> audit-log --target=splunk --record=<your record UID>

To automate the push of Splunk events every 10 minutes (timedelay of 600 seconds), create a JSON configuration file such as this:

{
    "server":"https://keepersecurity.com",
    "user":"user@company.com",
    "password":"your_password_here",
    "mfa_token":"filled_in_by_commander",
    "mfa_type":"device_token",
    "debug":false,
    "plugins":[],
    "commands":["sync-down","audit-log --target=splunk"],
    "timedelay":600
}

Then run Commander using the config parameter. For example:

$ keeper --config=my_config_file.json

Sumo Logic HTTP Event Collector Push

Keeper can post event logs directly to your Sumo Logic account. Follow the steps below:

  • Log in to Sumo Logic

  • Go to Manage Data -> Collection

  • Click on Add Collector -> Hosted Collector then Add Source -> HTTP Logs & Metrics

  • Name the collector and click Save; leave the other fields at their defaults

  • Note the HTTP Source Address, which is the collector URL

  • Log in to the Keeper Commander shell

$ keeper shell

Next, set up the Sumo Logic integration with Commander. Commander will create a record in your vault that stores the HTTP collector information. This record is also used to track the last event captured, so that subsequent executions pick up where they left off.

$ keeper audit-log --target=sumo

When asked for “HTTP Collector URL:”, paste the URL captured from the Sumo Logic interface above.

After this step, there will be a record in your vault used for tracking the event data integration. You can find the record UID of the Sumo record for subsequent audit log exports:

My Vault> search sumo

  #  Record UID              Title              Login    URL
---  ----------------------  -----------------  -------  -----
  1  schQd2fOWwNchuSsDEXfEg  Audit Log: Sumo

Each subsequent audit log export can be performed with this command:

$ keeper audit-log --target=sumo --record=<your record UID>

or from the shell:

My Vault> audit-log --target=sumo --record=<your record UID>
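For reference, a push to a Sumo Logic hosted HTTP source is a plain HTTPS POST of the event payload to the collector URL; the URL itself acts as the credential, so no token header is required. A minimal sketch of the request shape (the collector URL and event fields below are placeholders, not real values):

```python
import json
import urllib.request

def build_sumo_request(collector_url, events):
    """Build (but do not send) a POST of newline-delimited JSON events
    to a Sumo Logic hosted HTTP source. The URL passed in below is a
    placeholder; use the HTTP Source Address noted during collector
    setup. Event fields are illustrative."""
    body = "\n".join(json.dumps(e) for e in events)
    return urllib.request.Request(
        collector_url,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_sumo_request(
    "https://collectors.sumologic.com/receiver/v1/http/XXXX",  # placeholder
    [{"audit_event_type": "login", "username": "user@company.com"}],
)
```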

To automate the push of Sumo Logic events every 10 minutes (timedelay of 600 seconds), create a JSON configuration file such as this:

{
    "server":"https://keepersecurity.com",
    "user":"user@company.com",
    "password":"your_password_here",
    "mfa_token":"filled_in_by_commander",
    "mfa_type":"device_token",
    "debug":false,
    "plugins":[],
    "commands":["sync-down","audit-log --target=sumo"],
    "timedelay":600
}

Then run Commander using the config parameter. For example:

$ keeper --config=my_config_file.json

Export of Event Logs in JSON Format

Commander can export all event logs to a local file in JSON format. The local file is overwritten with every run of Commander. This kind of export can be used in conjunction with other applications that process the file. A Keeper record in your vault is used to store a reference to the last event exported.

$ keeper shell

To export all events and start tracking the last event time exported:

My Vault> audit-log --target=json
Do you want to create a Keeper record to store audit log settings? [y/n]: y
Choose the title for audit log record [Default: Audit Log: JSON]:
JSON file name: all_events.json
Exported 3952 audit events
My Vault>

This creates a record in your vault (titled "Audit Log: JSON" in this example) which tracks the timestamp of the last exported event and the output filename. Then the event data is exported to the file.
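Once exported, the file is ordinary JSON that downstream tooling can consume. A minimal post-processing sketch, assuming the export is a JSON array of event objects (the "audit_event_type" field name is an illustrative assumption; adjust for your export's actual layout):

```python
import json
from collections import Counter

def summarize_events(path):
    """Count exported audit events by type.

    Assumes the export is a JSON array of event objects and that each
    carries an "audit_event_type" field -- both are illustrative
    assumptions about the file layout."""
    with open(path) as f:
        events = json.load(f)
    return Counter(e.get("audit_event_type") for e in events)

# Demo against a tiny sample file standing in for the real export:
sample = [{"audit_event_type": "login"}, {"audit_event_type": "login"},
          {"audit_event_type": "record_add"}]
with open("sample_events.json", "w") as f:
    json.dump(sample, f)
counts = summarize_events("sample_events.json")
```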

Each subsequent audit log export can be performed with this command:

$ keeper audit-log --target=json --record=<your record UID>

or from the shell:

My Vault> audit-log --target=json --record=<your record UID>

To automate the JSON event export every 10 minutes (timedelay of 600 seconds), create a JSON configuration file such as this:

{
    "server":"https://keepersecurity.com",
    "user":"user@company.com",
    "password":"your_password_here",
    "mfa_token":"filled_in_by_commander",
    "mfa_type":"device_token",
    "debug":false,
    "plugins":[],
    "commands":["sync-down","audit-log --target=json"],
    "timedelay":600
}

Then run Commander using the config parameter. For example:

$ keeper --config=my_config_file.json

Azure Log Analytics

Keeper can post event logs directly to your Azure Log Analytics workspace. Follow the steps below:

  • Log in to the Azure Portal and open your Log Analytics workspace

  • Go to Settings -> Advanced settings

  • Note the Workspace ID and Primary or Secondary key

  • Log in to the Keeper Commander shell

$ keeper shell

Next, set up the Log Analytics integration with Commander. Commander will create a record in your vault that stores the Log Analytics access information. This record is also used to track the last event captured, so that subsequent executions pick up where they left off.

$ keeper audit-log --target=azure-la

When asked for “Workspace ID:”, paste the Workspace ID captured from the Advanced settings page above. When asked for “Key:”, paste the Primary or Secondary key captured from the same page.

After this step, there will be a record in your vault used for tracking the event data integration. You can find the record UID of the Log Analytics record for subsequent audit log exports:

My Vault> search analytics

  #  Record UID              Title                           Login                                 URL
---  ----------------------  ------------------------------  ------------------------------------  -----
  1  schQd2fOWwNchuSsDEXfEg  Audit Log: Azure Log Analytics  <WORKSPACE GUID>

Each subsequent audit log export can be performed with this command:

$ keeper audit-log --target=azure-la --record=<your record UID>

or from the shell:

My Vault> audit-log --target=azure-la --record=<your record UID>
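Behind the scenes, each push to Log Analytics is authenticated per the Azure HTTP Data Collector API: an HMAC-SHA256 signature over a canonical string, computed with the workspace key and sent as a SharedKey authorization header. A minimal sketch of the signature computation, using dummy values in place of a real Workspace ID and key:

```python
import base64
import hashlib
import hmac

def build_auth_header(workspace_id, shared_key_b64, body, rfc1123_date):
    """Compute the SharedKey authorization header used by the Azure Log
    Analytics HTTP Data Collector API. Inputs below are dummy values;
    use the Workspace ID and Primary/Secondary key noted above."""
    string_to_sign = (f"POST\n{len(body)}\napplication/json\n"
                      f"x-ms-date:{rfc1123_date}\n/api/logs")
    key = base64.b64decode(shared_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"),
                      hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode()
    return f"SharedKey {workspace_id}:{signature}"

header = build_auth_header(
    "11111111-2222-3333-4444-555555555555",   # dummy workspace GUID
    base64.b64encode(b"dummy-key").decode(),  # dummy key
    b'[{"event": "login"}]',
    "Mon, 04 Apr 2016 08:00:00 GMT",
)
```

The resulting header accompanies a POST to https://<WORKSPACE GUID>.ods.opinsights.azure.com/api/logs, which Commander issues for each batch of events.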

To automate the push of events to Azure Log Analytics every 10 minutes (timedelay of 600 seconds), create a JSON configuration file such as this:

{
    "server":"https://keepersecurity.com",
    "user":"user@company.com",
    "password":"your_password_here",
    "mfa_token":"filled_in_by_commander",
    "mfa_type":"device_token",
    "debug":false,
    "plugins":[],
    "commands":["sync-down","audit-log --target=azure-la"],
    "timedelay":600
}

Then run Commander using the config parameter. For example:

$ keeper --config=my_config_file.json
