Keeper Connection Manager security and encryption model
Keeper utilizes best-in-class security with a zero-trust framework and zero-knowledge security architecture to safeguard your infrastructure and mitigate the risk of a data breach.
Keeper Security, Inc. (KSI) is passionate about protecting its customers' information and infrastructure with Keeper desktop and mobile security software. Millions of consumers and businesses trust Keeper to secure and access remote systems, passwords and private information. Keeper's software is constantly improved and updated to provide our customers with the latest in technology and protection. This page provides an overview of Keeper's security architecture and encryption methodologies.
The Keeper Connection Manager Gateway is a platform that is fully hosted by the customer in any cloud, on-prem or virtual environment. This documentation is focused on the Advanced Linux Install method.
The engineers at Keeper Security who built Keeper Connection Manager (formerly Glyptodon) are the inventors and primary maintainers of the open source Apache Guacamole project. Keeper Security is proud to support the open source community and the millions of users who use the Apache Guacamole remote desktop software.
The packages provided by Keeper Connection Manager have been designed to follow security best practices, particularly the Principle of Least Privilege. This is accomplished through careful delegation of rights to users and groups which are automatically created by the Keeper Connection Manager packages, and through strict file permissions.
If you deployed Keeper Connection Manager using the Advanced Linux Install method, we recommend using a reverse proxy like Apache or NGINX for SSL termination. We provide documentation for configuring either of these reverse proxies:
In addition, as the user-mapping.xml authentication mechanism is meant only as a quick means of testing Guacamole (it is not supported for production deployments), customers need to migrate to a production-ready authentication mechanism. All authentication methods packaged within Keeper Connection Manager other than user-mapping.xml are production-ready:
If you wish to enable multi-factor authentication in front of Keeper Connection Manager, you may do so with Duo or TOTP (the standard supported by Google Authenticator and similar apps). Multi-factor authentication is supported in front of any of the above production-ready authentication mechanisms:
In addition to the above authentication methods, Keeper Connection Manager supports the use of Client Certificates to lock down access to specific machines that are managed by the Enterprise.
The Keeper Connection Manager packages create the following users and groups in order to limit the access of services within the Guacamole stack:
The "guacamole
" group - owns all files which the Guacamole web application should be able to read.
The "guacd
" group - owns all files which the guacd service should be able to read.
The "guacd
" user - the sole member of the "guacd" group, and the user which runs the guacd service.
When installing Keeper Connection Manager using the Advanced Linux Install method with Tomcat, you will need to ensure the "tomcat" user is a member of the "guacamole" group. If this is not done, the Guacamole web application will not be able to read its own configuration files, and web application startup will fail:
The "guacd" user and group are intentionally limited in privilege. If you need guacd
to have access to additional files or directories, such as for file transfer or storing session recordings, you will need to set the ownership and permissions of those files appropriately.
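As an illustration only (the directory path here is hypothetical; adjust it to match your deployment), a session recording directory writable by guacd and readable by the web application could be prepared like this:

    # Hypothetical recordings directory; adjust the path for your environment.
    sudo mkdir -p /var/lib/guacamole/recordings
    sudo chown guacd:guacamole /var/lib/guacamole/recordings
    # Owner (guacd) may write; the guacamole group may read; others have no access.
    sudo chmod 2750 /var/lib/guacamole/recordings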
The ownership and permissions of sensitive files like guacamole.properties, user-mapping.xml, and guacd.conf have been set such that only the components of the Apache Guacamole stack that should be able to read those files can read those files, and such that no component within the Guacamole stack can write or otherwise modify those files:
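To verify these permissions on a running system, you can inspect the configuration directory (assuming the default /etc/guacamole configuration location used by the Advanced Linux Install):

    # List ownership and permissions of the Guacamole configuration files.
    ls -l /etc/guacamole/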
Customers who deploy the Advanced Linux Install through package management receive update packages through Keeper Connection Manager's YUM repository; the yum tool applies these updates when the administrator runs the update command.
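For example, on a system registered against the Keeper Connection Manager YUM repository, pending updates are applied with:

    sudo yum update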
Customers who deploy the Auto Docker Install version can use the built-in update capabilities.
Customers who deploy the Docker Compose Install version can use the Docker update capabilities.
Customer vault records are protected using stringent and tightly monitored internal control practices. Keeper is certified as SOC 2 Type 2 compliant in accordance with the AICPA Service Organization Control framework. SOC 2 certification helps ensure that your vault is kept secure through the implementation of standardized controls as defined in the AICPA Trust Service Principles framework.
Keeper Security is ISO 27001 certified, covering the Keeper Security Information Management System which supports the Keeper Enterprise Platform. Keeper's ISO 27001 certification is scoped to include the management and operation of the digital vault and cloud services, software and application development, and protection of digital assets for the digital vault and cloud services.
Keeper is GDPR compliant and we are committed to ensuring our business processes and products continue to maintain compliance for our customers in the European Union. Click here to learn more about Keeper's GDPR compliance and download data processing agreements.
Keeper software is compliant with global medical data protection standards including, without limitation, HIPAA (Health Insurance Portability and Accountability Act) and the DPA (Data Protection Act).
Keeper is a SOC2-certified and ISO 27001-certified zero-knowledge security platform that is HIPAA compliant. Strict adherence and controls covering privacy, confidentiality, integrity and availability are maintained. With this security architecture, Keeper cannot decrypt, view or access any information, including ePHI, stored in a user’s Keeper Vault. For the foregoing reasons, Keeper is not a Business Associate as defined in the Health Insurance Portability and Accountability Act (HIPAA), and therefore, is not subject to a Business Associate Agreement.
To learn more about the additional benefits for healthcare providers and health insurance companies, please read our Security Disclosure and visit our Enterprise Guide.
Keeper performs quarterly pen testing with 3rd party experts including NCC Group and Cybertest. In addition, Keeper works with independent security researchers who test against all of our products and systems through our Bugcrowd bug bounty program.
Keeper Security environments are tested daily by TrustedSite to ensure that the Keeper web application and KSI's Cloud Security Vault are secure from known remote exploits, vulnerabilities and denial-of-service attacks. A comprehensive external security scan is conducted monthly on the Keeper websites, Keeper web application, and Keeper Cloud Security Vault by TrustedSite. Keeper staff periodically initiate on-demand external scans.
Keeper Security uses PayPal and Stripe for securely processing credit and debit card payments through the KSI payment website. PayPal and Stripe are PCI-DSS compliant transaction processing solutions. Keeper Security is certified PCI-DSS compliant by McAfee Secure.
The Keeper web client, Android App, Windows Phone App, iPhone/iPad App and browser extensions have been certified Privacy Shield compliant with the U.S. Department of Commerce's EU-U.S. Privacy Shield program, meeting the European Commission's Directive on Data Protection. For more information about the U.S. Department of Commerce U.S. Privacy Shield program, see https://www.privacyshield.gov
Keeper utilizes FIPS 140-2 validated encryption modules to address rigorous government and public sector security requirements. Keeper’s encryption has been certified by the NIST CMVP and validated to the FIPS 140 standard by accredited third party laboratories. Keeper has been issued certificate #3967 under the NIST CMVP.
Keeper is certified by the U.S. Department of Commerce Bureau of Industry and Security under Export Commodity Classification Control Number 5D992, in compliance with Export Administration Regulations (EAR). For more information about EAR: https://www.bis.doc.gov
Keeper is monitored 24x7x365 by a global third-party monitoring network to ensure that our website and Cloud Security Vault are available worldwide. If you have any questions regarding this security disclosure, please contact us.
If you receive an email purporting to be sent from KSI and you are unsure if it is legitimate, it may be a “phishing email” where the sender's email address is forged or “spoofed”. In that case, an email may contain links to a website that looks like KeeperSecurity.com but is not our site. The website may ask you for your Keeper Security master password or try to install unwanted software on your computer in an attempt to steal your personal information or access your computer. Other emails contain links that may redirect you to other potentially dangerous web sites. The message may also include attachments, which typically contain unwanted software called "malware." If you are unsure about an email received in your inbox, you should delete it without clicking any links or opening any attachments. If you wish to report an email purporting to be from KSI that you believe is a forgery or you have other security concerns involving other matters with KSI, please contact us.
Keeper Connection Manager is hosted by the customer. The Keeper website and cloud storage run on secure Amazon Web Services (AWS) cloud computing infrastructure. The AWS cloud infrastructure which hosts Keeper's system architecture has been certified to meet the following third-party attestations, reports and certifications:
SOC 1 / SSAE 16 / ISAE 3402 (SAS70)
SOC 2
SOC 3
PCI DSS Level 1
ISO 27001
FedRAMP
DIACAP
FISMA
ITAR
FIPS 140-2
CSA
MPAA
Keeper Security is committed to the industry best practice of responsible disclosure of potential security issues. We take your security and privacy seriously and are committed to protecting our customers' privacy and personal data. KSI's mission is to build the world's most secure and innovative security apps, and we believe that bug reports from the worldwide community of security researchers are a valuable component of ensuring the security of KSI's products and services.
Keeping our users secure is core to our values as an organization. We value the input of good-faith researchers and believe that an ongoing relationship with the cybersecurity community helps us ensure our users' security and privacy, and makes the Internet a more secure place. This includes encouraging responsible security testing and disclosure of security vulnerabilities.
The Keeper Connection Manager team actively monitors the upstream Apache Guacamole project for newly-disclosed security vulnerabilities, and has procedures in place for releasing security updates outside the normal release cycle. Should a vulnerability be found in Guacamole, the patch for that vulnerability will be made immediately available through the Keeper Connection Manager repository, and can be applied automatically using the upgrade process for your installation method.
Keeper's Vulnerability Disclosure Policy sets out expectations when working with good-faith researchers, as well as what you can expect from us.
If security testing and reporting is done within the guidelines of this policy, we:
Consider it to be authorized in accordance with the Computer Fraud and Abuse Act,
Consider it exempt from the DMCA, and will not bring a claim against you for bypassing any security or technology controls,
Consider it legal, and will not pursue or support any legal action related to this program against you,
Will work with you to understand and resolve the issue quickly, and
Will recognize your contributions publicly if you are the first to report the issue and we make a code or configuration change based on the issue.
If at any time you are concerned or uncertain about testing in a way that is consistent with the Guidelines and Scope of this policy, please contact us at security@keepersecurity.com before proceeding.
To encourage good-faith security testing and disclosure of discovered vulnerabilities, we ask that you:
Avoid violating privacy, harming user experience, disrupting production or corporate systems, and/or destroying data,
Perform research only within the scope set out by the Bugcrowd vulnerability disclosure program linked below, and respect systems and activities which are out-of-scope,
Contact us immediately at security@keepersecurity.com if you encounter any user data during testing, and
Give us reasonable time to analyze, confirm and resolve the reported issue before publicly disclosing any vulnerability finding.
Keeper has partnered with Bugcrowd to manage our vulnerability disclosure program.
Please submit reports through https://bugcrowd.com/keepersecurity.
Keeper Security utilizes best-in-class security with a Zero-Knowledge security architecture and Zero-Trust framework. Additional technical documentation about Keeper's Zero-Knowledge encryption model can be found at the links below:
Secrets Manager Encryption Model
Keeper is SOC 2 Type 2 and ISO 27001 certified. Customers may request access to our certification reports, third-party penetration testing reports and technical architecture documentation with a signed mutual NDA.
How to manually configure Keeper Connection Manager SSL termination using Apache
SSL termination is the recommended method of encrypting communication between users' browsers and Guacamole. It involves configuring a reverse proxy like Nginx or Apache to handle only the SSL/TLS portion of the connection, accepting encrypted HTTPS externally while passing unencrypted HTTP to the Tomcat instance hosting Guacamole internally.
If Apache has been configured for SSL/TLS, there should be a <VirtualHost> section within this configuration that defines the certificate and private key used by Apache, and which requires Apache to listen on the standard HTTPS port (443). To proxy Guacamole through Apache such that Guacamole communication is encrypted, two additional <Location> sections will need to be added within this <VirtualHost> section:
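A minimal sketch of these sections follows; the /guacamole/ path and port 8080 are assumptions based on the default Tomcat deployment described later in this document, and "HOSTNAME" is explained below:

    <Location /guacamole/>
        Require all granted
        ProxyPass http://HOSTNAME:8080/guacamole/ flushpackets=on
        ProxyPassReverse http://HOSTNAME:8080/guacamole/
    </Location>

    <Location /guacamole/websocket-tunnel>
        Require all granted
        ProxyPass ws://HOSTNAME:8080/guacamole/websocket-tunnel
        ProxyPassReverse ws://HOSTNAME:8080/guacamole/websocket-tunnel
    </Location>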
where “HOSTNAME” is the hostname or IP address of the internal Guacamole server.
These <Location> sections configure proxying of the HTTP and WebSocket protocols respectively. Apache handles the HTTP and WebSocket protocols separately, and thus requires separate configuration for the portion of the web application which uses WebSocket. Both sections are required for Guacamole to work correctly behind Apache, and the mod_proxy_wstunnel module must be installed and enabled.
Of particular importance is the flushpackets=on option within the ProxyPass directive used for HTTP in front of Guacamole. This option disables buffering of packets sent to/from Guacamole. By default, Apache will buffer communication between itself and the browser, effectively disrupting the stream of events and updates required for remote desktop. Without disabling buffering, the Guacamole connection will at best be slow, and at worst not function at all.
After the above changes have been made, Apache must be reloaded to force rereading of its configuration files:
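On CentOS / RHEL, where the Apache service is named httpd, this is typically:

    sudo systemctl reload httpd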
If you are using SELinux (the default on both CentOS and RHEL), you must also configure SELinux to allow HTTPD implementations like Apache to establish network connections:
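The relevant SELinux boolean is httpd_can_network_connect; for example:

    # -P makes the change persistent across reboots.
    sudo setsebool -P httpd_can_network_connect 1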
If Guacamole is not accessible through Apache after the service has been reloaded, check the Apache logs and/or journalctl to verify that the syntax of your configuration changes is correct. Such errors will result in Apache refusing to reload its configuration, or refusing to start up entirely. If you do not see any errors from Apache, verify that you have configured SELinux to allow Apache to connect to the network and check the SELinux audit logs (/var/log/audit/audit.log) for AVC denials.
"X-Forwarded-For" from Apache (manual deployment only)
This section applies only if you have manually deployed Guacamole under your own version of Tomcat. This will usually only be the case if:
You have chosen to manually deploy Guacamole under your own install of Apache Tomcat or JBoss, rather than use the provided version of Tomcat.
You are maintaining a deployment of Glyptodon Enterprise that was originally installed before the 2.5 release (2021-09-16).
If you deployed Guacamole automatically (the default and recommended method), this has already been configured for you within the bundled version of Tomcat.
For the client address sent by Apache via "X-Forwarded-For" to be correctly trusted as the true client address, you will need to add a "RemoteIpValve" entry within /etc/tomcat/server.xml. If this is not specified, the client address will be logged as the address of the internal proxy, which is not usually desirable.
The easiest way to add the required entry is to copy the example server.xml file provided with the kcm package, replacing the old /etc/tomcat/server.xml:
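The copy itself is a one-liner along these lines; the source path below is a placeholder, not the actual location of the bundled file:

    # Placeholder source path; use "rpm -ql kcm" (or the relevant kcm package)
    # to locate the example server.xml shipped on your system.
    sudo cp /path/to/example/server.xml /etc/tomcat/server.xml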
The example server.xml file defines:
A single HTTP connector listening on port 8080.
A RemoteIpValve with all settings at their default values.
By default, the RemoteIpValve will trust "X-Forwarded-For" from all private networks (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, and both IPv4 and IPv6 localhost). If you need this range to be narrowed, or if you have already made manual edits to server.xml, you will need to make these changes manually.
If editing server.xml manually (rather than using the example server.xml), a <Valve> which trusts "X-Forwarded-For" from most common private addresses would be specified as:
This <Valve> must be added within the relevant <Host> section. In most cases, the easiest place to add this is simply toward the end of the server.xml file:
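For example, the closing portion of a typical server.xml might end up looking like this (surrounding content abbreviated):

            <Valve className="org.apache.catalina.valves.RemoteIpValve" />

          </Host>
        </Engine>
      </Service>
    </Server>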
If needed, this can be narrowed by providing your own value for the internalProxies attribute, which specifies a regular expression matching the IP addresses of any proxies whose "X-Forwarded-For" headers should be trusted. For example, to trust only "X-Forwarded-For" received from localhost:
Applying the updated Tomcat configuration
Once an appropriate RemoteIpValve has been specified, Tomcat must be restarted to force rereading of server.xml:
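Assuming the stock CentOS / RHEL Tomcat service name of "tomcat":

    sudo systemctl restart tomcat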
How to manually configure Keeper Connection Manager SSL termination using NGINX
SSL termination is the recommended method of encrypting communication between users' browsers and Guacamole. It involves configuring a reverse proxy like Nginx or Apache to handle only the SSL/TLS portion of the connection, accepting encrypted HTTPS externally while passing unencrypted HTTP to the Tomcat instance hosting Guacamole internally.
If Nginx has been configured for SSL/TLS, there should be a server section within this configuration that defines the certificate and private key used by Nginx, and which requires Nginx to listen on the standard HTTPS port (443). To proxy Guacamole through Nginx such that Guacamole communication is encrypted, a new location section will need to be added within this server section:
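A minimal sketch of such a location block follows; the /guacamole/ path and port 8080 are assumptions based on the default Tomcat deployment described elsewhere in this document, and "HOSTNAME" is explained below:

    location /guacamole/ {
        proxy_pass http://HOSTNAME:8080/guacamole/;
        proxy_http_version 1.1;
        proxy_buffering off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
    }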
where “HOSTNAME” is the hostname or IP address of the internal Guacamole server. This can also be replaced with "http://127.0.0.1:8080" in most cases.
While a typical proxy configuration for Nginx may only specify the proxy_pass and proxy_http_version directives, Guacamole requires additional configuration due to the nature of the application:
proxy_buffering off disables buffering of packets sent to/from Guacamole. By default, Nginx will buffer communication between itself and the browser, effectively disrupting the stream of events and updates required for remote desktop. Without disabling buffering, the Guacamole connection will at best be slow, and at worst not function at all.
The X-Forwarded-For header must be explicitly set to ensure that the IP addresses logged by Guacamole are correct. Without explicitly adding this header (and configuring Tomcat to trust this header), all connections will appear to come from the NGINX server.
The Upgrade and Connection headers are required parts of the WebSocket protocol. If omitted, WebSocket will not function correctly, and Guacamole will fall back to HTTP streaming, which is less efficient.
After the above changes have been made, NGINX must be reloaded to force rereading of its configuration files:
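On CentOS / RHEL this is typically:

    sudo systemctl reload nginx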
If you are using SELinux (the default on both CentOS and RHEL), you must also configure SELinux to allow HTTPD implementations like Nginx to establish network connections:
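The relevant SELinux boolean is httpd_can_network_connect; for example:

    # -P makes the change persistent across reboots.
    sudo setsebool -P httpd_can_network_connect 1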
If Guacamole is not accessible through NGINX after the service has been reloaded, check the NGINX logs and/or journalctl to verify that the syntax of your configuration changes is correct. Such errors will result in NGINX refusing to reload its configuration, or refusing to start up entirely. If you do not see any errors from NGINX, verify that you have configured SELinux to allow NGINX to connect to the network and check the SELinux audit logs (/var/log/audit/audit.log) for AVC denials.
"X-Forwarded-For" from NGINX (manual deployment only)
This section applies only if you have manually deployed Guacamole under your own version of Tomcat. This will usually only be the case if:
You have chosen to manually deploy Guacamole under your own install of Apache Tomcat or JBoss, rather than use the provided version of Tomcat.
You are maintaining a deployment of Glyptodon Enterprise that was originally installed before the 2.5 release (2021-09-16).
If you deployed Guacamole automatically (the default and recommended method), this has already been configured for you within the bundled version of Tomcat.
For the client address sent by NGINX via "X-Forwarded-For" to be correctly trusted as the true client address, you will need to add a "RemoteIpValve" entry within /etc/tomcat/server.xml. If this is not specified, the client address will be logged as the address of the internal proxy, which is not usually desirable.
The easiest way to add the required entry is to copy the example server.xml file provided with the kcm package, replacing the old /etc/tomcat/server.xml:
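As noted in the Apache section above, the copy is a one-liner along these lines; the source path below is a placeholder, not the actual location of the bundled file:

    # Placeholder source path; use "rpm -ql kcm" (or the relevant kcm package)
    # to locate the example server.xml shipped on your system.
    sudo cp /path/to/example/server.xml /etc/tomcat/server.xml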
The example server.xml file defines:
A single HTTP connector listening on port 8080.
A RemoteIpValve with all settings at their default values.
By default, the RemoteIpValve will trust "X-Forwarded-For" from all private networks (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, and both IPv4 and IPv6 localhost). If you need this range to be narrowed, or if you have already made manual edits to server.xml, you will need to make these changes manually.
If editing server.xml manually (rather than using the example server.xml), a <Valve> which trusts "X-Forwarded-For" from most common private addresses would be specified as:
This <Valve> must be added within the relevant <Host> section. In most cases, the easiest place to add this is simply toward the end of the server.xml file:
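For example, the closing portion of a typical server.xml might end up looking like this (surrounding content abbreviated):

            <Valve className="org.apache.catalina.valves.RemoteIpValve" />

          </Host>
        </Engine>
      </Service>
    </Server>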
If needed, this can be narrowed by providing your own value for the internalProxies attribute, which specifies a regular expression matching the IP addresses of any proxies whose "X-Forwarded-For" headers should be trusted. For example, to trust only "X-Forwarded-For" received from localhost:
Applying the updated Tomcat configuration
Once an appropriate RemoteIpValve has been specified, Tomcat must be restarted to force rereading of server.xml:
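Assuming the stock CentOS / RHEL Tomcat service name of "tomcat":

    sudo systemctl restart tomcat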
Detailed configuration instructions for SSL Termination with Nginx
Nginx is not directly available within the CentOS or RHEL repositories, but is available within the EPEL repository. The EPEL repository must be enabled before Nginx can be installed:
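On CentOS, EPEL can typically be enabled with the epel-release package (RHEL systems instead install the EPEL release RPM published by the Fedora project):

    sudo yum install epel-release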
Once EPEL has been enabled, Nginx can be installed by installing the "nginx" package. Installing this package will install a version of Nginx that is newer than version 1.3 and thus is explicitly supported by Keeper Connection Manager:
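For example:

    sudo yum install nginx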
As with other standard CentOS / RHEL packages providing a service, the Nginx service will not be started by default after the "nginx" package is installed. It must be started manually, and then configured to automatically start if the system is rebooted:
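For example:

    sudo systemctl start nginx
    sudo systemctl enable nginx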
By default, Nginx will listen on port 80 and handle only unencrypted HTTP connections, serving the static contents of /usr/share/nginx/html. The rest of this document will cover reconfiguring Nginx such that it serves only HTTPS (using HTTP only to redirect browsers back to HTTPS), with a placeholder for the minimal additional configuration needed to provide SSL termination. The resulting configuration will be used instead of Nginx's default configuration, and will be split up across two modular files:
/etc/nginx/conf.d/redirect-http.conf
Configures Nginx to respond to all HTTP requests with an HTTP 301 ("Moved Permanently") redirect to the same resource under HTTPS.
/etc/nginx/conf.d/guacamole.conf
Configures Nginx to accept HTTPS connections using a specified certificate and private key. Once SSL configuration is complete, this file will also configure Nginx to proxy HTTPS connections to Guacamole such that HTTP is used only internally (SSL termination).
The main configuration file for Nginx, /etc/nginx/nginx.conf, contains a server block which serves the contents of /usr/share/nginx/html over HTTP:
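The exact contents vary between Nginx releases, but the default block generally resembles the following abbreviated sketch:

    server {
        listen       80 default_server;
        listen       [::]:80 default_server;
        server_name  _;
        root         /usr/share/nginx/html;

        # Additional default directives (includes, error pages, etc.) follow here.
    }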
The contents of this server block will conflict with the HTTP redirect that will need to be created, and should be commented-out or removed. Comment out every line with a # at the beginning:
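After commenting, the block within /etc/nginx/nginx.conf looks roughly like this (abbreviated):

    #    server {
    #        listen       80 default_server;
    #        listen       [::]:80 default_server;
    #        server_name  _;
    #        root         /usr/share/nginx/html;
    #
    #        ...
    #    }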
With the conflicting server block removed, create a new file, /etc/nginx/conf.d/redirect-http.conf, to contain the new server block and redirect. The server block required is fairly straightforward. Rather than serve static content, it instructs Nginx to issue an HTTP 301 redirect ("Moved Permanently") for all requests, informing the browser that whatever they are requesting can instead be found at the same location using HTTPS:
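A minimal sketch of such a redirect-only server block:

    server {
        listen 80;
        listen [::]:80;

        # Redirect all HTTP requests to the same resource over HTTPS.
        return 301 https://$host$request_uri;
    }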
To apply the new configuration, simply reload Nginx:
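For example:

    sudo systemctl reload nginx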
To configure Nginx to accept HTTPS connections, you must obtain an SSL certificate for the domain associated with your server. There are two primary ways of doing this:
Let's Encrypt: A non-profit certificate authority that provides automated certificate issuance and renewal. Let's Encrypt certificates are free.
Obtaining a certificate from a certificate authority: Several commercial certificate authorities exist, many of which are also domain registrars. Unlike Let's Encrypt, obtaining a certificate from a commercial certificate authority will usually cost money.
The Let's Encrypt service uses a utility called "certbot" to automatically retrieve a certificate after the Let's Encrypt service has remotely verified that you control your domain. This utility is provided within the CentOS / RHEL repositories and must first be installed:
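With EPEL enabled as described above, certbot can typically be installed with:

    sudo yum install certbot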
The Let's Encrypt service will verify that you control your domain by reaching back over the internet, attempting to establish an HTTP connection with your server at the domain provided and reading the contents of the .well-known/ directory within the web root. This directory will be created and populated automatically by certbot when it runs, but the /etc/nginx/conf.d/redirect-http.conf file created earlier will not allow this directory to be read due to the nature of the HTTPS redirect. The server block within redirect-http.conf must first be edited to add a location which functions as an exception to this redirect, allowing Let's Encrypt to remotely access .well-known/:
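One way to express this exception, assuming the default /usr/share/nginx/html web root:

    server {
        listen 80;
        listen [::]:80;

        # Allow Let's Encrypt to read the challenge files without being redirected.
        location /.well-known/ {
            root /usr/share/nginx/html;
        }

        # Redirect everything else to HTTPS.
        location / {
            return 301 https://$host$request_uri;
        }
    }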
To apply the new configuration, reload Nginx:
The certbot tool can then be used to automatically retrieve a certificate for your domain. For example, if your Guacamole server is running at "remote.example.com", you can obtain a certificate from Let's Encrypt by running:
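For example, using certbot's webroot plugin and the default Nginx web root (adjust the domain and paths to match your environment):

    sudo certbot certonly --webroot -w /usr/share/nginx/html -d remote.example.com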
The resulting certificate and private key will then be stored within a subdirectory of /etc/letsencrypt/live/ with the same name as your domain. To apply the certificate and additionally host content via HTTPS, a new configuration file needs to be created: /etc/nginx/conf.d/guacamole.conf. Similar to the HTTP redirect, this file will contain a server block that defines how Nginx should serve HTTPS connections:
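A minimal sketch of this server block, assuming the "remote.example.com" domain used in the example above:

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
        server_name remote.example.com;

        ssl_certificate     /etc/letsencrypt/live/remote.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/remote.example.com/privkey.pem;

        # Location block pointing to Tomcat will go here

    }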
Let's Encrypt certificates are intentionally short-lived, expiring after 90 days. To ensure that a new certificate is retrieved in time, it is recommended that certbot be run daily. This can be achieved by creating a script within /etc/cron.daily called /etc/cron.daily/certbot containing the following:
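A minimal sketch of such a script:

    #!/bin/sh
    # Attempt renewal; reload Nginx afterward so a renewed certificate takes effect.
    certbot renew --quiet --post-hook "systemctl reload nginx"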
Be sure to grant execute permission on the script so that the cron service will be able to execute it:
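For example:

    sudo chmod 755 /etc/cron.daily/certbot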
The certbot tool will then be automatically invoked at least once per day, renewing the certificate and reloading Nginx as needed.
With Nginx now deployed with a proper SSL certificate, the final step in providing SSL termination for Guacamole is to configure Nginx as a reverse proxy for Guacamole. This process involves adding a location block within the Nginx server block for HTTPS, in this case the /etc/nginx/conf.d/guacamole.conf configuration file that was just created. Documentation for the full content and meaning of the required location block is provided above; it should be added in the space noted by the "Location block pointing to Tomcat will go here" placeholder comment.
Optional NGINX Client Certificate configuration for advanced protection
Device-based access security with Keeper Connection Manager can be implemented using NGINX client certificates. A client certificate is installed into the web browser on each of your users' approved devices, and the server will only accept communication from a device with the client certificate installed.
The steps to activate this advanced level of protection are described below.
(1) Create a Certificate Authority (CA) Key
Generate a CA Key with a strong auto-generated passphrase. Make sure to store the passphrase in your Keeper vault.
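A minimal sketch using openssl (the ca.key filename is illustrative; -aes256 causes openssl to prompt for the passphrase, which you should generate and store in your vault):

    openssl genrsa -aes256 -out ca.key 4096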
(2) Create a CA Certificate
A certificate is created with the CA Key. When answering the questions, you can leave the Common Name and Email empty. Save the information that you entered for Country, State, Locality, and Organization, because you may need these later when renewing the certificate.
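For example (the validity period is illustrative):

    openssl req -new -x509 -days 3650 -key ca.key -out ca.crt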
Side Note: to analyze the certificate parameters, you can run the below command.
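For example:

    # Print the certificate contents in human-readable form.
    openssl x509 -in ca.crt -noout -text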
(3) Create a Client Key
For the end-user devices, a client key must be generated. You can decide if you would like to generate one key for all devices, or each user can generate their own key and request a certificate. The process is up to you. Generate a client key with a strong auto-generated passphrase. Make sure to store the passphrase in your Keeper vault.
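For example (client.key is an illustrative filename):

    openssl genrsa -aes256 -out client.key 4096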
(4) Create a CSR
For each Client Key, generate a CSR to create a signed certificate.
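For example:

    # Answer the prompts as appropriate for the user or device.
    openssl req -new -key client.key -out client.csr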
(5) Sign the CSR with the CA Key
You'll need to enter the CA passphrase from Step 1 to sign the request.
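For example (the serial number and validity period are illustrative):

    openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt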
At this point, you now have a signed Client Certificate (client.crt).
(6) Convert the Client Certificate to PKCS#12
To import the certificate into a web browser, a .pfx file in PKCS#12 format is typically required. Generate the client.pfx file using the command below. A passphrase will be required; this passphrase will be provided to each of the users who need to install the certificate, so it should be used specifically for this purpose.
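For example:

    # You will be prompted for the client key passphrase and then for an export
    # passphrase to protect the resulting client.pfx file.
    openssl pkcs12 -export -out client.pfx -inkey client.key -in client.crt -certfile ca.crt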
(7) Add to NGINX Config
Add the below line to your Keeper Connection Manager NGINX configuration file to block access without a certificate. Make sure to upload the CA certificate to the folder path designated.
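One way to express this within the HTTPS server block (the ca.crt path is illustrative):

    ssl_client_certificate /etc/nginx/ca.crt;
    ssl_verify_client optional;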
In the Location block, add this section to send the user a 403 error if the client cert is not installed:
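For example:

    if ($ssl_client_verify != SUCCESS) {
        return 403;
    }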
Make sure that the ca.crt file is located in a folder that NGINX can access.
After updating the configuration, restart NGINX.
(8) Test the configuration
Before installing the client certificate on the user's machine, load up the Keeper Connection Manager login screen to ensure that a 403 error is sent:
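For example, using curl against the illustrative domain used earlier (expect a 403 response until a client certificate is presented):

    curl -I https://remote.example.com/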
(9) Install the Client Certificate
For each end-user client device that will need access to Keeper Connection Manager, you will need to install the client certificate into the user's browser or machine. The installation of client certificates varies by platform.
On Windows
Double-click or right-click the client certificate (client.pfx) from Step 6 and enter the client certificate passphrase.
Restart the browser.
The next time Keeper Connection Manager is loaded, you can approve the certificate.
On Mac OS - Chrome
Import the client.pfx file by double-clicking or loading into the Keychain login Certificates section. In the "Trust" section of the certificate, mark as Always Trust.
Restart the browser and load the Keeper Connection Manager login screen to select the certificate.
On Mac OS - Firefox
Open Firefox > Preferences > search for Certificates and select Your Certificates tab. Click "Import" and select the client.pfx certificate file. Complete the import.
After successful import, the Keeper Connection Manager login screen will load.
After setting up the client certificate configuration, the following errors are common.
If NGINX fails to start, journalctl -xe might show something like "Permission denied: fopen('/path/to/ca.crt','r')". This might occur if the folder permissions and file permissions are not set properly.
If the folder and file permissions are set properly and you're still receiving this error, the restorecon command will repair the SELinux security context for the file:
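For example, assuming the illustrative /etc/nginx/ca.crt path used above:

    sudo restorecon -v /etc/nginx/ca.crt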