Use IBM Cloud Hyper Protect Crypto Services to offload NGINX TLS

SSL offloading is the process of terminating SSL-based encryption for incoming traffic before it reaches a web server, so that the server itself does not have to decrypt the data. Handling SSL traffic is compute intensive because every connection must be encrypted and decrypted. SSL, and its successor Transport Layer Security (TLS), encrypt communications between the client and the server to protect against eavesdropping and man-in-the-middle attacks.

SSL offloading relieves a web server of the processing burden of encrypting and decrypting SSL traffic. Moving this work to a separate server or proxy also helps with the following tasks (a minimal configuration sketch follows the list):

  • inspecting client requests for dangerous content that could compromise the security of web servers
  • validating the identity of clients before any access is allowed to web resources
  • obfuscating URLs and fixing issues related to publishing applications with hard-coded elements
  • preventing the transfer of specific types of content based on patterns such as file extensions
  • redirecting traffic based on content type, such as sending all image requests to a server that’s optimized for serving images
  • caching web content on the load balancer, thus removing the need to re-request frequently accessed content from the web server
  • re-encrypting traffic going to the servers for additional security
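
As a rough illustration of the idea (not part of this tutorial's configuration; the backend names, ports, and certificate paths below are placeholders), a proxy such as NGINX can terminate TLS and either pass plain HTTP to a backend or re-encrypt the traffic:

    # Illustrative sketch only; backend names, ports, and paths are placeholders.
    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/cert/server-cert.pem;
        ssl_certificate_key /etc/nginx/cert/server-key.pem;

        # TLS is terminated here; this backend receives plain HTTP
        location / {
            proxy_pass http://app-backend:8080;
        }

        # Re-encrypt traffic to a backend that requires HTTPS
        location /secure/ {
            proxy_pass https://secure-backend:8443;
        }
    }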

For more background, see self-signed certificate, certificate signing request, and CA certificates.
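
To make those terms concrete, the following standard, software-only OpenSSL commands create a private key with a certificate signing request (CSR) and a self-signed certificate. They are shown here only as background; later in this tutorial the key is generated through the grep11 engine instead, so that it stays protected by the HSM:

    # Private key plus a certificate signing request (CSR) to submit to a CA
    openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr -subj '/CN=www.example.com/'

    # Private key plus a self-signed certificate (no CA involved)
    openssl req -x509 -newkey rsa:2048 -nodes -keyout selfsigned.key -out selfsigned.crt -days 365 -subj '/CN=localhost/'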

Learning objectives

This tutorial explains how to use a Docker container (nginx-tls-offload) to perform SSL offloading on an NGINX web server using private keys protected by the hardware security module (HSM) of your IBM Cloud Hyper Protect Crypto Services instance.

Figure: SSL offload on NGINX with Hyper Protect Crypto Services

Prerequisites

  1. Set up an IBM Cloud account, if you don’t already have one.
  2. Provision and initialize an IBM Cloud Hyper Protect Crypto Services instance.
    • Make a note of the EP11 endpoints for your instance.
    • Create an API key to be used to access the instance and make a note of it (a CLI sketch follows this list).
  3. Install curl if you don't already have it (for example, on Debian or Ubuntu: apt update && apt install -y curl).
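
If you use the IBM Cloud CLI, one possible way to look up the instance details and create the API key is sketched below; the key name hpcs-nginx-key and the output file name are placeholders, and you can do the same through the IBM Cloud console instead:

    # Log in and list your resource instances to find the Hyper Protect Crypto Services instance
    ibmcloud login
    ibmcloud resource service-instances

    # Show the instance details; the GUID is the value used later for LIBGREP11_IAMAUTH_INSTANCEID
    ibmcloud resource service-instance "<Your-HPCS-instance-name>"

    # Create an API key and save it to a file
    ibmcloud iam api-key-create hpcs-nginx-key -d "API key for NGINX TLS offload" --file hpcs-nginx-key.json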

Estimated time

If all of the prerequisites are in place, it should take you no more than 60 minutes to complete this tutorial.

Steps

Here are the steps for completing this tutorial:

  1. Configuration
  2. Run your configuration
  3. Troubleshooting

Step 1. Configuration

  1. Create a working directory: ./nginx-ssl-offload
  2. Create the NGINX environment file ./nginx-ssl-offload/nginx-env.txt with the following entries (by default, NGINX removes environment variables inherited from its parent process, so these env directives are needed to keep the GREP11 connection and IAM settings visible at run time):
     env LIBGREP11_CONNECTION_ADDRESS;
     env LIBGREP11_CONNECTION_PORT;
     env LIBGREP11_CONNECTION_TLS_CACERT;
     env LIBGREP11_IAMAUTH_ADDRESS;
     env LIBGREP11_IAMAUTH_INSTANCEID;
     env LIBGREP11_IAMAUTH_APIKEY;
     env LIBGREP11_IAMAUTH_TLS_CACERT;
    
  3. Copy the sample NGINX SSL configuration file to ./nginx-ssl-offload/nginx-ssleng.conf.
    • Make sure that the paths given for the ssl_certificate directives are correct (a sketch of a possible server block follows this list).
    • Enable the access log as required.
    • Double-check the location blocks.
  4. Copy the sample OpenSSL configuration file to ./nginx-ssl-offload/openssl.cnf.
    • Add new_oids as required.
    • Change any other defaults according to your use case.
  5. Copy the patch for NGINX with OpenSSL to ./nginx-ssl-offload/openssl.fixnginxinit.patch.
  6. Create a sample landing page at ./nginx-ssl-offload/ssleng.index.html that confirms SSL offload is working.
    • If this page is displayed, the SSL offload function is working as expected.
  7. Acquire the DEB package that is used for this tutorial and place it at ./nginx-ssl-offload/grep11.deb.

  8. Create the startup script ./nginx-ssl-offload/start.sh:

    #!/bin/bash

    # Work in the directory where NGINX expects its certificate and key
    cd /etc/nginx/cert

    # Generate EC domain parameters for the prime256v1 (P-256) curve through the grep11 engine
    openssl ecparam -engine grep11 -name prime256v1 -out prime256v1-param.pem

    # Create a self-signed certificate (valid for 10 years) whose EC private key is generated and
    # protected by the Hyper Protect Crypto Services HSM via the grep11 engine
    openssl req -engine grep11 -x509 -sha256 -nodes -days 3650 -subj '/CN=localhost/' -newkey EC:prime256v1-param.pem -keyout nginx-server-prikey-prime256v1-my.pem -out nginx-server-cert-prime256v1.pem

    # Run NGINX in the foreground so that it stays the container's main process
    nginx -g 'daemon off;'
    
  9. Copy the sample Dockerfile to ./nginx-ssl-offload/Dockerfile.
  10. Build the Docker image:
    cd ./nginx-ssl-offload
    docker build -t nginx-tls-offload:latest .
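
For orientation, here is a hypothetical sketch of what the TLS server block in nginx-ssleng.conf might look like; the actual sample file may differ. The certificate and key paths match the files that start.sh generates in step 8, and the listen port matches the one published in Step 2. The web root and the grep11 engine registration (assumed to happen through the openssl.cnf from step 4) are assumptions:

    # Hypothetical sketch only; the actual sample nginx-ssleng.conf may differ.
    server {
        listen 2080 ssl;
        server_name localhost;

        # Certificate and HSM-backed key generated by start.sh through the grep11 engine
        ssl_certificate     /etc/nginx/cert/nginx-server-cert-prime256v1.pem;
        ssl_certificate_key /etc/nginx/cert/nginx-server-prikey-prime256v1-my.pem;

        location / {
            root  /usr/share/nginx/html;   # assumed web root
            index ssleng.index.html;
        }
    }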
    

Step 2. Run your configuration

  1. Run the Docker container:
    docker run -d \
       -p 2080:2080 \
       -e LIBGREP11_CONNECTION_ADDRESS="<Your-HPCS-Instance-EP11-Endpoint-URL>" \
       -e LIBGREP11_CONNECTION_PORT="<Your-HPCS-Instance-EP11-Endpoint-Port>" \
       -e LIBGREP11_IAMAUTH_INSTANCEID="<Your-HPCS-instance-ID>" \
       -e LIBGREP11_IAMAUTH_APIKEY="<Your-API-Key>" \
       -e LIBGREP11_CONNECTION_TLS_CACERT=/etc/ssl/certs/ca-certificates.crt \
       -e LIBGREP11_IAMAUTH_TLS_CACERT=/etc/ssl/certs/ca-certificates.crt \
       --name <nginxName> nginx-tls-offload:latest
    
  2. Test whether the Docker container is performing SSL offloading as expected by using the following command (the -k option is required because the server certificate is self-signed):
    curl -k https://localhost:2080
    
    If the nginx-tls-offload container is working as expected, you should see the following response:
    Welcome to openssl engine & grep11 service!
    If you see this page, the openssl engine and grep11 service were successfully installed and working.
    

You have successfully offloaded your TLS workloads on an NGINX load balancer using keys managed by IBM Cloud Hyper Protect Crypto Services.
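
Optionally, you can also confirm that the certificate served on port 2080 is the EC (prime256v1) certificate that start.sh generated. The following standard OpenSSL client commands inspect the handshake and the certificate:

    # Expect "CN = localhost", an EC public key, and "NIST CURVE: P-256"
    openssl s_client -connect localhost:2080 </dev/null 2>/dev/null | \
      openssl x509 -noout -subject -text | grep -E 'Subject:|Public Key Algorithm|NIST CURVE'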

Step 3. Troubleshooting

If anything goes wrong, do the following (a diagnostic sketch follows this list):

  1. Stop and remove the Docker container: docker rm -f <nginxName>.
  2. Delete the Docker image: docker rmi nginx-tls-offload:latest.
  3. Repeat the previous steps to rebuild the Docker image and run the Docker container.
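
The diagnostic sketch mentioned above uses standard Docker and OpenSSL commands; run it before removing the container, where <nginxName> is the container name that you chose in Step 2:

    # Show the NGINX and grep11 engine output from the container
    docker logs <nginxName>

    # Confirm that the grep11 engine is visible to OpenSSL inside the running container
    docker exec <nginxName> openssl engine grep11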

Summary

Offloading SSL to a load balancer such as NGINX allows for a single, centralized point of control and management. Certificates and private keys need to be managed in only one place rather than on multiple servers, and policies can be applied and managed centrally. This greatly reduces administration overhead and also allows the security role to be separated from the application owner role.

You can try the technique described here with other load balancers, web application firewalls, caching servers, and similar components. You can also build machine learning models that inspect the content dropped at the offload point and improve over time, helping to keep your web application environment safe.