Tutorial

Use IBM Cloud Hyper Protect Crypto Services to offload NGINX TLS

Offloading TLS to a load balancer like NGINX allows for a single, centralized point of control and management

By

Sandeep Batta,

Heng Wang

Transport Layer Security (TLS) encrypts communications between the client and the server to protect against eavesdroppers and man-in-the-middle attacks. "TLS offloading" is the process of using a Hardware Security Module (HSM) to perform the TLS encryption and decryption instead of letting, and trusting, the web server to do so, which significantly reduces the risk of key compromise. TLS is sometimes incorrectly referred to as SSL, a deprecated protocol that performed the same function as TLS.

"TLS offloading" relieves a web server of the processing burden of encrypting and decrypting traffic. Offloading TLS to a separate server helps with the following tasks:

  • Inspecting client requests for dangerous content that could compromise the security of web servers
  • Validating the identity of clients before any access is allowed to web resources
  • Obfuscating URLs and fixing issues related to publishing applications with hard-coded elements
  • Preventing the transfer of specific types of content based on patterns such as file extensions
  • Redirecting traffic based on content type, such as sending all image requests to a server that's optimized for serving images
  • Caching web content on the load balancer, thus removing the need to re-request frequently accessed content from the web server
  • Re-encrypting traffic going to the servers for additional security

For more background, see self-signed certificate, certificate signing request (CSR), and CA certificates.
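
For illustration only, here is a minimal sketch of how a private key, certificate signing request, and self-signed certificate are typically created with the openssl CLI. The file names and subject are placeholder values; in this tutorial, the private key is protected by the HSM instead of sitting on disk.

     # Generate a 2048-bit RSA private key (on disk here purely for illustration;
     # with HSM-backed TLS offload the private key never leaves the hardware).
     openssl genrsa -out server.key 2048

     # Create a certificate signing request (CSR) for that key.
     openssl req -new -key server.key -out server.csr -subj "/CN=www.example.com"

     # Either send the CSR to a CA, or self-sign it for testing (valid for 365 days).
     openssl x509 -req -in server.csr -signkey server.key -out server.crt -days 365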

Learning objectives

This tutorial explains how to use an NGINX container (nginx-tls-offload) to perform TLS offloading with private encryption keys that are protected by a FIPS 140-2 Level 4 HSM, provided on IBM Cloud by Hyper Protect Crypto Services, which runs on IBM LinuxONE.

Figure: TLS offload on NGINX with Hyper Protect Crypto Services

Note: A similar configuration is also possible with IBM Hyper Protect Services on a LinuxONE in a customer's on-premises environment.

Estimated time

If all of the prerequisites are in place, it should take you no more than 60 minutes to complete this tutorial.

Prerequisites

  1. Set up an IBM Cloud Pay-As-You-Go account, if you don't have one already.
  2. Provision an instance of IBM Cloud Hyper Protect Crypto Services.
  3. After the instance is provisioned, copy the values for the following variables from the Overview tab; they are used later in this tutorial (you can optionally export them as shell variables, as shown after this list):

    • YOUR-HPCS-INSTANCE-EP11-ENDPOINT-URL
    • YOUR-HPCS-INSTANCE-EP11-ENDPOINT-PORT
    • YOUR-HPCS-INSTANCE-ID
  4. Get set up with the prerequisites to initialize your HPCS instance.

  5. Perform the Key Ceremony by carefully following the procedure outlined in Initialize your HPCS instance.
  6. Create an API key to access your HPCS instance and copy it for the YOUR-IBMCLOUD-API-KEY variable that you will use later in this tutorial.
  7. Install curl and wget. If you are on an Ubuntu-based Hyper Protect Virtual Server, use the following command:

    • apt update && apt install -y curl wget
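
To keep these values handy for the docker run command later in this tutorial, you can optionally export them as shell variables. This is only a convenience sketch: the variable names are arbitrary, and the placeholders in angle brackets must be replaced with the values you copied above.

     # Convenience variables (hypothetical names); substitute the values copied from
     # your HPCS instance's Overview tab and the API key you created.
     export HPCS_EP11_ENDPOINT="<YOUR-HPCS-INSTANCE-EP11-ENDPOINT-URL>"
     export HPCS_EP11_PORT="<YOUR-HPCS-INSTANCE-EP11-ENDPOINT-PORT>"
     export HPCS_INSTANCE_ID="<YOUR-HPCS-INSTANCE-ID>"
     export IBMCLOUD_API_KEY="<YOUR-IBMCLOUD-API-KEY>"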

Steps

Here are the steps for completing this tutorial:

Step 1. Configuration

  1. Log on to your Linux instance. You can provision a Virtual Server Instance (VSI) in IBM Cloud Virtual Private Cloud (VPC) for this tutorial.
  2. Create a working directory for this tutorial and change into it (the ./nginx-tls-offload directory itself is created in step 4).
  3. Download the TAR file for your specific architecture:

    amd64

     wget -O nginx-tlsoffload.tar.gz https://ibm.box.com/s/bwjcid8jhu7gqybv4mqi5nhp98i7dllz
    

    s390x

     wget -O nginx-tlsoffload.tar.gz https://ibm.box.com/s/wnz8hsg1gnr686rzmrd16me85qk53cqu
    
  4. Untar the file:

     mkdir nginx-tls-offload
     tar -xvzf nginx-tlsoffload.tar.gz -C nginx-tls-offload --strip-components 1
    
  5. Build the Docker image:

     cd ./nginx-tls-offload
     docker build -t nginx-tls-offload:latest .
    
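Optionally, before moving on, you can confirm that the image was built (a quick check with a standard Docker command):

     # List the newly built image; nginx-tls-offload with the latest tag should appear.
     docker images nginx-tls-offload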

Step 2. Run your configuration

  1. Run the Docker container:

     docker run -d -p 2080:2080 \
       -e LIBGREP11_CONNECTION_ADDRESS="<YOUR-HPCS-INSTANCE-EP11-ENDPOINT-URL>" \
       -e LIBGREP11_CONNECTION_PORT="<YOUR-HPCS-INSTANCE-EP11-ENDPOINT-PORT>" \
       -e LIBGREP11_IAMAUTH_INSTANCEID="<YOUR-HPCS-INSTANCE-ID>" \
       -e LIBGREP11_IAMAUTH_APIKEY="<YOUR-IBMCLOUD-API-KEY>" \
       -e LIBGREP11_CONNECTION_TLS_CACERT=/etc/ssl/certs/ca-certificates.crt \
       -e LIBGREP11_IAMAUTH_TLS_CACERT=/etc/ssl/certs/ca-certificates.crt \
       --name nginx-tlsoffload-container nginx-tls-offload:latest
    
  2. Test if the Docker container is performing TLS offloading as expected by using the following command:

     curl -k https://localhost:2080
    

    If the nginx-tls-offload container is working as expected, you should see the following response:

     Welcome to openssl engine & grep11 service!
     If you see this page, the openssl engine and grep11 service were successfully installed and working.
    
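To confirm that the TLS handshake is terminated by the container, you can also inspect the certificate served on port 2080. This optional check uses only the standard openssl CLI:

     # Print the subject, issuer, and validity dates of the certificate that
     # NGINX presents during the TLS handshake on port 2080.
     echo | openssl s_client -connect localhost:2080 2>/dev/null | openssl x509 -noout -subject -issuer -dates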

You have successfully offloaded your TLS workloads on an NGINX load balancer using keys managed by IBM Cloud Hyper Protect Crypto Services.

Step 3. Troubleshooting

If anything goes wrong, check the container's status and logs first (see the commands after this list), and then do the following:

  1. Stop and remove the Docker container: docker rm -f nginx-tlsoffload-container
  2. Delete the Docker image: docker rmi nginx-tls-offload:latest
  3. Repeat Steps 1 and 2 to rebuild the Docker image and run the Docker container again.
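
The following optional diagnostics use standard Docker commands; connection or authentication failures caused by a mistyped LIBGREP11_* value typically show up in the container logs:

     # Show the container's status, including whether it exited unexpectedly.
     docker ps -a --filter name=nginx-tlsoffload-container

     # Review the container's output for connection or authentication errors.
     docker logs nginx-tlsoffload-container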

Summary

Offloading TLS to a load balancer such as NGINX allows for a single, centralized point of control and management. Certificates and private keys need to be managed in only one place rather than on multiple servers, and policies can be applied and maintained centrally. This greatly reduces administration overhead and also allows the security role to be separated from the application owner role.

You can try the technique described here with other load balancers, web application firewalls, caching servers, and so on. You can also build machine learning models that inspect the traffic the load balancer rejects, and use what they learn over time to improve the safety of your web application environment.

Acknowledgements

We would like to thank Luis Carlos Silva for his contributions to the original tutorial.