To deploy a highly available application to Docker Datacenter, you need to consider load balancing and service discovery. As the application is scaled horizontally, it needs to be able to first discover the new instance and then equally distribute the load across these new instances. This article covers three potential scenarios for deploying an application using Liberty as the application server.

The three load balancing scenarios demonstrate the following topology alternatives: NGINX with Interlock, a static topology with IBM HTTP Server, and IBM HTTP Server with Liberty collectives.

NGINX and Interlock


(Figure: NGINX and Interlock topology)

This example uses NGINX for load balancing and Interlock, a dynamic, event-driven Docker plug-in, for service discovery. In this topology, Interlock listens to the event stream from the Universal Control Plane (UCP) controller for container events and updates the NGINX configuration file accordingly. When requests are sent to NGINX, it distributes the load across all of the application containers in the swarm.
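Interlock's role can be pictured as one small regeneration step: each container start or stop event triggers a rewrite of the NGINX upstream block. The following Python sketch models only that regeneration step; the function name and backend addresses are illustrative, and real Interlock obtains the backend list from the UCP event stream rather than as an argument.

```python
def render_upstream(name, backends):
    """Render an NGINX upstream block from a list of (host, port) backends.

    Simplified model of the regeneration Interlock performs whenever a
    container starts or stops; the real plug-in discovers the backends
    from the UCP/Docker event stream instead of taking them as input.
    """
    lines = [f"upstream {name} {{"]
    for host, port in backends:
        lines.append(f"    server {host}:{port};")
    lines.append("}")
    return "\n".join(lines)

# Hypothetical addresses for two Liberty containers
config = render_upstream("liberty_app", [("10.0.0.2", 9080), ("10.0.0.3", 9080)])
print(config)
```

Scaling the app service up or down simply changes the backend list, and the upstream block is rewritten to match.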

Before you start:

To set up the NGINX and Interlock topology:

  1. Set the SWARM_HOST environment variable using the fully qualified domain name of your UCP controller. You can find this value on the home page of your UCP node.

    export SWARM_HOST=tcp://$UCP_FQDN:2376

  2. Pull and start the initial containers for the application:

    docker-compose up -d

  3. Scale up NGINX to the number of nodes you have in your Docker Datacenter system:

    docker-compose scale nginx=<number-of-nodes>

  4. Scale up the Interlock and Liberty instances:

    docker-compose scale interlock=3 app=3

  5. Send a cURL request to the IP address of each host that NGINX is running on to confirm that the application is working. You should see the hostname change depending on which Liberty container serves the request. The -H flag sets the Host header on the request, which NGINX uses to determine which servers the request should be routed to:

    curl -H 'Host: test.lib' http://<node-ip>/ferret/

The Ferret application displays information about the HTTP request and the server.

Static topology


(Figure: static topology)

In this topology, we use IBM HTTP Server (IHS) and the WebSphere Application Server plug-in for load balancing, but the topology is static. Service discovery is done by running a configuration script, configure.sh. The script pulls the plug-in configuration from each Liberty container, runs a container to merge the collected configurations into a single file, and then places that file into each IHS instance. The script must be re-run each time an IHS or Liberty container is created or removed.
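The merge that configure.sh performs can be sketched as follows. This simplified Python model combines the Server entries from several per-container plug-in configurations into a single cluster; the XML shape here is a pared-down approximation, as the real plugin-cfg.xml carries many more elements (routes, URI groups, transports) that the script also has to merge.

```python
import xml.etree.ElementTree as ET

def merge_plugin_configs(xml_docs):
    """Combine the <Server> entries from several plugin-cfg.xml-style
    documents into one document with a single ServerCluster.

    Simplified sketch of the merge step configure.sh performs; the
    cluster name "merged_cluster" is an illustrative placeholder.
    """
    merged = ET.Element("Config")
    cluster = ET.SubElement(merged, "ServerCluster", Name="merged_cluster")
    for doc in xml_docs:
        root = ET.fromstring(doc)
        for server in root.iter("Server"):
            cluster.append(server)
    return ET.tostring(merged, encoding="unicode")

# Two hypothetical per-container configs, one Liberty server each
doc1 = '<Config><ServerCluster Name="c1"><Server Name="app_1"/></ServerCluster></Config>'
doc2 = '<Config><ServerCluster Name="c2"><Server Name="app_2"/></ServerCluster></Config>'
print(merge_plugin_configs([doc1, doc2]))
```

Because the merge is a one-shot snapshot of the running containers, the result goes stale as soon as the topology changes, which is why the script has to be re-run after every scale operation.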

Before you start:

To set up the static topology:

  1. Use the compose up command to create the initial containers and overlay network:

    docker-compose up -d

  2. Scale up your application to the necessary size. Because IHS is mapped to port 80, you cannot have more IHS instances than you have nodes:

    docker-compose scale app=3 ihs=3

  3. Configure your topology. Every time a container is started or stopped you will need to re-run this script:

    ./configure.sh

  4. Load the application by going to http://<ihs-node-ip>/ferret/ in a web browser.

The Ferret application displays information about the HTTP request and the server.

IBM HTTP server and collectives


(Figure: IBM HTTP Server and collectives topology)

In this example, we use IBM HTTP Server (IHS) and the WebSphere Application Server plug-in for load balancing and Liberty collectives for service discovery. In this topology, the collective controller constantly updates the WebSphere Application Server plug-in with details of the topology, so IHS knows when each application server joins the collective and is ready to handle requests. When requests are sent to IHS, it distributes the load across all of the application containers in the swarm. The advantage of this topology over the NGINX and Interlock topology is that the collective is aware of what is running inside the containers. For instance, if a container is running but the application is not yet ready to receive requests, IHS knows this and does not route requests to that instance until it is ready.
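That readiness awareness can be modeled in a few lines. In this Python sketch, the member names and states are hypothetical, and the real plug-in learns each member's state from the collective controller rather than from a local dictionary; the point is simply that routing decisions use application state, not just container state.

```python
def ready_members(members):
    """Return only the members the collective reports as started.

    Models why IHS never routes to a container whose application is
    still initializing: the collective controller reports per-server
    state, whereas a plain container check only sees that the
    container process is up.
    """
    return [name for name, state in members.items() if state == "STARTED"]

# Hypothetical collective view: one application is still starting up
members = {"app_1": "STARTED", "app_2": "STARTING", "app_3": "STARTED"}
targets = ready_members(members)
print(targets)  # only the started members receive requests
```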

To set up the IBM HTTP Server and collectives topology:

  1. Run the routingHA.sh script. This script creates the overlay network, sets up three controllers, adds them all to a replica set for high availability, and generates the plugin-cfg.xml for the collective. The script then sets up two IHS instances with the WebSphere Application Server plug-in installed, copies over the plugin-cfg.xml, and generates the keys necessary to communicate with the controllers. Because each IHS instance maps to port 80, every IHS instance needs its own node; the routingHA.sh script assumes that exactly 2 nodes have port 80 available, one for each IHS instance.

    ./routingHA.sh

  2. Create as many app instances as you like:

    docker-compose scale app=3

  3. Check the Liberty Admin Center to make sure that your application instances have started. Go to https://<controller-node-ip>:<controller-secure-port>/adminCenter in your browser and click Explore.

  4. Load the application. Go to http://<ihs-node-ip>/ferret/. The Ferret application displays information about the HTTP request and the server.

That covers three potential scenarios for deploying Liberty to your Docker Datacenter. The scenarios would also work on Docker Swarm, with only a couple of certificate-mounting changes to the NGINX and Interlock example. Stay tuned to future beta releases to keep up to date with upcoming Docker support.
