With the release of Docker 1.12, building a swarm of Docker engines has become much easier and container orchestration is now built-in. This article explains how to build your Docker swarm, create and scale a simple web application service built with Liberty, and then orchestrate a rolling update of the service. It also covers healthchecks, a feature introduced in Docker 1.12.

A Docker swarm is a self-healing group of Docker engines that allows you to distribute your containers across multiple hosts to provide high availability and scalability for your application. Swarms are secure by default, using mutually authenticated TLS to provide authentication, authorisation and encryption for communication between every node in the swarm.

A service can be thought of as a replicated, distributed, load-balanced process running on a swarm of Docker engines. Engines in swarm mode are self-healing: they continuously monitor the runtime state of services and reconcile it with the declared state. This means that if a node fails, the swarm recreates all the containers that were on the failed node on a healthy one. The number of instances of a service can be scaled up and down, and updates to the service definition can be rolled out.

Before you start

Make sure you have installed:

  • Docker Machine
  • Docker
  • VirtualBox

Creating your swarm

To create your three machines, run the following commands:

docker-machine create -d virtualbox node1

docker-machine create -d virtualbox node2

docker-machine create -d virtualbox node3
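If you prefer, the three create commands can be scripted. A minimal sketch, assuming docker-machine is on your PATH (create_nodes is a hypothetical helper, not part of the article's required steps):

```shell
#!/bin/bash
# Create the three VirtualBox machines used in this article in a loop.
create_nodes() {
  for n in node1 node2 node3; do
    docker-machine create -d virtualbox "$n"
  done
}
```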

Initialise the swarm

Run the following command to initialise the swarm:

docker-machine ssh node1 docker swarm init --advertise-addr $(docker-machine ip node1):2377

The init command returns a token. Use this token to join your other two nodes to the swarm as workers using the following commands:

docker-machine ssh node2 docker swarm join --token <token-provided-by-init> $(docker-machine ip node1):2377

docker-machine ssh node3 docker swarm join --token <token-provided-by-init> $(docker-machine ip node1):2377
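Rather than copy-pasting the token, you can also fetch it from the manager with docker swarm join-token, whose -q flag prints just the token. A sketch wrapping the join in a hypothetical helper function (node1 is the manager created above):

```shell
#!/bin/bash
# Join a worker node to the swarm, fetching the current worker token
# from the manager rather than copy-pasting it.
# join_worker is a hypothetical helper, not a Docker command.
join_worker() {
  local worker=$1 manager=${2:-node1}
  local token
  # -q prints only the token, which is convenient for scripting
  token=$(docker-machine ssh "$manager" docker swarm join-token worker -q)
  docker-machine ssh "$worker" docker swarm join \
    --token "$token" "$(docker-machine ip "$manager"):2377"
}
```

With this in place, `join_worker node2` and `join_worker node3` are equivalent to the two join commands above.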

node1 is the swarm manager, so point your command line at it by defining which Docker engine you are running commands against:

eval $(docker-machine env node1)

To see your container orchestration in action, run Docker’s visualizer tool:

docker run -it -d -p 8080:8080 -e HOST=$(docker-machine ip node1) -v /var/run/docker.sock:/var/run/docker.sock manomarks/visualizer

Navigate to the visualizer where you should see three empty nodes:

http://<node1-ip>:8080



Create a service using the WASdev Ferret sample image, exposing port 9080 on all of the swarm nodes:

docker service create --name ferret -p 9080:9080 wasdev/ferret:1.1 

Verify that the server is running by viewing the Ferret servlet:

http://<node1-ip>:9080/ferret

When you declare a service, the swarm's routing mesh publishes the port on every node, even nodes that aren't running a container for that service, and forwards requests to a node that is. You can see this by loading the Ferret servlet on any of the nodes' IP addresses.
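You can also check this from the command line. A sketch using a hypothetical ferret_on helper, assuming curl is installed and the docker-machine node names used above:

```shell
#!/bin/bash
# Request the Ferret servlet through a given node's published port.
# The routing mesh should answer on every node, including nodes that
# are not running a ferret task. ferret_on is a hypothetical helper.
ferret_on() {
  local node=$1
  if curl -sf "http://$(docker-machine ip "$node"):9080/ferret" >/dev/null; then
    echo "$node: responded"
  else
    echo "$node: no response"
  fi
}
```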

Scale up the service to five instances:

docker service scale ferret=5

Check the Docker visualizer to see how Docker has distributed these containers across your nodes:



To verify that Docker is distributing the load across the different instances, keep refreshing the Ferret servlet; you should see the localHost value cycle through your container IDs:

http://<node1-ip>:9080/ferret

We will now update the service to the wasdev/ferret:1.2 image using a rolling update, two containers at a time with a 30-second delay between batches:

docker service update --image wasdev/ferret:1.2 --update-delay 30s --update-parallelism 2 ferret

To observe the rolling update, keep refreshing the servlet. You will see some containers still running wasdev/ferret:1.1 (white background) while others are running wasdev/ferret:1.2 (dark blue background):

http://<node1-ip>:9080/ferret

Healthchecks

Healthchecks are a new feature in Docker 1.12. They allow containers to declare to the swarm when they are ready to receive incoming requests. A healthcheck command can be added to a container at runtime, with the --health-cmd option to docker run, or at build time with the HEALTHCHECK instruction in the Dockerfile. Both the wasdev/ferret:1.1 and wasdev/ferret:1.2 images have a very simple healthcheck implemented at build time. The files used are included below for reference.

Dockerfile:

FROM websphere-liberty

# Install curl and then clean up after (Healthcheck uses curl)
RUN apt-get update \
    && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*

COPY ferret-1.2.war /config/dropins/ferret.war

# Add the healthcheck script, then register it as the container's healthcheck
ADD healthcheck /opt/ibm/docker
HEALTHCHECK CMD /opt/ibm/docker/healthcheck
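By default Docker probes every 30 seconds; the interval, timeout and retry count can all be tuned on the HEALTHCHECK instruction. A sketch with illustrative values (these are not the settings the original image uses):

```dockerfile
# Illustrative tuning of the same healthcheck: probe every 10s, fail a
# probe after 5s, and mark the container unhealthy after 3 failures.
HEALTHCHECK --interval=10s --timeout=5s --retries=3 \
    CMD /opt/ibm/docker/healthcheck
```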

Healthcheck Script:

#!/bin/bash

# Curl the Servlet URL for a 200 response code
response=$(curl -sL -w "%{http_code}" localhost:9080/ferret -o /dev/null)

# If the response is 200 (OK) then healthy (exit 0) else unhealthy (exit 1)
if [ "$response" -eq 200 ]
then
    exit 0
else
    exit 1
fi
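The decision logic above could be made slightly more defensive. A sketch (healthy_response is a hypothetical helper, not part of the original image) that accepts any 2xx status and treats an empty or non-numeric value from a failed curl as unhealthy:

```shell
#!/bin/bash
# A more defensive variant of the check above: treat any 2xx status as
# healthy and anything else (including an empty string) as unhealthy.
healthy_response() {
  case "$1" in
    2[0-9][0-9]) return 0 ;;  # healthy
    *)           return 1 ;;  # unhealthy
  esac
}
```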

That covers swarm mode and healthchecks, the major pieces of functionality added to Docker with the release of 1.12.
