In IBM Integration Bus v9 and v10, a user would typically create a high availability solution in one of the following ways:
- Multi-instance integration nodes with IBM MQ
- An existing high availability manager, for example HACMP or Veritas Cluster Server (VCS)
With the ability to run ACE in Docker and Kubernetes environments, we can take advantage of one of the main benefits of the Kubernetes container orchestration engine: the greater reliability and stability it brings to distributed applications through dynamic scheduling of containers.
At the application level, it is the pods that provide high availability. Kubernetes allows you to run multiple pods (redundancy), and in the event of a pod failure, Kubernetes spins up a replacement pod.
In this article we demonstrate how to define replicas for your ACE deployment so that the Kubernetes platform ensures the defined number of pods is always running. We also show that when one of the pods or containers crashes or goes down for some reason, another copy is spawned automatically. This way you can ensure the availability of your services at all times.
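Under the covers, a replicated deployment is just a standard Kubernetes Deployment with a replica count. As a minimal sketch of the idea (the names and labels below are illustrative, not the actual objects the ACE Helm chart creates), the same effect can be achieved by applying a manifest directly:

```shell
# Sketch only: a Deployment that keeps two ACE pods running at all times.
# The name, labels, and image below are illustrative; the ACE Helm chart
# generates its own equivalents of these objects.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ace-demo
spec:
  replicas: 2            # Kubernetes maintains exactly this many pods
  selector:
    matchLabels:
      app: ace-demo
  template:
    metadata:
      labels:
        app: ace-demo
    spec:
      containers:
      - name: ace
        image: mycluster.icp:8500/default/ace11fp01   # ACE image used in this article
EOF
```

The ReplicaSet controller behind the Deployment continuously compares the desired count (2) with the observed count and creates or removes pods to close any gap, which is exactly the behaviour demonstrated in the failure tests below.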
- Select the ACE service Helm chart from the ICP Catalog.
- Click the Configure button to define the values for configurable parameters of your Helm chart.
Along with various Integration Server related parameters, you will find an option called 'Replica Count'. This count represents the number of pods of your application that you want running at all times. You can set this value to 1 or more. For demonstration purposes we have set it to 2.
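If you prefer the command line to the ICP console, the same parameter can typically be overridden at install time with `helm --set`. The release name and chart path below are placeholders, and the parameter name `replicaCount` is an assumption based on common chart conventions; check the chart's values.yaml for the exact name:

```shell
# Install the ACE chart with two replicas instead of clicking through
# the Configure page. "acefp01" and the chart path are placeholders;
# "replicaCount" is an assumed parameter name (verify in values.yaml).
helm install --name acefp01 ibm-charts/ibm-ace-server-prod \
  --set replicaCount=2
```

Note the Helm v2 style (`--name`) that shipped with ICP at the time; Helm v3 drops this flag and takes the release name as the first argument.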
- Click Install to complete your deployment.
- From the menu, navigate to Workloads → Deployments. You will see the current information on the number of replicas running, the names of the pods, and the hosts where they are running.
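The same information is available from the command line, assuming you have kubectl configured against the cluster. The deployment name below matches the release shown in the `docker ps` output later in this article; the label selector is an assumption about how the chart labels its pods:

```shell
# Show desired vs. current replica counts for the ACE deployment
kubectl get deployment acefp01-ibm-ace-prod-fp0

# List the pods with their IPs and the worker nodes they are scheduled on
# (the "release" label is an assumed chart convention)
kubectl get pods -o wide -l release=acefp01
```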
- To simulate the failure of a pod, we purposely remove one pod from the Action menu. Here we select the first pod entry in the table below.
- As soon as you remove the pod, a new pod is spawned. You can observe that there is a new pod name and a new pod IP, which indicate that the new pod is up and ready to serve the workload.
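The same pod failure can be simulated from the command line. Deleting a pod that belongs to a Deployment leaves the ReplicaSet one pod short, so the controller creates a replacement immediately (the pod name below is taken from the `docker ps` output later in this article, but will differ in your cluster):

```shell
# Delete one pod; the ReplicaSet controller notices the shortfall
kubectl delete pod acefp01-ibm-ace-prod-fp0-6fbfb54649-gb2tx

# Watch the replacement pod appear with a new name and IP
kubectl get pods -w
```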
- We can simulate another failure situation, where a Docker container crashes, and show how a new container is spawned under the pod.
We log in to one of the worker node hosts and list the currently running Docker containers using the 'docker ps' command. Note down the container ID of the Docker container running the ACE image.
- To simulate the Docker container failure, we purposely stop the container using 'docker stop containerID' and remove it by running 'docker rm containerID'.
Instantly, a new Docker container is spawned and assigned to the pod. Take a look at the 'docker ps' output again: you will notice that a new container ID has been assigned to the same pod.
```
# docker ps
CONTAINER ID   IMAGE                                  COMMAND                  CREATED          STATUS          PORTS   NAMES
c5b30b742a11   mycluster.icp:8500/default/ace11fp01   "/bin/bash -c '/us..."   20 minutes ago   Up 20 minutes           k8s_acefp01-ibm-ace-prod-fp0_acefp01-ibm-ace-prod-fp0-6fbfb54649-gb2tx_default_1d3ee3f1-c0a5-11e8-8aea-005056a362db_0
2e8c5c3ea416   ibmcom/pause:3.0                       "/pause"                 20 minutes ago   Up 20 minutes           k8s_POD_acefp01-ibm-ace-prod-fp0-6fbfb54649-gb2tx_default_1d3ee3f1-c0a5-11e8-8aea-005056a362db_0
```
```
root@bceglc249:~# docker ps
CONTAINER ID   IMAGE                                  COMMAND                  CREATED          STATUS          PORTS   NAMES
a4b874a8e902   mycluster.icp:8500/default/ace11fp01   "/bin/bash -c '/us..."   15 seconds ago   Up 14 seconds           k8s_acefp01-ibm-ace-prod-fp0_acefp01-ibm-ace-prod-fp0-6fbfb54649-gb2tx_default_1d3ee3f1-c0a5-11e8-8aea-005056a362db_1
2e8c5c3ea416   ibmcom/pause:3.0                       "/pause"                 22 minutes ago   Up 22 minutes           k8s_POD_acefp01-ibm-ace-prod-fp0-6fbfb54649-gb2tx_default_1d3ee3f1-c0a5-11e8-8aea-005056a362db_0
```
In this article we saw how to configure your ACE deployment for high availability and how Kubernetes ensures the resilience and availability of your application services.