As a fully supported platform for IBM Integration Bus Version 10.0 production workloads, Docker brings a wealth of simplifications to the administration of the IBM Integration Bus runtime lifecycle. Docker allows for the automatic and repeatable creation of runtime environments, fostering consistency in testing, and reliability in development and operation.
Both IBM Integration Bus and IBM MQ maintain GitHub repositories of artefacts that provide a simple means of building and running Docker images with these products. The Dockerfiles and supporting scripts published there are a great way to get started with combining IBM integration and messaging technology with Docker; they are also highly open to customization, through which they can be incorporated into complex develop-test-deploy processes.
We describe here a number of simple steps that allow effective use of IBM Integration Bus in Docker containers, and give examples of specific ways in which the official MQ and IIB Docker projects may be customized. We also provide the full set of customizations as an attachment to this article. However, unlike the official MQ and IIB Docker projects, the scripts in the attachment are provided as is, with no provision for support or warranty.
|iib-docker.zip, 19,224 bytes
Customizing the Official Dockerfiles
A Dockerfile is a set of instructions for building a Docker image, which in turn can be used to start containers. Images are stored in local or remote repositories or registries, and once downloaded to the local repository, they can be run using Docker. Both the IIB and the MQ GitHub projects provide Dockerfiles for running the development editions of the respective products.
To build the image, Docker executes the instructions in the Dockerfile one by one. Since each instruction causes a new image layer to be created, and the number of layers an image may contain can be limited, keeping the number of Dockerfile instructions to a minimum is worth some consideration.
Running IBM Integration Bus and IBM MQ in the Same Container
To run IBM Integration Bus with a default queue manager (i.e., through a local connection), the two products need to execute in the same container, although this is not the only arrangement in which the two products can be used together; see the applicable Knowledge Center section on client connections. For this, we may create an image containing both an Integration Bus and an MQ installation.
The FROM instruction allows taking advantage of the layering capability in Docker images; for example, an IBM Integration Bus image could be built on top of an IBM MQ image, making the MQ binaries available in the Integration Bus environment. Assuming we have an image tagged mq:8006 containing a deployment of MQ, we could start the Dockerfile of our IBM Integration Bus image with
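A minimal sketch of such a Dockerfile header, assuming the parent image is tagged mq:8006 as above:

```dockerfile
# Base the IIB image on the existing MQ image, so the MQ binaries are
# already present in the lower layers of the resulting image.
FROM mq:8006
```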
The mq:8006 image may in turn depend on another IBM MQ image, say mq:8002, and install the latest fix pack on top of it. The FROM instruction, however, only specifies a dependency between the images themselves; it says nothing about what to run inside a container based on the image once started. The dependent Dockerfile may therefore need to copy its own installation and management scripts to the image, though it may be able to make use of the pre-existing scripts provided by the parent.
Installing Products from Your Own Binaries
The officially published Dockerfiles download the developer editions of the product binaries via HTTP; users wishing to run their own licensed editions, however, may prefer to source the installation binaries from a local repository, for example over a secure protocol such as SSH. While in theory we could just use the COPY instruction to copy the binaries from the local filesystem to the image, doing so may leave us with large images containing files that we no longer need but cannot easily remove. Remember that the COPY instruction in a Dockerfile, just like any other instruction, creates a new filesystem layer, and subsequent attempts to remove the file copied earlier may merely hide it without freeing up the occupied space.
The solution is to do the full install, including the initial SSH fetch and subsequent deletion of the installation archive, within a single RUN instruction, possibly by calling a Bash script copied to the image earlier:
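A hypothetical sketch of this instruction pair (the file names and the /tmp/build staging path are assumptions):

```dockerfile
# Copy build-time assets: shell scripts, MQSC scripts and SSH credentials.
COPY *.sh *.mqsc ssh/ /tmp/build/
# Perform the fetch, install and cleanup in a single RUN instruction,
# so that no image layer retains the installation archive.
RUN /tmp/build/install.sh && rm -rf /tmp/build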
The COPY instruction above copies all shell scripts, MQSC scripts, and SSH credentials to the image, allowing the install.sh script to make an SSH connection to the Docker host and fetch an IBM MQ installation image:
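A hypothetical excerpt from such an install.sh; the key path and the assumption that the Docker host is reachable as the container's default gateway are ours:

```shell
# SSHUSER, MQIMAGEPATH and MQIMAGE must be set before this point.
# The Docker host is assumed to be the container's default gateway.
DOCKERHOST=$(ip route | awk '/^default/ {print $3}')
scp -i /tmp/build/id_rsa -o StrictHostKeyChecking=no \
    "${SSHUSER}@${DOCKERHOST}:${MQIMAGEPATH}/${MQIMAGE}" /tmp/
```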
Note that the SSHUSER, MQIMAGEPATH, and MQIMAGE environment variables need to be set appropriately before executing the code snippet above. The same technique (and the same keys) can be used in dependent images to retrieve IBM MQ fix pack install images, IBM Integration Bus install images, etc., and the respective software packages may be installed using the methods published in the official Dockerfiles.
Configuring the Installed Products
One of the Dockerfile excerpts above includes a COPY instruction to add .mqsc files from the build host to the image. A similar step is present in the official MQ Dockerfile, and this step allows for the automatic configuration of the queue manager that is created in the container. The management script provided with the official MQ Dockerfile contains the following snippet, which is executed right after the newly created queue manager is started:
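A reconstruction in the spirit of that snippet follows; the exact wording in the official management script may differ:

```shell
# Run every .mqsc file copied into the image against the new queue manager;
# MQ_QMGR_NAME holds the queue manager's name.
for MQSC_FILE in /etc/mqm/*.mqsc; do
  if [ -f "${MQSC_FILE}" ]; then
    runmqsc "${MQ_QMGR_NAME}" < "${MQSC_FILE}"
  fi
done
```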
This causes all .mqsc scripts copied over from the Docker host to be run on the new queue manager, giving the builder of the image flexibility in customizing the configuration. In a similar spirit, a command could be added to the iib_manage.sh script shipped with the Dockerfile for IBM Integration Bus to call a user configuration script after starting the new integration node, or the entire setup procedure could be moved to a separate initialization script. The scripts attached to this post include iib-init.sh, which we call from our modified iib-manage.sh script, and which in turn calls iib-config.sh to perform some configuration of the IBM Integration Bus runtime.
As our updated IBM Integration Bus Dockerfile builds on the mq:8006 image (for which we are also including the Dockerfile), we’ve made the iib-manage.sh script also call mq-init.sh from the underlying image, performing the necessary MQ setup steps.
When needed, it is also possible to manually execute administrative commands on either MQ or IIB as follows:
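For example, assuming a running container named myNode, a queue manager named QM1 (an assumption), and the default IIB v10 install path:

```shell
# Run an MQSC command against the queue manager inside the container.
docker exec -it myNode bash -c 'echo "DISPLAY QLOCAL(*)" | runmqsc QM1'

# List the resources deployed to the NODE1 integration node; mqsilist
# requires the mqsiprofile environment to be sourced first.
docker exec -it myNode bash -c \
  '. /opt/ibm/iib-10.0.0.7/server/bin/mqsiprofile && mqsilist NODE1'
```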
Docker Data Volumes and Volume Containers
Docker provides a means to persist data beyond the lifetime of a container through volumes and volume containers. Their significance in our use case stems in part from the fact that the MQ queue manager data directory must be stored on a volume or volume container, and doing the same for the Integration Node data directory can also be useful. A container can be made to use a volume through the VOLUME instruction in a Dockerfile, or the -v or --volume argument to the docker run command.
To run the iib:10007 image (which can be built using the Dockerfile supplied with this post under iib/10/runtime, as shown in the section “Building Docker Images Using the Supplied Artefacts”) using volume containers for the Queue Manager and the Integration Node data directories, one might use the following command:
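A sketch of such a command; the container-side data directories (/var/mqm, /var/mqsi) and the published ports (1414 for MQ, 7800 for the HTTP listener, 4414 for the web UI) are assumptions that should be matched to your Dockerfiles:

```shell
docker run --name myNode \
  -v mqdata:/var/mqm \
  -v iibdata:/var/mqsi \
  -p 1414:1414 -p 7800:7800 -p 4414:4414 \
  -d iib:10007
```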
This will result in the Queue Manager and Integration Node runtime states being persisted to the mqdata and iibdata volume containers, respectively, preserving them beyond the lifetime of the myNode container. In other words, if the myNode container is deleted, and a new one is started with a command similar to the one above, the new container will start from the state in which myNode was shut down.
Note that the new container need not run the exact same runtime version (it does, however, have to be able to interpret the stored state); it is possible to start myNode with version 10.0.0.6 of the IBM Integration Bus runtime, later shut it down, and use the saved state in a new container running IBM Integration Bus 10.0.0.7 or 10.0.0.8. This capability makes fix-pack-level migrations trivial.
Running the IBM Integration Toolkit
A careful inspection of the official IBM Integration Bus Dockerfile reveals that the image is built without installing the IBM Integration Toolkit: the IIB install image is unpacked without the tools subdirectory, in an effort to save disk space. This essentially results in a “runtime-only” image. It is, however, possible to run the Integration Toolkit in Docker, with or without a full GUI, as demonstrated by the Dockerfiles supplied with this post in iib/10/toolkit and iib/10/deploy. Making use of the filesystem layering capability inherent in Docker, one may create a full-featured, Toolkit-capable image by extracting only the tools subdirectory of the IIB installation archive on top of the “runtime-only” image.
The iibtk:10007 image can be built from the Dockerfile in the iib/10/toolkit subdirectory of the attachment. This image was designed to use TightVNC for GUI access, and opens port 5901 for this purpose when run with the following command:
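A sketch of the command, publishing the VNC port; the volume mounts mirror the earlier runtime example and the container-side paths are assumptions:

```shell
docker run --name myToolkit \
  -v mqdata:/var/mqm \
  -v iibdata:/var/mqsi \
  -p 5901:5901 \
  -d iibtk:10007
```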
The Integration Toolkit can be started in the image using the following command, then used via a VNC connection to port 5901. The default VNC password is set to p4s5w0rd in iib/10/toolkit/install.sh.
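A hedged sketch of that command; the install path, the VNC display number, and the use of the v10 iib launcher are assumptions about how the image was built:

```shell
# Start the Toolkit on the VNC display inside the running container.
docker exec -it myToolkit bash -c \
  'export DISPLAY=:1 && \
   . /opt/ibm/iib-10.0.0.7/server/bin/mqsiprofile && \
   /opt/ibm/iib-10.0.0.7/iib toolkit'
```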
Building and Deploying Integrations Using Ephemeral Containers
As mentioned above, the Toolkit can also be used without a full GUI in automated build processes. The mqsicreatebar command, supplied with the Toolkit, can perform a full build of a Toolkit workspace with the Toolkit running in headless mode. Note that the mqsipackagebar command, which has comparable functionality but does not require the Toolkit, is preferred over mqsicreatebar where applicable. In this post, we use mqsicreatebar to demonstrate that the more complex requirements mandated by this command can also be satisfied.
The iibdeploy:10007 image may be built using the Dockerfile provided under iib/10/deploy. It depends directly on iibtk:10007, and its key features are effected by the following lines in the install.sh script invoked during the image build process:
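A hypothetical reconstruction of the two sed edits, demonstrated here on stand-in files under /tmp (the real install.sh edits the script copies inherited from the iibtk:10007 layer; the build.sh path is an assumption):

```shell
# Stand-in for the inherited iib-init.sh, which starts a VNC server.
cat > /tmp/iib-init.sh <<'EOF'
echo "initializing IIB runtime"
vncserver :1 -geometry 1280x1024
EOF

# Stand-in for the inherited iib-manage.sh monitor() function, whose
# body is a 6-line loop keeping the container alive.
cat > /tmp/iib-manage.sh <<'EOF'
monitor()
{
  while true
  do
    sleep 5
  done
}
EOF

# 1. Run the build script instead of starting a VNC server.
sed -i 's|^vncserver.*|/usr/local/bin/build.sh|' /tmp/iib-init.sh

# 2. Delete the 6 lines following the monitor() entry point, i.e. the
#    monitoring loop that would otherwise keep the container running.
sed -i '/^monitor()/{n;N;N;N;N;N;d}' /tmp/iib-manage.sh
```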
The above excerpt invokes sed twice to adjust the operational scripts inherited from the underlying image layer, as defined by iibtk:10007.
The first invocation simply changes iib-init.sh (recall that this is a script to apply generic initialization to the Integration Bus runtime) to execute build.sh, which is a script defining the build procedure in our new image, instead of starting a VNC server — we don’t need interactive GUI access in this image.
The second invocation removes the 6 lines following the entry point of the monitor() function defined in iib-manage.sh (inherited from the iib:10007 image, and supplied in iib/10/runtime). These 6 lines constitute the main monitoring loop responsible for keeping the container from shutting down before the user chooses to stop it. The build image therefore performs an orderly shutdown of IIB and MQ as soon as it has completed the build process; hence the “ephemeral” attribute.
As the container is shut down, all runtime state is persisted to the volume containers mqdata and iibdata declared on startup:
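A sketch of the startup command; note the absence of any -p port mappings, and the same assumed container-side paths as before:

```shell
docker run --name myBuild \
  -v mqdata:/var/mqm \
  -v iibdata:/var/mqsi \
  iibdeploy:10007
```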
Note that the above command does not expose any ports; the container is short-lived, and it requires no interaction initiated from the outside world. Once it terminates, the runtime state that it produced and saved during its lifetime can be used to start a “runtime-only” container, this time with the appropriate port mappings (if you reuse the name of an earlier container, myNode in our case, you may first need to delete its remnants using docker rm):
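For example (ports and container-side paths are the same assumptions as in the earlier runtime command):

```shell
# Remove any previous container of the same name, then start a fresh
# runtime-only container from the persisted state.
docker rm myNode
docker run --name myNode \
  -v mqdata:/var/mqm \
  -v iibdata:/var/mqsi \
  -p 1414:1414 -p 7800:7800 -p 4414:4414 \
  -d iib:10007
```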
The build process that we’ve implemented in iibdeploy:10007 is described in the build.sh script in iib/10/deploy:
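A hypothetical sketch of that script; the repository URL, workspace path, and install path are assumptions, while NODE1, server1, and MQEcho are the names used in this article:

```shell
#!/bin/bash
set -e
. /opt/ibm/iib-10.0.0.7/server/bin/mqsiprofile

# 1. Fetch the Toolkit workspace from source control.
git clone ssh://git@example.com/integration/workspace.git /tmp/workspace

# 2. Build the MQEcho application into a BAR file; mqsicreatebar needs
#    a display, which xvfb-run provides in this headless container.
xvfb-run mqsicreatebar -data /tmp/workspace -b /tmp/MQEcho.bar \
  -a MQEcho -cleanBuild

# 3. Deploy the BAR file to the server1 Integration Server on the
#    local Integration Node NODE1.
mqsideploy NODE1 -e server1 -a /tmp/MQEcho.bar
```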
We first clone our workspace through Git, then use xvfb-run to start mqsicreatebar, which we use to build the MQEcho application in our workspace, and package it in a BAR file. Finally, we use mqsideploy to deploy the BAR file to the server1 Integration Server defined in the NODE1 Integration Node, both of which should be running at this point.
After the above steps, the Queue Manager as well as the Integration components are shut down, and the container is terminated.
Building Docker Images Using the Supplied Artefacts
The archive attached to this post includes a number of Dockerfiles and supporting scripts that may be used to build the hierarchy of Docker images shown in Figure 1.
|Fig. 1.: Dependencies Among Images Built from the Supplied Dockerfiles|
The hierarchy is rooted in the ubuntu:14.04 image publicly available on Docker Hub, on top of which we build our own set of images supporting MQ and IIB.
The dependent images can be built using the following commands invoked from the root directory of the extracted archive, in the order given. Note that the scripts are not expected to work out-of-the-box, as access credentials (e.g., SSH keys), URLs, filesystem paths, and various other parameters will likely need to be configured. Please carefully inspect and update all scripts appropriately before starting the build process.
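A sketch of the build sequence; the IIB directories are those named in the article, while the MQ directory names (mq/8/base, mq/8/fixpack) are assumptions to be checked against the extracted archive:

```shell
docker build -t mq:8002         mq/8/base
docker build -t mq:8006         mq/8/fixpack
docker build -t iib:10007       iib/10/runtime
docker build -t iibtk:10007     iib/10/toolkit
docker build -t iibdeploy:10007 iib/10/deploy
```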
Executing the above should result in the addition of the following images to your Docker registry:
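The result can be checked with docker images; assuming the tags used throughout this article:

```shell
docker images | grep -E '^(mq|iib|iibtk|iibdeploy) '
# The listing should include mq:8002, mq:8006, iib:10007,
# iibtk:10007 and iibdeploy:10007.
```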
Once the images have been built, they can be run using the example commands given above in the article.