As a fully supported platform for IBM Integration Bus Version 10.0 production workloads, Docker brings a wealth of simplifications to the administration of the IBM Integration Bus runtime lifecycle. Docker allows for the automatic and repeatable creation of runtime environments, fostering consistency in testing and reliability in development and operation.
Both IBM Integration Bus and IBM MQ maintain GitHub repositories of artefacts that provide a simple means of building and running Docker images with these products. The Dockerfiles and supporting scripts published therein are a great way to get started with combining IBM integration and messaging technology with Docker; they are also highly open to customization, through which they can be incorporated into complex develop-test-deploy processes.
We describe here a number of simple steps that allow effective use of IBM Integration Bus in Docker containers, and give examples of specific ways in which the official MQ and IIB Docker projects may be customized. We also provide the full set of customizations as an attachment to this article. However, unlike the official MQ and IIB Docker projects, the scripts in the attachment are provided as is, with no provision for support or warranty.

iib-docker.zip, 19,224 bytes
SHA-256: E15337AB262FD32D89A49621875500DB9A38F0BD2F9AB5491DBFC31E3F957922
Customizing the Official Dockerfiles
A Dockerfile is a set of instructions for building a Docker image, which in turn can be used to start containers. Images are stored in local or remote repositories or registries, and once downloaded to the local repository, they can be run using Docker. Both the IIB and the MQ GitHub projects provide Dockerfiles for running the development editions of the respective products.
To build the image, Docker executes the instructions in the Dockerfile one-by-one. Since each instruction causes a new image layer to be created, and the number of layers one can have in an image may be limited, keeping the number of Dockerfile instructions to the minimum is worth some consideration.
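For example, chaining related shell commands into a single RUN instruction keeps the layer count down; a minimal sketch (the package installed here is illustrative only):

# one RUN instruction, one layer: update, install, and clean up together
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*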
Running IBM Integration Bus and IBM MQ in the Same Container
To run IBM Integration Bus with a default queue manager (i.e., through a local connection), the two products need to execute in the same container (although this is not the only arrangement in which the two products can be used together; see the applicable Knowledge Center section on client connections). For this, we may create an image containing both an Integration Bus and an MQ installation.
The FROM instruction allows taking advantage of the layering capability in Docker images; for example, an IBM Integration Bus image could be built on top of an IBM MQ image, making the MQ binaries available in the Integration Bus environment. Assuming we have an image tagged mq:8006 containing a deployment of MQ, we could start the Dockerfile of our IBM Integration Bus image with
FROM mq:8006
The mq:8006 image may in turn depend on another IBM MQ image, say mq:8002, and install the latest fix pack on top of it. The FROM instruction, however, only specifies dependencies between the images themselves; it says nothing about what to run inside a container based on the images once started. The dependent Dockerfile may therefore need to copy its own installation and management scripts to the image, though it may be able to make use of the pre-existing scripts provided by the parent.
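A sketch of such a fix-pack image follows; the script name mq-fixpack-install.sh is our own, hypothetical choice, and the script itself would fetch, apply, and delete the 8.0.0.6 fix pack:

FROM mq:8002
# copy the install script, then fetch, apply, and clean up the fix pack
# in one RUN instruction so that no intermediate layer retains the archive
COPY mq-fixpack-install.sh /tmp/
RUN ["/bin/bash", "/tmp/mq-fixpack-install.sh"]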
Installing Products from Your Own Binaries
The officially published Dockerfiles download the developer editions of the product binaries via HTTP; however, users wishing to run their own licensed editions may want to source the install images from a local repository, for example over a secure protocol such as SSH. While in theory we could just use the COPY instruction to copy the binaries from the local filesystem to the image, doing so may leave us with large images containing files that we no longer need but cannot easily remove: the COPY instruction in a Dockerfile, just like any other instruction, creates a new filesystem layer, and subsequent attempts to remove the file copied earlier merely hide it without freeing up the occupied space.
The solution is to do the full install, including the initial SSH fetch and subsequent deletion of the installation archive, within a single RUN instruction, possibly by calling a Bash script copied to the image earlier:
COPY *.sh *.mqsc id.rsa host.key /tmp/
RUN ["/bin/bash", "/tmp/install.sh"]
The COPY instruction above copies all shell scripts, MQSC scripts, and SSH credentials to the image, allowing the install.sh script to make an SSH connection to the Docker host and fetch an IBM MQ installation image:
# find the IP address of the Docker host
HOSTIP=`ip route show 0.0.0.0/0 | grep -Eo 'via \S+' | awk '{ print \$2 }'`

# ...

# set up SSH access for root
mkdir -m 700 ~/.ssh
cat /tmp/host.key | sed s/localhost/${HOSTIP}/ >> ~/.ssh/known_hosts
mv /tmp/id.rsa ~/.ssh/id_rsa
chmod 644 ~/.ssh/known_hosts
chmod 600 ~/.ssh/id_rsa
rm -rf /tmp/host.key

# ...

# fetch and extract MQ install image
mkdir /tmp/mq
cd /tmp/mq
scp ${SSHUSER}@${HOSTIP}:${MQIMAGEPATH}/${MQIMAGE} .
tar -xzvf ${MQIMAGE}

Note that the SSHUSER, MQIMAGEPATH, and MQIMAGE environment variables need to be set appropriately before executing the code snippet above. The same technique (and the same keys) can be used in dependent images to retrieve IBM MQ fix pack install images, IBM Integration Bus install images, etc., and the respective software packages may be installed using the methods published in the official Dockerfiles.
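One way to supply these values without hard-coding them in the scripts is through Docker build arguments; a minimal sketch, with placeholder values of our own choosing:

# build-time parameters; override with docker build --build-arg NAME=value
ARG SSHUSER=builder
ARG MQIMAGEPATH=/home/builder/mq
ARG MQIMAGE=mq-8.0.0.2-install.tar.gz
# ARG values are visible to subsequent RUN instructions during the build,
# so install.sh can read SSHUSER, MQIMAGEPATH, and MQIMAGE directly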
Configuring the Installed Products
One of the Dockerfile excerpts above includes a COPY instruction to add .mqsc files from the build host to the image. A similar step is present in the official MQ Dockerfile, allowing for the automatic configuration of the queue manager created in the container. The management script provided with the official MQ Dockerfile contains the following snippet, which is executed right after the newly created queue manager is started:
# Turn off script failing here because of listeners failing the script
set +e
for MQSC_FILE in $(ls -v /etc/mqm/*.mqsc); do
  runmqsc ${MQ_QMGR_NAME} < ${MQSC_FILE}
done
set -e
This causes all .mqsc scripts copied over from the Docker host to be run against the new queue manager, giving the builder of the image flexibility in customizing the configuration. In a similar spirit, a command could be added to the iib_manage.sh script shipped with the Dockerfile for IBM Integration Bus to call a user configuration script after starting the new integration node, or the entire setup procedure could be moved to a separate initialization script. The scripts attached to this post include iib-init.sh, which we call from our modified iib-manage.sh script, and which in turn calls iib-config.sh to perform some configuration of the IBM Integration Bus runtime.
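A sketch of such a hook in the management script, assuming the user configuration script is shipped to the image as /usr/local/bin/iib-config.sh:

# start the integration node, then run the user configuration script, if any
mqsistart ${NODENAME}
if [ -x /usr/local/bin/iib-config.sh ]; then
  /usr/local/bin/iib-config.sh
fi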
As our updated IBM Integration Bus Dockerfile builds on the mq:8006 image (for which we are also including the Dockerfile), we've made the iib-manage.sh script also call mq-init.sh from the underlying image, performing the necessary MQ setup steps.
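In outline, the resulting initialization sequence looks as follows (a sketch only; the full scripts are included in the attachment):

# from our modified iib-manage.sh
/usr/local/bin/mq-init.sh    # create/start the queue manager, run the *.mqsc scripts
/usr/local/bin/iib-init.sh   # create/start the integration node, call iib-config.sh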
When needed, it is also possible to manually execute administrative commands on either MQ or IIB as follows:
docker exec -ti myNode /bin/bash -c dspmq
docker exec -ti myNode /bin/bash -c mqsilist
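The same mechanism can drive runmqsc for ad-hoc MQ administration; for example, to list the local queues of the QMGR1 queue manager used throughout this post:

docker exec -ti myNode /bin/bash -c "echo 'DISPLAY QLOCAL(*)' | runmqsc QMGR1"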
Docker Data Volumes and Volume Containers
Docker provides a means to persist data beyond the lifetime of a container through volumes and volume containers. Their significance in our use case stems in part from the fact that the MQ queue manager data directory must be stored on a volume or a volume container, and doing the same for the Integration Node data directory can also be useful. A container can be made to use a volume using the VOLUME instruction in a Dockerfile, or the -v or --volume argument to the docker run command.
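Named volumes referenced by docker run are created on first use, but they can also be created and examined explicitly beforehand; for example:

docker volume create mqdata
docker volume create iibdata
docker volume inspect mqdata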
To run the iib:10007 image (which can be built using the Dockerfile supplied with this post under iib/10/runtime, as shown in the section “Building Docker Images Using the Supplied Artefacts”) using volume containers for the Queue Manager and the Integration Node data directories, one might use the following command:
docker run --name myNode -e LICENSE=accept -e QMGRNAME=QMGR1 -p 1414:1414 -v mqdata:/var/mqm -e NODENAME=NODE1 -e SVRNAME=server1 -p 4414:4414 -p 7800:7800 -v iibdata:/var/mqsi iib:10007
This results in the Queue Manager and Integration Node runtime states being persisted to the mqdata and iibdata volume containers, respectively, preserving them beyond the lifetime of the myNode container. In other words, if the myNode container is deleted and a new one is started with a command similar to the one above, it will start from the state in which myNode was shut down.
Note that the new container need not run the exact same runtime (it does, however, have to be able to interpret the stored state); it is possible to start myNode with version 10.0.0.6 of the IBM Integration Bus runtime, later shut it down, and use the saved state in a new container running IBM Integration Bus 10.0.0.7 or 10.0.0.8. This capability makes fix-pack-level migrations trivial.
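For example, assuming an iib:10008 image built along the same lines as iib:10007 (a hypothetical tag; no such Dockerfile is included in the attachment), the migration amounts to:

docker stop myNode
docker rm myNode
docker run --name myNode -e LICENSE=accept -e QMGRNAME=QMGR1 -p 1414:1414 -v mqdata:/var/mqm -e NODENAME=NODE1 -e SVRNAME=server1 -p 4414:4414 -p 7800:7800 -v iibdata:/var/mqsi iib:10008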
Running the IBM Integration Toolkit
A careful inspection of the official IBM Integration Bus Dockerfile reveals that the image is built without installing the IBM Integration Toolkit: the IIB install image is unpacked without the tools subdirectory, in an effort to save disk space. This essentially results in a “runtime-only” image. It is, however, possible to run the Integration Toolkit in Docker, with or without a full GUI, as demonstrated by the Dockerfiles supplied with this post in iib/10/toolkit and iib/10/deploy. Making use of the filesystem layering capability inherent in Docker, one may create a full-featured, Toolkit-capable image by extracting only the tools subdirectory of the IIB installation archive on top of the “runtime-only” image.
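A sketch of the corresponding install step, assuming the IIB archive has already been fetched to /tmp and its name is held in an IIBIMAGE variable (both assumptions of ours):

# extract only the Toolkit subdirectory on top of the runtime-only layers
tar -xzf /tmp/${IIBIMAGE} -C /opt/ibm iib-10.0.0.7/tools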
The iibtk:10007 image can be built from the Dockerfile in the iib/10/toolkit subdirectory of the attachment. This image was designed to use TightVNC for GUI access, and opens port 5901 for this purpose when run with the following command:
docker run --name myNode -e LICENSE=accept -e QMGRNAME=QMGR1 -p 1414:1414 -v mqdata:/var/mqm -e NODENAME=NODE1 -e SVRNAME=server1 -p 4414:4414 -p 5901:5901 -p 7800:7800 -v iibdata:/var/mqsi iibtk:10007
The Integration Toolkit can be started in the running container using the following command, and then used via a VNC connection to port 5901. The default VNC password is set to p4s5w0rd in iib/10/toolkit/install.sh.
docker exec -ti myNode /bin/bash -c "DISPLAY=\":1\" /opt/ibm/iib-10.0.0.7/tools/eclipse"
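With port 5901 published as above, the Toolkit GUI can then be reached from the Docker host with any VNC client, for example the TightVNC viewer:

vncviewer localhost:5901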
Building and Deploying Integrations Using Ephemeral Containers
As mentioned above, the Toolkit can also be used without a full GUI in automated build processes. The mqsicreatebar command, supplied with the Toolkit, can be used to perform a full build of a Toolkit workspace, using the Toolkit in headless mode. Note that the mqsipackagebar command, which has comparable functionality but does not require the Toolkit, is preferred over mqsicreatebar where applicable. In this post, we use mqsicreatebar to demonstrate that the more complex requirements mandated by this command can be satisfied.
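For comparison, an equivalent mqsipackagebar invocation might look like the following (a sketch only, mirroring the paths and application name of the build script shown later; mqsipackagebar imposes its own constraints on the resources being packaged):

mqsipackagebar -a ~/git/mqecho.bar -w ~/git/InterConnect2017/Docker -k MQEcho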
The iibdeploy:10007 image may be built using the Dockerfile provided under iib/10/deploy. It depends directly on iibtk:10007, and its key features are implemented by the following lines in the install.sh script invoked during the image build process:
# edit operational scripts

# instead of starting vncserver in iib-init.sh, run our build script
sed -e "s|^.*USER=mqm vncserver.*\$|/usr/local/bin/build.sh|g" -i /usr/local/bin/iib-init.sh

# delete 6 lines after "monitor() {" in iib-manage.sh
sed -e "/^monitor[(][)] *[{]\$/ {N;N;N;N;N;N;s/^.*\$/monitor\(\) \{/g}" -i /usr/local/bin/iib-manage.sh

The above excerpt invokes sed twice to adjust the operational scripts inherited from the underlying image layer, as defined by iibtk:10007.
The first invocation changes iib-init.sh (recall that this is the script applying generic initialization to the Integration Bus runtime) to execute build.sh, the script defining the build procedure in our new image, instead of starting a VNC server; we don’t need interactive GUI access in this image.
The second invocation removes the 6 lines following the entry point of the monitor() function defined in iib-manage.sh (inherited from the iib:10007 image, and supplied in iib/10/runtime). The 6 lines being removed constitute the main monitoring loop responsible for keeping the container from shutting down before the user chooses to stop it. As a result, the build image performs an orderly shutdown of IIB and MQ as soon as it has completed the build process; hence the “ephemeral” attribute.
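For reference, the loop being removed has the following general shape (a sketch; the verbatim function is in the scripts under iib/10/runtime):

monitor() {
  # keep the container alive until it is explicitly stopped
  while true; do
    sleep 5
  done
}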
As the container is shut down, all runtime state is persisted to the volume containers mqdata and iibdata declared on startup:
docker run --name myNode -e LICENSE=accept -e QMGRNAME=QMGR1 -v mqdata:/var/mqm -e NODENAME=NODE1 -e SVRNAME=server1 -v iibdata:/var/mqsi iibdeploy:10007
Note that the above command does not expose any ports; the container is short-lived and requires no interaction initiated from the outside world. Once it terminates, the runtime state it produced and saved during its lifetime can be used to start a “runtime-only” container, this time with the appropriate port mappings (if you are reusing the name of the old container, myNode in our case, you may first need to delete its remnants using docker rm):
docker run --name myNode -e LICENSE=accept -e QMGRNAME=QMGR1 -p 1414:1414 -v mqdata:/var/mqm -e NODENAME=NODE1 -e SVRNAME=server1 -p 4414:4414 -p 7800:7800 -v iibdata:/var/mqsi iib:10007
The build process that we’ve implemented in iibdeploy:10007 is described in the build.sh script in iib/10/deploy:
# load shared environment variables (SSHUSER, HOSTIP, etc.)
. /usr/local/bin/envvars.sh

# clone the Toolkit workspace from a Git repository on the Docker host
cd ~
mkdir git
cd git
git clone ssh://${SSHUSER}@${HOSTIP}/home/${SSHUSER}/git/InterConnect2017.git

# build the MQEcho application into a BAR file using the headless Toolkit,
# then deploy it to Integration Server server1 on Integration Node NODE1
cd ~/git
xvfb-run mqsicreatebar -data ~/git/InterConnect2017/Docker/ -b ~/git/mqecho.bar -a MQEcho -deployAsSource
mqsideploy NODE1 -e server1 -a ~/git/mqecho.bar -m
We first clone our workspace through Git, then use xvfb-run to start mqsicreatebar, which we use to build the MQEcho application in our workspace, and package it in a BAR file. Finally, we use mqsideploy to deploy the BAR file to the server1 Integration Server defined in the NODE1 Integration Node, both of which should be running at this point.
After the above steps, the Queue Manager as well as the Integration components are shut down, and the container terminated.
Building Docker Images Using the Supplied Artefacts
The archive attached to this post includes a number of Dockerfiles and supporting scripts that may be used to build the hierarchy of Docker images shown in Figure 1.
Fig. 1: Dependencies Among Images Built from the Supplied Dockerfiles
The hierarchy is rooted in the ubuntu:14.04 image publicly available on Docker Hub, on top of which we build our own set of images supporting MQ and IIB.
The dependent images can be built using the following commands invoked from the root directory of the extracted archive, in the order given. Note that the scripts are not expected to work out-of-the-box, as access credentials (e.g., SSH keys), URLs, filesystem paths, and various other parameters will likely need to be configured. Please carefully inspect and update all scripts appropriately before starting the build process.
docker build -t mq:8002 mq/8002
docker build -t mq:8006 mq/8006
docker build -t iib:10007 iib/10/runtime
docker build -t iibtk:10007 iib/10/toolkit
docker build -t iibdeploy:10007 iib/10/deploy
Executing the above should result in the addition of the following images to your local Docker repository:
REPOSITORY  TAG    IMAGE ID      CREATED         SIZE
iibdeploy   10007  28db907c9af2  5 minutes ago   3.6 GB
iibtk       10007  6967a9ab4e68  6 minutes ago   3.6 GB
iib         10007  59628a31c333  31 minutes ago  2.69 GB
mq          8006   a12a10cfbbdc  34 minutes ago  1.75 GB
mq          8002   5a2b364d2276  38 minutes ago  803 MB
ubuntu      14.04  b969ab9f929b  2 months ago    188 MB
Once the images have been built, they can be run using the example commands given above in the article.


12 comments on "IBM Integration Bus and Docker: Tips and Tricks"

  1. Jason Morrison December 05, 2017

    Any guidance on amending the base Dockerfile to create a user using the mqsiwebuseradmin command?

  2. Srikanth_86 October 03, 2017

    Hello,

    I was trying to implement docker on Windows 7 Enterprise edition and having trouble with host.key file. Could you please let me know what should be the content of this file? I had generated id_rsa key for Windows using GitHub UI, however not sure on how to generate host.key file. do we need to add host public key to this file? Could you please share an example.

  3. Sam Rogers April 04, 2017

    FYI – sharing resources e.g. IPC namespaces with --ipc, to use bindings connections to a queue manager in a separate QM container is not supported by IBM at this point in time. Best to stick to the described method of running IIB and MQ inside the same container.

  4. Great post. Any insights if IBM will be working to bring Docker to AIX?

  5. ALAN O'NEILL April 04, 2017

    Docker doctrine has always been “one container, one process”. Interested to know if you are aware of any drawbacks of running IIB and MQ in the same container?

    • Geza Geleji April 04, 2017

      Given that MQ and IIB are both already running multiple processes, I would not be too concerned about running the two together. Quote from Best practices for writing Dockerfiles (retrieved 04 Apr 2017, 14:21 BST):

      You may have heard that there should be “one process per container”. While this mantra has good intentions, it is not necessarily true that there should be only one operating system process per container. In addition to the fact that containers can now be spawned with an init process, some programs might spawn additional processes of their own accord. For instance, Celery can spawn multiple worker processes, or Apache might create a process per request. While “one process per container” is frequently a good rule of thumb, it is not a hard and fast rule. Use your best judgment to keep containers as clean and modular as possible.

      Although modularity is slightly hurt by bundling IIB and MQ together, it brings with it an important benefit: the ability to use server bindings to connect the two. If this is not needed, then it may be wiser to use them separately through client connections.

      • BenThompsonIBM April 04, 2017

        Hi. Just to add to what Geza said here, in addition to the current product capabilities outlined here, we are also looking very seriously at the IIB process hierarchy (bipbroker/dataflowengine and the responsibilities of each process) as we develop towards future product releases. For details on our current thinking in this space (and also to get access to test drivers) I’d encourage interested parties to sign up to the IIBvNext Early Access Program.

        • Harisankar G April 11, 2017

          Hello Ben,

          Can you please advise on how to sign up for IIBvNext Early Access Program

          • BenThompsonIBM September 11, 2017

            Hi Harisankar,
            Apologies for the slow response – somehow I missed the auto notification on this comment! In case you haven’t already found out, your local IBM account team can sort this out for you … or drop an email to the Early Access Program manager, Terry Hudson (terry_hudson@uk.ibm.com), and he can guide you through the process.
            Cheers,
            Ben
