MQ Performance in a Docker container

Wed March 04, 2020 12:55 PM

The MQ team has produced a number of blogs on how to get MQ running inside a Docker container, and anyone who has tried it will have found a simple method of rapidly deploying an MQ queue manager across a variety of hardware platforms. But how much do you pay, in terms of performance, for the benefits that Docker containers provide?

This blog will demonstrate the costs involved and what steps can be taken to optimize the performance of MQ within a Docker container. We will also look at the commands required to modify the publicly available code so that you can build a new MQ image to run within a container.

MQ and Docker basics
Firstly, I will assume that you have basic familiarity with Docker and that you’ve read Arthur’s introductory blog here:
https://www.ibm.com/developerworks/community/blogs/messaging/entry/introducing_a_docker…

You may even have followed those instructions to run your own MQ Docker container. Before running MQ within Docker, and especially in relation to these samples, which utilise IBM MQ Advanced for Developers, check the licence conditions, which can be found either by clicking through:
https://www.ibm.com/developerworks/community/blogs/messaging/entry/downloads?lang=en

Alternatively, when running the Docker container in the manner below, set LICENSE=view and view the Docker logs. Only set the environment variable LICENSE=accept once you have confirmed your acceptance of the licence conditions.
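For example, to view the licence text rather than accept it (the container ID passed to docker logs is whatever docker run prints):

$ docker run --env LICENSE=view --detach ibmcom/mq
$ docker logs <container ID>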

The MQ Docker image ibmcom/mq is available from Docker Hub and can be obtained and run as follows:

$ docker pull ibmcom/mq
$ docker run --env LICENSE=accept --env MQ_QMGR_NAME=PERF0 --volume /var/dvm:/var/mqm --publish 1414:1414 --publish 9443:9443 --detach ibmcom/mq

This is a nice easy way to instantiate a simple QM. To run some performance tests, a small number of configuration changes need to be applied to the QM. These can be found in a forked repository of the code used to generate the MQ image above.

Optimize QM for Performance testing
If you are familiar with git, you can retrieve the additional changes, build the image and then run the resulting image in a new container:

$ git clone https://github.com/stmassey/mq-docker mq-docker 
$ cd mq-docker

You should review the Dockerfile at this point, as a user named ‘mqperf’ is created to support client connection authentication when running performance tests; you will want to change at least the password and perhaps the user ID. Channel authentication is disabled in these performance tests. The QM configuration can be viewed in the config.mqsc file.
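I won't reproduce config.mqsc here, but as a rough sketch it applies the kind of MQSC definitions shown below. The object names are purely illustrative, so check the file itself for the real definitions:

* illustrative MQSC only - see config.mqsc in the repository for the actual objects
ALTER QMGR CHLAUTH(DISABLED)
DEFINE LISTENER(PERF.LISTENER) TRPTYPE(TCP) PORT(1420) CONTROL(QMGR) REPLACE
START LISTENER(PERF.LISTENER)
DEFINE CHANNEL(PERF.SVRCONN) CHLTYPE(SVRCONN) REPLACE
DEFINE QLOCAL(REQUEST1) REPLACE
DEFINE QLOCAL(REPLY1) REPLACE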

$ docker build --tag mqperf .
$ docker run --env LICENSE=accept --env MQ_QMGR_NAME=PERF0 --volume /var/dvm:/var/mqm --publish 1414:1414 --publish 1420:1420 --publish 9443:9443 --detach mqperf

I tend to always run in detached mode, so that control passes back to the command line. It's worth noting here that I am mapping a local directory, /var/dvm, which the Docker engine will mount at /var/mqm for use as the storage for messaging logs and data. To view the running container, use the following command.

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                                      NAMES
cb5adeb7d147        mqperf              "mq.sh"             2 seconds ago       Up 1 second         0.0.0.0:1414->1414/tcp, 0.0.0.0:9443->9443/tcp, 1420/tcp   mystifying_northcutt

If you want to run a bash terminal within your container to confirm that the QM is indeed running, you can use the container ID as seen above:

$ docker exec -ti cb5adeb7d147 /bin/bash
(mq:9.0.3.0)root@cb5adeb7d147:/# dspmq
QMNAME(PERF0)                                             STATUS(Running)

or you can check the logs to confirm the configuration applied to the QM:

$ docker logs cb5adeb7d147  
…
Monitoring Queue Manager PERF0
QMNAME(PERF0)                                             STATUS(Running)
IBM MQ Queue Manager PERF0 is now fully running
…
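Beyond the logs, you can also query the queue manager directly from inside the container with runmqsc; for example, to check that channel authentication is disabled and that a listener is running on port 1420 (using the container ID from docker ps):

$ docker exec -ti cb5adeb7d147 runmqsc PERF0
DISPLAY QMGR CHLAUTH
DISPLAY LSSTATUS(*) PORT
end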

We earlier published port 1420 on the container, so with the MQ QM now up and running we can run our usual performance tests against the Docker QM in the same way as if we had created a QM on our host machine.
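As a quick sanity check from one of the client machines, you can drive the listener on port 1420 with one of the MQ client samples before starting the full runs. The channel and queue names below are illustrative, so substitute whatever config.mqsc actually defines, and set MQSAMP_USER_ID if your queue manager requires client credentials:

$ export MQSERVER='PERF.SVRCONN/TCP/docker-host.example.com(1420)'
$ export MQSAMP_USER_ID=mqperf
$ /opt/mqm/samp/bin/amqsputc REQUEST1 PERF0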

In a future blog post, we will look at conducting performance tests from within a separate Docker container, but for the results presented below we will use our standard performance testing infrastructure as used for distributed MQ Performance reports.

Test Results
We can run the Request/Responder scenario with clients located on remote machines. This scenario is explained more fully in Section 5.1.2 of the MPA1 performance report, available here:
https://ibm-messaging.github.io/mqperf/

Here are the results from a 2K Non Persistent workload across three different scenarios:

  1. Baseline – QM installed on bare metal xLinux
  2. Docker – QM running within a Docker container
  3. Docker (Host network) – QM running within a Docker container, but using the host networking layer

Using the 2K message size, all scenarios can fully saturate the CPU of the machine, and the Docker results are approximately 61% of the bare metal performance. This can be improved to 76% when host networking is utilized, but doing so loses some of the protection that the Docker networking layer provides.
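For reference, the host network scenario simply swaps the published ports for Docker's host networking stack; a sketch of the equivalent run command for the same mqperf image would be (the --publish mappings are unnecessary in this mode):

$ docker run --env LICENSE=accept --env MQ_QMGR_NAME=PERF0 --volume /var/dvm:/var/mqm --network host --detach mqperf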

Increasing the message size to 20K, we can now saturate the 40Gb network in all three scenarios.

The Docker scenario uses 37% more CPU than bare metal, whereas the Docker container using host networking uses only 16% more. The throughput of the Docker host network scenario is similar to that of bare metal.

Increasing the message size further to 200K, the performance (and CPU) of the Docker scenario using the host network is almost identical to that of bare metal.

In contrast, the encapsulated Docker scenario uses almost 31% more CPU.

We can now look at the persistent scenarios, beginning with the 2K message size:

The graph tells a similar story to the 2K non-persistent case, with a clear differential between the three scenarios: the Docker results are at 73% of the bare metal performance, and switching to the host network increases this to 83%.

Increasing the persistent message size to 20K:

The Docker results are now at 76% of the bare metal performance, and running within Docker utilising the host network increases this to 82%.

Looking at the 200K scenario:

Here the use of the host network has little effect on the throughput achieved within a Docker container: both scenarios run at 90% of the bare metal performance. However, the scenario utilising the host network uses less CPU.

Conclusions
As you can see from the above charts, there is a performance/capacity price to pay for the many benefits that Docker provides when running your messaging scenarios. This equates to approximately one third (in throughput or CPU) for the non-persistent scenarios. If you want to maximize your messaging throughput, at the cost of full container isolation, you can switch to using the host network for your scenario.

If you are running persistent messaging, you may see a degradation of approximately one quarter at your peak throughput unless your message size is very large. Again, switching to the host network will close the gap between the two scenarios, and in those large message scenarios the benefit shows as a reduction in CPU.

Hardware and Software used for this testing
Host – Docker: 17.05; OS: RHEL 7.3; MQ: 9.0.3
Docker container – OS: Ubuntu 16.04; MQ: 9.0.3 Advanced for Developers
Hardware: x3550 M5, 2 x 14-core 2.6GHz, 128GB RAM, 10/40Gb Ethernet, 2 x 120GB SSD (ServeRAID M5210, RAID-0, 4GB cache)

Note that in the very latest version of Docker Community Edition, RHEL is not officially supported as the host platform.
