MQ for z/OS – Simulating InfoSphere Q Replication between z/OS and Linux


Tony Sharkey
Published on 02/03/2018

This is the third and final entry in a short series of performance blogs for MQ on z/OS, drawing your attention to documents relating to performance updates. In this blog we discuss the MQ configuration options, considerations and throughput achieved when using a model that simulates InfoSphere Q Replication replicating data between two systems.

For full details of the configurations and results, please see the document IBM MQ Performance between z/OS and Linux using Q Replication processing model, which is available in the mqperf GitHub repository.

Why are we looking at this?

In the past, the MQ development team has worked closely with the InfoSphere Q Replication team to optimise the MQ subsystems for DB2 replication using local queues with MCA channels between two z/OS queue managers.

Recently we were asked two questions:

  1. What is the best configuration if we replace local queues with shared queues?
  2. What if the remote end is not z/OS but a distributed platform?

The responses to these questions are not just applicable to data replication workloads using InfoSphere Q Replication (QREP), so we thought we would share our findings with the wider community.

Since our focus is primarily MQ, and for simplicity and repeatability, we use a simulation of QREP rather than the full environment involving DB2 logs and so on.

Shared queue configuration

Taking the first question – shared queues – there are many different options for configuring your shared queue environment. For example, do you store the entire message in the Coupling Facility, thereby limiting message sizes; do you offload to Shared Message Data Sets (SMDS) or Db2; or do you use Storage Class Memory (SCM) to provide additional capacity?

The document compares these different options and shows why you should consider the impact of message size on your workload.

  • If message rate is the key metric, i.e. small messages achieving a higher message rate, then storing the entire message in the CF is the best-performing approach. SCM offers capacity benefits and is ideal for this type of workload, where messages are got in sequential order.
  • However, if throughput is the key metric, where larger messages can achieve higher volumes, using SMDS offload with sufficient buffer allocation provides the highest throughput in terms of MB per second.
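
For illustration, a shared queue using SMDS offload might be defined with MQSC along the following lines; the structure name, data set group and buffer count shown here are example values only, not the settings used in the report.

  * Application structure at CFLEVEL(5) offloading message data to
  * shared message data sets (SMDS); values below are illustrative
  DEFINE CFSTRUCT(QREPAPP1) +
         CFLEVEL(5) +
         OFFLOAD(SMDS) +
         DSGROUP('MQPERF.QREPAPP1.*') +
         DSBUFS(100)

  * Shared queue mapped to that structure
  DEFINE QLOCAL(QREP.DATA.QUEUE) +
         QSGDISP(SHARED) +
         CFSTRUCT(QREPAPP1)

  * Note: allocating Storage Class Memory (SCM) to the structure is
  * done in the z/OS CFRM policy rather than through MQSC.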

What if the remote end is not z/OS?

In this case, there are two options: the applications running on the remote system connect directly to the z/OS queue manager using an SVRCONN (client) channel, or the applications connect in bindings mode to a distributed queue manager that in turn connects to the z/OS queue manager using MCA channels.

Perhaps surprisingly, both the client and the bindings options have benefits.

The client option offers simplicity and achieves good performance with large messages, but it does show increased cost on z/OS and is impacted more by distance from the z/OS queue manager. Since the client configuration performs logging only on the z/OS queue manager, disk performance on the remote system is less of an issue.
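
In MQSC terms, the client option only requires a server-connection channel on the z/OS queue manager for the remote applications to attach to; a minimal sketch with an illustrative channel name is shown below.

  * Server-connection channel on the z/OS queue manager, used by the
  * applications on the Linux system connecting as MQ clients
  * (channel name is illustrative)
  DEFINE CHANNEL(QREP.SVRCONN) +
         CHLTYPE(SVRCONN) +
         TRPTYPE(TCP)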

The bindings configuration shows lower cost on z/OS and less impact from network latency, because the channels are able to batch messages. However, as the messages were all persistent, our throughput rates were limited by the rate at which the distributed queue manager was able to log the data for a single queue.
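
The bindings option instead relies on MCA channels between the two queue managers. A sketch of an illustrative sender/receiver pair from z/OS to the Linux queue manager follows; the names, connection details and batch size are examples rather than the configuration used in the measurements.

  * On the z/OS queue manager: transmission queue and sender channel.
  * BATCHSZ lets the channel commit several messages per batch, which
  * reduces the impact of network latency.
  DEFINE QLOCAL(LINUX.XMITQ) USAGE(XMITQ)
  DEFINE CHANNEL(ZOS.TO.LINUX) +
         CHLTYPE(SDR) +
         TRPTYPE(TCP) +
         CONNAME('linuxhost(1414)') +
         XMITQ(LINUX.XMITQ) +
         BATCHSZ(50)

  * On the Linux queue manager: the matching receiver channel and the
  * local queue that the replication target application reads from
  DEFINE CHANNEL(ZOS.TO.LINUX) +
         CHLTYPE(RCVR) +
         TRPTYPE(TCP)
  DEFINE QLOCAL(QREP.DATA.QUEUE)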

The following chart shows the rate, in MB per second, at which the application on the Linux machine was able to process messages for a range of message sizes.

The chart shows that the bindings-mode configuration is generally able to process messages faster than the client configuration, but it becomes limited by log rate at around 115 MB/second on this particular Linux partner.

For the client configuration, the impact of the increased flows between client and server for each message is more pronounced with small messages than with large messages. With 1MB messages, there is no log constraint on the Linux partner and the flows between client and server are minimal compared with the actual transportation of the message, so the processing rate is significantly higher.

Conclusion

In summary, shared queues can be used in a QREP configuration with good performance, but the choice between client and bindings may well depend on your configuration and business requirements.

There are many factors to weigh in your final solution – if your test systems are co-located, you may see different performance once you move to a production environment where the two systems may be many hundreds of miles apart.

 
