Previously I have blogged on whether z/OS Connect is a viable alternative to a client connection.
Now that the messaging REST API is supported by the mqweb server in MQ V9.1 (LTS), we have to ask where that support fits in the performance model.
This blog compares 3 configurations. Each configuration runs 10 simulated users from a single client machine, with a request payload of less than 50 bytes and a reply payload of 1KB.
The configurations are as follows:
1. MQI client that connects using TLS cipher “TLS_RSA_WITH_AES_128_CBC_SHA256” to perform a request/reply transaction.
2. Client sending HTTPS requests to a WAS Liberty server that is configured for z/OS Connect v3.0 Enterprise Edition “2-way” service.
3. Client sending HTTPS requests to a WAS Liberty server that is configured for messaging REST APIs.
The messaging REST measurements using the mqweb server consist of 2 variations:
- Using a unique Lightweight Third Party Access (LTPA) token for each request/reply.
- Using a single LTPA token for all request/reply transactions per user.
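The two LTPA variations come down to whether each transaction authenticates afresh or re-uses the token cookie from one login. The sketch below, assuming a hypothetical mqweb endpoint and credentials (the `/ibmmq/rest/v1/login` path follows the documented token-login scheme), shows the mechanics: a per-session cookie jar keeps the LTPA token cookie, so re-using one session is the “single LTPA token per user” case, while creating a new session and logging in per transaction is the “unique token per request/reply” case.

```python
# Sketch only: HOST and the credentials are hypothetical; the login path
# follows the mqweb REST API's documented /ibmmq/rest/v1/login scheme.
import http.cookiejar
import urllib.request

HOST = "https://mqweb.example.com:9443"  # hypothetical mqweb endpoint
LOGIN_URL = f"{HOST}/ibmmq/rest/v1/login"

def new_session():
    """One cookie jar per session: the LTPA token cookie returned by the
    login request is retained and re-sent automatically afterwards."""
    jar = http.cookiejar.CookieJar()
    return urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

def login_request(user, password):
    """Build the token-login request; mqweb returns the LTPA token as a cookie."""
    body = ('{"username": "%s", "password": "%s"}' % (user, password)).encode()
    req = urllib.request.Request(LOGIN_URL, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    return req

# "Unique LTPA token per request/reply": new_session() + login every time.
# "Single LTPA token per user": one session, logged in once, then reused
# for all of that user's request/reply pairs.
```

Re-using the session avoids repeating the token creation on every transaction, which is where the throughput difference between the two variations comes from.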
The costs shown in the following table are for the WAS Liberty server (either z/OS Connect or the mqweb server) or the MQ channel initiator, as the costs in the MQ queue manager address space and batch servers are similar and form a much less significant proportion of the total cost.
In the case of the z/OS Connect and mqweb configurations, these workloads are typically 99% eligible for offload to specialty processors (zIIP or “zIIP on zAAP”).
| | MQI Client | z/OS Connect | mqweb |
| --- | --- | --- | --- |
| Cost after offload (assuming all eligible work is offloaded) | | | |
- Transaction rates are the combined rates achieved for the 10 simulated users.
- There are no “think-time” delays between requests.
- Round-trip time is per simulated user.
- We have seen no significant degradation in response time when there is sufficient network bandwidth and CPU on the test machines.
- Costs shown are CPU microseconds per request/reply on z14 (equivalent of 3906-706) running z/OS v2r3.
- The MQI client in this example is running a sub-optimal configuration, as it connects and disconnects for each request/reply.
- The secret key negotiation process is relatively lengthy and expensive on the client, to the point that the client was CPU constrained.
- Connecting each iteration using cipher “TLS_RSA_WITH_AES_128_CBC_SHA256” accounts for 0.97 of the 1.09 CPU milliseconds per transaction; a long-running connection would cost much less and potentially allow higher throughput.
- The z/OS Connect configuration uses a single process for each simulated user, which allows some degree of session reuse.
- When starting a separate process for each request, the session negotiation time is increased by approximately 180 milliseconds, significantly increasing the round-trip time to 205 milliseconds.
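The figures quoted above make the TLS reconnect overhead easy to split out: a back-of-envelope subtraction, using only the numbers in the text.

```python
# Split of the MQI client cost (figures from the text; all values in
# CPU milliseconds per request/reply).
total_cost = 1.09        # measured cost with a TLS connect per iteration
tls_connect_cost = 0.97  # portion attributed to connecting each iteration

steady_state_cost = round(total_cost - tls_connect_cost, 2)
print(steady_state_cost)  # 0.12 ms once the connection is long-running
```

In other words, roughly 89% of the per-transaction client cost in this configuration is the repeated TLS connect, which is why a long-running connection is so much cheaper.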
Using the z/OS Connect configuration, there is only a single HTTP POST request that provides the reply message in the response.
When using the mqweb server, it was necessary to make both an HTTP POST request to put the data to the MQ queue and an HTTP DELETE request to get the reply message.
Needing to make multiple requests to achieve the same resulting reply message payload in the mqweb configuration adds both cost and latency to the round trip; in this case the effect is 3 times that of the z/OS Connect configuration.
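The two-call mqweb sequence can be sketched as below. The host, queue manager and queue names are hypothetical; the URL shape follows the documented `/ibmmq/rest/v1/messaging` scheme, where a POST puts a message and a DELETE destructively gets one, and POST/DELETE calls require the `ibm-mq-rest-csrf-token` header.

```python
# Sketch only: endpoint, queue manager (QM1) and queue names are
# illustrative; paths follow the mqweb messaging REST API scheme.
import urllib.request

BASE = "https://mqweb.example.com:9443/ibmmq/rest/v1/messaging"

def message_url(qmgr, queue):
    """Messaging REST URL for one queue on one queue manager."""
    return f"{BASE}/qmgr/{qmgr}/queue/{queue}/message"

def put_request(qmgr, queue, payload: bytes):
    """HTTP POST: put the request message to the request queue."""
    req = urllib.request.Request(message_url(qmgr, queue),
                                 data=payload, method="POST")
    req.add_header("ibm-mq-rest-csrf-token", "anyvalue")  # required on POST/DELETE
    req.add_header("Content-Type", "text/plain;charset=utf-8")
    return req

def get_reply_request(qmgr, queue):
    """HTTP DELETE: destructively get the reply message from the reply queue."""
    req = urllib.request.Request(message_url(qmgr, queue), method="DELETE")
    req.add_header("ibm-mq-rest-csrf-token", "anyvalue")
    return req

# One round trip is two HTTP calls, versus z/OS Connect's single POST:
#   opener.open(put_request("QM1", "REQUEST.QUEUE", b"ping"))
#   opener.open(get_reply_request("QM1", "REPLY.QUEUE"))
```

Each extra HTTP call carries its own TLS, authentication and servlet costs, which is where the roughly 3x difference against the single-POST z/OS Connect flow comes from.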
There will be situations where both z/OS Connect and the messaging REST API support in the mqweb server are viable alternatives to an MQ Client from a usability perspective; in all measurements, the round trips had sub-second response times. Many modern scripting languages with no native MQ client, such as Node.js, Swift and Go, have rich support for REST API processing.
It should be noted that there is certain functionality that the MQ Client possesses that z/OS Connect and the mqweb server cannot replicate, such as transactional considerations, which may influence the decision of which configuration to use.
The MQ Client application was limited by CPU on the client partner due to the cost of secret key negotiation. Without TLS-cipher protection, the MQ Client is able to process more transactions per second, scaling well until the network bandwidth limits are approached.
The z/OS Connect measurements show a less aggressive increase in throughput, but were able to continue scaling with more requester tasks until they too were hit by network constraints.
The mqweb server configuration sees a significant improvement in throughput when the LTPA token is reused.
In an environment where network latency is high, MQ client performance may drop, as there are a number of flows between the client and the server. The REST API may be less affected by network latency, as there are typically fewer flows between the requester and the WAS Liberty server.
The costs observed in the client configuration can be significantly reduced if the client is able to connect once, potentially open the queues once, then process multiple messages before closing queues and disconnecting. As a guide, more than 70% of the small message cost in the MQ channel initiator is related to MQCONN and MQOPEN, an overhead which rises further when SSL/TLS encryption is used on channels.
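The 70% guideline above implies a simple amortization model: the MQCONN/MQOPEN overhead is paid once per connection, and the remaining ~30% per message. This is a sketch of that reasoning, not a measurement; the 70/30 split is taken from the text and the cost units are relative.

```python
# Amortization model, assuming (per the text) ~70% of the per-transaction
# channel-initiator cost is MQCONN/MQOPEN overhead that a long-running
# connection pays only once.
def cost_per_message(total_cost: float, msgs_per_connection: int,
                     connect_fraction: float = 0.7) -> float:
    """Per-message cost when one connect/open is amortized over n messages."""
    connect_cost = total_cost * connect_fraction        # paid once per connection
    steady_cost = total_cost * (1 - connect_fraction)   # paid for every message
    return connect_cost / msgs_per_connection + steady_cost

print(cost_per_message(1.0, 1))             # 1.0 - full cost paid every time
print(round(cost_per_message(1.0, 100), 3)) # 0.307 - overhead all but disappears
```

So a client that holds its connection open for even a hundred messages cuts the channel-initiator cost per message to roughly a third, before any further saving from avoiding repeated TLS negotiation.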
Further cost savings can be made in the MQ Client configuration by suppressing the CSQX511I and CSQX512I messages using the “SET SYSTEM EXCLMSG(X511,X512)” command. This saving was of the order of 130 microseconds per transaction. To put this into context, if these X511/X512 messages were suppressed, the client transaction cost reduces to 880 microseconds, compared with the z/OS Connect cost of 55 microseconds and 153 microseconds for the REST API (based on all eligible code running on zIIP).