Authors: Paul Titheridge and Pete Siddall
While there are restrictions, with careful design application server activation specifications can work reliably with a highly available IBM MQ messaging configuration, giving reliable and available message-driven bean (MDB) processing. Here we look at the pitfalls, and at the technologies that can be used to mitigate them.
There are two types of affinity to consider here. The first applies during active processing. The activation specification mechanism is multi-threaded: a ‘browse’ thread finds new work, which is handed off, by reference, to a ‘worker’ thread. A similar situation occurs where the activation specification is configured against a topic: the ‘browse’ and ‘worker’ threads share a subscription so that they see the same set of published messages. Both of these threads must connect to the same queue manager. The second affinity applies during restart recovery from failures where global transactions are involved: the application server must reconnect to the same resource managers (for MQ, the same queue manager instance) to correctly resolve any global transactions that were in doubt at the time of failure.
Using port sprayers or load balancers, such as smart routers, is not supported, as they can route connections to different queue managers without MQ or the application server being aware of it, breaking the affinities described above. Increasing the value of SHARECNV can help here, as it reduces the need to create new connections (channel instances); this may be a viable solution where there is a known workload and only local transactions are involved. However, it can also lead to performance issues, as many conversations may end up sharing a single connection.
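As a sketch, SHARECNV is set on the server-connection channel that the activation specification connects through; the channel name and value below are illustrative:

```
* Allow up to 999 conversations to share each channel instance,
* reducing the number of separate channel instances (connections) needed
DEFINE CHANNEL('APP1.SVRCONN') CHLTYPE(SVRCONN) TRPTYPE(TCP) SHARECNV(999)
```

The matching client-connection channel definition negotiates its own SHARECNV value with the server; the lower of the two values is used.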
Use MQ CCDTs: When the activation specification starts up, it parses the CCDT and finds a channel definition to use. All server sessions that are subsequently created for use with that activation specification reuse the same channel definition. This ensures that the main activation specification thread (the ‘browse’ thread) and the threads running its server sessions (the ‘worker’ threads) all connect to the same queue manager.
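As an example, in WebSphere Liberty an activation specification can be pointed at a CCDT through the IBM MQ resource adapter's ccdtURL property. This is a minimal sketch; the IDs, destination name, and file path are illustrative, not part of the original article:

```xml
<featureManager>
    <feature>wmqJmsClient-2.0</feature>
    <feature>mdb-3.2</feature>
</featureManager>

<!-- Activation spec for the MDB; the CCDT supplies the channel definition -->
<jmsActivationSpec id="myApp/MyMDB">
    <properties.wmqJms destinationRef="jms/requestQueue"
                       transportType="CLIENT"
                       ccdtURL="file:///var/mqm/ccdt/AMQCLCHL.TAB"/>
</jmsActivationSpec>

<jmsQueue id="jms/requestQueue">
    <properties.wmqJms baseQueueName="REQUEST.QUEUE"/>
</jmsQueue>
```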
Use a connection list within a CCDT entry for the different IP addresses where a queue manager can be started: Various technologies exist to restart an instance of an MQ queue manager in a different place with a different network address, e.g. a multi-instance queue manager (MIQM), the MQ Appliance, or container management software such as Kubernetes. In this scenario, the relevant client connection channel definitions in the CCDT should be set up so that the connection name (CONNAME) contains entries for the different IP addresses hosting the queue manager instance. When the activation specification starts up, it parses the CCDT, finds an entry, and then uses the connection name information to connect to the active queue manager instance.
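A client-connection channel definition of the following form produces the required CCDT entry; the client tries each address in the CONNAME list in turn until it finds the active instance. Host names, port, and names here are illustrative:

```
* Both addresses resolve to instances of the same queue manager, QM1
DEFINE CHANNEL('QM1.SVRCONN') CHLTYPE(CLNTCONN) TRPTYPE(TCP) +
       CONNAME('hosta.example.com(1414),hostb.example.com(1414)') +
       QMNAME(QM1)
```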
When the queue manager fails over and restarts, the application server must reconnect to the same queue manager, now live at a different address; the connection name list ties the single queue manager name to its possible IP addresses.
Do not use queue manager groups within the CCDT: after a failure the application server must resynchronize with the queue manager it was using before, but a queue manager group allows the reconnection to be made to a different queue manager.
Specific z/OS technologies
When the queue manager is running on z/OS, function in MQ and the Sysplex can be combined for a particularly resilient solution. An application server running on z/OS and using local bindings must bind to a queue manager on the same LPAR; internal logic then ensures that all connections are to the same queue manager instance.
Where the application server is running off platform:
- Configure the MQ CCDT to target a single DVIPA for the connection, and use the queue sharing group name instead of any individual queue manager in that group.
- Use INDISP(QMGR) listeners to associate the queue sharing group queue managers with the DVIPA configured in Sysplex Distributor and targeted by the CCDT defined at the application server. QSGDISP(GROUP) SVRCONN channel definitions ensure that wherever the client is directed by Sysplex Distributor there will be a matching SVRCONN channel definition available. See https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.pro.doc/q003720_.htm#q003720___SharedInboundChannels
- Configure the queue managers to use GROUPUR. This enables resynchronization to be performed with any surviving peer queue manager running in the Sysplex. See https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_9.0.0/com.ibm.mq.pro.doc/q004250_.htm
- Use VIPADISTRIBUTE TIMEDAFFINITY to ensure that all connections from a particular application server are routed to the same queue manager. See https://www.ibm.com/support/knowledgecenter/en/SSLTBW_2.1.0/com.ibm.zos.v2r1.halz001/vdyvipadistributestatement.htm
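Taken together, the z/OS side of this configuration might look like the following sketch. The channel name, queue sharing group, port, and addresses are illustrative, and the exact operands should be checked against the MQ for z/OS and z/OS Communications Server documentation for your release. On each queue manager in the queue sharing group:

```
* A group-scoped SVRCONN, so a matching definition exists wherever
* Sysplex Distributor routes the client
DEFINE CHANNEL('QSG1.SVRCONN') CHLTYPE(SVRCONN) TRPTYPE(TCP) QSGDISP(GROUP)

* A queue-manager-scoped listener on the shared DVIPA port
START LISTENER TRPTYPE(TCP) PORT(1414) INDISP(QMGR)

* Enable group units of recovery, so any surviving peer in the queue
* sharing group can resolve in-doubt transactions
ALTER QMGR GROUPUR(ENABLED)
```

And in the TCP/IP profile on the distributing stack, a DVIPA with timed affinity so that all connections from one application server are routed to the same queue manager:

```
VIPADYNAMIC
  VIPADEFINE 255.255.255.0 203.0.113.9
  VIPADISTRIBUTE DEFINE TIMEDAFFINITY 1200 203.0.113.9 PORT 1414 DESTIP ALL
ENDVIPADYNAMIC
```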