If IMS Connect and IMS-based applications are critical to your business operations, you’ll know that downtime often equates to lost revenue or a missed service-level agreement. In this article we show you two strategies, built on IMS Connect Extensions rules-based routing, that help you avoid downtime and achieve higher system availability. With these strategies you can better manage ongoing growth in client workload whilst preserving performance, avoiding bottlenecks, and removing the interdependency between client applications and the underlying IMS systems.

A forgotten optimization: The connections between IMS Connect and IMS

The first place that most will look when seeking high availability is the TCP/IP network that sends and receives messages. Here, availability is typically achieved with static or dynamic virtual IP addresses (VIPA), sysplex distributor for TCP/IP load balancing and failover, and the use of shared ports. While sysplex distributor helps meet the challenges of managing your TCP/IP network, it does not manage the cross-system coupling facility (XCF) connections between IMS Connect and IMS. On this side of the equation, a single IMS connection (DATASTORE or ODBM target) can easily become a performance bottleneck. Just as adding TCP/IP ports can help you mitigate connection bottlenecks (recall that each port in IMS Connect is a z/OS Task Control Block (TCB)), we can add connections between IMS Connect and IMS to reduce the chances of a bottleneck.

Adding additional connections between IMS Connect and IMS to reduce the chances of a bottleneck

For OTMA workloads, clients that make requests targeting IMS will specify the name of the IMS DATASTORE they want to use in the IRM IMS Dest ID field. If we want to take proper advantage of additional DATASTORE connections, we can use strategies provided by IMS Connect Extensions rules-based routing to automatically distribute workloads to alternate DATASTORE connections (and not just the one that has been coded into the client’s IRM). A similar advantage can be gained when managing Open Database workloads that use ODBM. In this scenario, the client will specify the connection using an alias supplied in the DRDA message.

Workload agility through rules-based routing in IMS Connect

IMS Connect Extensions supports two key distribution methods:

  • primary and fallback
  • workload balancing

The mechanism for managing these methods is called rules-based routing; with routing rules you can use a mixture of these core techniques to optimize the use of your IMS assets.

Primary and fallback

In primary and fallback, IMS Connect Extensions routes messages to a primary IMS unless that IMS is unavailable (for example, it has been taken down for maintenance). If it is unavailable, workload is rerouted to a designated fallback IMS.

Whilst this technique is primarily useful for maintenance or unplanned outages, it is also useful when attempting to avoid flood conditions. If our primary IMS signals a flood warning, IMS Connect Extensions will automatically move work to the fallback.

IMS Connect Extensions routes messages to a primary IMS unless that IMS is unavailable. If it is unavailable, workload is rerouted to a designated fallback.
IMS Connect Extensions primary and fallback routing

In the example above, our client request contains an IMS destination ID (DATASTORE) of DSLOC1. The rule we have developed for fallback routing matches any incoming workload targeting DSLOC1, but it alters the destination to DSLOC2 as needed to route traffic to the fallback. What this means is that you don’t have to change your client application to make this work. The client is, in fact, completely unaware of what just happened inside IMS Connect. This technique can also be used for Open Database workloads that communicate via ODBM.
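The routing decision itself can be sketched in a few lines of Python. This is an illustrative model only, not IMS Connect Extensions configuration or API: the datastore names, the status table, and the `FLOOD` and `AVAILABLE` states are all hypothetical.

```python
# Hypothetical status of each DATASTORE connection; in the product this
# information comes from IMS Connect itself, not a hand-coded table.
DATASTORE_STATUS = {
    "DSLOC1": "FLOOD",      # primary is signalling a flood warning
    "DSLOC2": "AVAILABLE",  # designated fallback
}

def route_with_fallback(requested, primary="DSLOC1", fallback="DSLOC2"):
    """Route a request to the primary unless it is down or flooded."""
    if requested != primary:
        return requested  # rule only matches workload targeting the primary
    if DATASTORE_STATUS.get(primary) == "AVAILABLE":
        return primary
    return fallback  # client never learns the destination was altered

print(route_with_fallback("DSLOC1"))  # -> DSLOC2 while DSLOC1 is flooded
```

The key property mirrored here is that the client still sends DSLOC1 in its IRM; only the rule's answer changes when the primary's state changes.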

Workload balancing

The second way we can take advantage of additional connections to improve availability is by automatically balancing workload across multiple IMS systems. In this scenario, we simply distribute requests evenly across a set of connections.

IMS Connect Extensions distributes requests evenly across a set of connections.
IMS Connect Extensions workload balancing

In this example, we have three DATASTORE connections to three IMS systems. The incoming workload is requesting DATASTORE DSLOC1, but our routing rule will distribute this work to DSLOC1, DSLOC2, and DSLOC3. Just like the primary and fallback method, the client has no idea that the underlying IMS topology refers to three systems and not just the one that it originally requested. Again, you could equally apply this strategy to Open Database and ODBM.

IMS Connect Extensions also gives you the option to adjust the balance. If you want more work to go to a particular system, simply increase its capacity weight rating. In the following example, three systems share the work in a 50:20:30 split. DSLOC1 has the most overall capacity, so we have assigned it a weight of 50, leaving the other systems to pick up the remaining work.

IMS Connect Extensions distributes requests across a set of connections according to capacity weight rating.
IMS Connect Extensions workload balancing with capacity weight rating

But what if conditions change? What if, at a certain time of day, you need to dynamically move the work somewhere else? You can configure the capacities in several ways: change them according to a schedule, or allow an operator to adjust them manually on demand. You can also change the capacities in response to other conditions as they arise, for example by using your existing automation to initiate a batch process (via a REXX EXEC, say) triggered by a specific IMS Connect HWS message or by the incoming workload itself. This approach is also effective if you have peak periods where you need to direct most of your workload to your fastest (but most expensive) systems, while during off-peak times you are happy to use your less responsive systems to reduce costs.
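Capacity-weighted balancing with a time-of-day schedule can be sketched as follows. Everything here is illustrative: the weight tables, the peak window, and the function names are hypothetical stand-ins for what you would define in routing rules and automation, not product syntax.

```python
import random

# Hypothetical weight tables: favour the fast system during peak hours,
# shift work to cheaper systems off-peak.
PEAK_WEIGHTS     = {"DSLOC1": 50, "DSLOC2": 20, "DSLOC3": 30}
OFF_PEAK_WEIGHTS = {"DSLOC1": 20, "DSLOC2": 40, "DSLOC3": 40}

def current_weights(hour):
    """Pick the weight table for the hour of day (peak assumed 09:00-17:00)."""
    return PEAK_WEIGHTS if 9 <= hour < 17 else OFF_PEAK_WEIGHTS

def route_weighted(hour, rng=random):
    """Choose a connection with probability proportional to its weight,
    so a weight of 50 attracts roughly half of the incoming requests."""
    weights = current_weights(hour)
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]
```

With the 50:20:30 table above, DSLOC1 receives about half of the peak-hour traffic on average; swapping in the off-peak table at 17:00 shifts that share without any change to the clients.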

Decoupling via virtual IMS connections

Whilst reviewing these routing techniques, you may have inferred that the name of the DATASTORE or ODBM target supplied by the client doesn’t actually need to match one defined to IMS Connect. Because IMS Connect Extensions can map between what the client has requested and what is truly available, it sets the stage for completely decoupling the client from the underlying systems. With IMS Connect Extensions routing rules, we can develop client applications that no longer refer to a specific connection name but instead refer to a logical or “virtual” connection defined in the rule itself.

IMS Connect Extensions distributing workload between two IMS connections to a single IMS to avoid bottlenecks.
Decoupling the client from the underlying IMS topology

In the example above, the client is requesting GRP1 and the rule distributes that workload across DSLOC1 and DSLOC2. Note that there is no physical DATASTORE connection named GRP1. A decoupled system allows us to make changes to our IMS environment without affecting clients. For logical destination IDs, you can now use any name you wish, for example, PAY, or BANK, or MOBILE, or WEB to categorize the type of work that is being performed rather than the name of a DATASTORE connection you are targeting for your request.
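The mapping from a logical name to real connections can be sketched like this. The `VIRTUAL_DESTINATIONS` table and `resolve` function are hypothetical illustrations of the concept; in the product the mapping lives in the routing rule definition.

```python
import itertools

# GRP1 is a logical name defined only in the rule; it maps to two real
# DATASTORE connections that share the work in turn.
VIRTUAL_DESTINATIONS = {
    "GRP1": itertools.cycle(["DSLOC1", "DSLOC2"]),
}

def resolve(destination):
    """Map a logical destination to a physical connection; pass real
    connection names through unchanged."""
    group = VIRTUAL_DESTINATIONS.get(destination)
    return next(group) if group else destination
```

Because clients only ever name GRP1, you can add, remove, or repoint the physical connections behind it without touching any client application.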

Another thing to notice in this scenario is the use of two data stores pointing at the same IMS. Having two connections to one IMS gives us the potential for improved performance because each DATASTORE definition in IMS Connect is a separate z/OS Task Control Block (TCB), so a single datastore is less likely to become a bottleneck into that IMS. In any of the previous scenarios we could likewise have defined two connections per IMS system, giving us more “plumbing” into IMS in the event that one pipe becomes slow or blocked.

The benefits of a balanced workload

A balanced workload reduces the chance of flooding a single IMS system, which might otherwise cause an outage. Things become more predictable and your systems become more robust.

There is also a potential for improved performance and a reduced chance of bottlenecks with additional IMS connections. Improved performance doesn’t just mean that the client application receives a response faster – it means you can get more done on your system in a shorter amount of time. You can increase your workload capacity.

We’ve also seen how the use of fallback IMS connections can be beneficial. We can configure the environment to maintain availability during scheduled and unscheduled outages, and actively try to avoid flood conditions by moving workload to a fallback.

The IMS Tools playlist on the IBM Z YouTube channel hosts several instructional videos that show you how to get the most out of IMS Connect and IMS Connect Extensions. To learn more about the routing techniques discussed in this article (and the tools you can use to monitor their performance), watch the IBM Z YouTube video IMS Connect Extensions workload routing techniques. The same video is also available on IBM MediaCenter.
