IBM Z APM Connect provides the ability to monitor transactions in z/OS subsystems and provide that monitoring data to AppDynamics. This allows customers who have AppDynamics in their enterprise to achieve true end-to-end visibility of their business applications, including the pieces of those applications that run on the mainframe.

Figure 1 shows a simple application composed of an HTTP Server, a WebSphere Server, two CICS regions, and Db2 on z/OS. The HTTP Server and WebSphere Server are being monitored by AppDynamics agents. To enable visibility into the two CICS regions and Db2, these parts of the application must be monitored by IBM Z APM Connect. You can see that there are agents in each of the regions, a single Z APM Connect Container per LPAR, and a Z APM Connect Transaction Tracking Gateway. The Z APM Connect Transaction Tracking Gateway takes all of the z/OS transaction information and sends it on to the AppDynamics controller.

Figure 1 – Z APM Connect Deployment

One question I receive when showing the required components is “How many Transaction Tracking Gateways do I need?” The answer is one. Then the inevitable follow-up comes: “What about HA?”

The IBM Z APM Connect solution can be deployed for High Availability. This post will show an example of how this is done using the open-source load balancer HAProxy.

This same approach can be taken no matter which load balancer your organization uses. This post will show the specific configuration that was used for HAProxy, but the concepts will apply to any load balancer.

The Transaction Tracking Gateway serves two purposes. First, it receives transaction tracking information from the Z APM Connect Containers deployed on z/OS LPARs and converts these events into AppDynamics calls that augment the existing business application tracking information. Second, it correlates transaction tracking events from different subsystems, across multiple LPARs, that are part of the same business transaction. This correlation of events requires each of the Z APM Connect Containers to use exactly one Transaction Tracking Gateway.

It is important to understand this correlation role that the Transaction Tracking Gateway (TTG) plays: because every event for a given business transaction must reach the same TTG, a pair of TTGs must be configured in an active/backup (active/standby) manner rather than load-balanced across both.

Let’s take a look at how to accomplish this in HAProxy. The following is the most basic way to configure this scenario; these statements go in the /etc/haproxy/haproxy.cfg file:

listen ttg_listen :5455
    mode tcp
    # replace ttgl1.example.com / ttgl2.example.com with your TTG hosts
    server  ttgl1 ttgl1.example.com:5455 check
    server  ttgl2 ttgl2.example.com:5455 check backup

The listen statement tells HAProxy to listen on a particular port, in this case the default TTG listening port of 5455. The mode tcp statement tells HAProxy we want it to operate as a TCP proxy for the TTG; the default mode is HTTP. Finally, we define the two servers for the pair of TTGs in this HA configuration. In this example we have two servers, each running a TTG on the default port of 5455, and the second server is defined as backup. This, along with the fact that there is no balance statement, tells HAProxy to send traffic only to server ttgl1 unless it is unavailable. If ttgl1 is not listening on 5455, HAProxy will send subsequent connections to ttgl2.
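How quickly failover happens depends on the health-check timing, which the bare check keyword leaves at HAProxy’s defaults. If you want faster failover, or want to tolerate brief network blips before declaring a TTG down, the standard inter, fall, and rise parameters on the server line can be tuned. The hostnames below are placeholders, and the specific timing values are only illustrative:

    listen ttg_listen :5455
        mode tcp
        # probe each TTG every 2 seconds; mark a server down after
        # 3 consecutive failed checks, and back up after 2 successes
        server  ttgl1 ttgl1.example.com:5455 check inter 2s fall 3 rise 2
        server  ttgl2 ttgl2.example.com:5455 check inter 2s fall 3 rise 2 backup

With these settings a failed active TTG would be detected in roughly six seconds, after which new connections go to the backup.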

Figure 2 shows how the sample application shown previously changes with the addition of the HAProxy load balancer.

Figure 2 – Z APM Connect with Load Balancer

With this new configuration, if the Active TTG goes down for any reason, subsequent events from the Z APM Containers will automatically be routed to the Backup TTG.

One way to verify that HAProxy is configured correctly is to look at the haproxy_stats page. In Figure 3 you can see that ttgl1 and ttgl2 are both defined. In the Server portion of the table, you can see that ttgl1 is the Active server (Act) and ttgl2 is the Backup server (Bck). At the time this was captured, ttgl2 was down.
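The stats page is not enabled by default. A minimal way to turn it on is to add another section to haproxy.cfg; the port and URI below are arbitrary choices, and in a production environment you would also add authentication and restrict which hosts can reach this page:

    listen haproxy_stats
        bind :8404
        mode http
        stats enable
        stats uri /haproxy_stats
        stats refresh 10s

After reloading HAProxy, browsing to port 8404 at /haproxy_stats on the load balancer host shows a table like the one in Figure 3.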

Figure 3 – HAProxy stats

As stated previously, this is the most basic way to configure HAProxy to run a pair of TTGs in active/backup mode to provide high availability. In a production environment, you would also want redundancy in the load balancer itself. To configure a pair of HAProxy load balancers, see the HAProxy Architecture Guide.
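One common pattern for load-balancer redundancy is to run HAProxy on two hosts and float a virtual IP between them with keepalived (VRRP); the Z APM Connect Containers then point at the virtual IP instead of a single HAProxy host. As a rough sketch of what the keepalived.conf on the primary node might look like — the interface name, router ID, priorities, and addresses here are all assumptions you would replace with your own:

    vrrp_instance TTG_VIP {
        state MASTER              # use BACKUP on the peer node
        interface eth0            # network interface that carries the VIP
        virtual_router_id 51
        priority 101              # peer node uses a lower value, e.g. 100
        advert_int 1
        virtual_ipaddress {
            192.0.2.10/24         # the address the Containers connect to
        }
    }

If the primary HAProxy host fails, the peer takes over the virtual IP and the Containers reconnect without any configuration change on z/OS.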

If you have any questions about configuration of IBM Z APM Connect for high availability, please feel free to contact me.
