Recently we have received a number of questions relating to the performance of zEDC compression when used with archive logs on MQ for z/OS.

Previously we have reported that there was a significant cost impact to the queue manager address space with the enablement of the feature, and this was notable as the cost was not observed in other IBM products such as Db2.

To recap, the primary purpose of zEDC compression on MQ archive logs is to reduce storage occupancy requirements relating to archive datasets.

One of the side effects of using zEDC compression on our systems was that the reduced load on our IO subsystem meant a reduction in “disk fast write bypass” (DFWBP), as reported in the RMF Cache Subsystem Report. This reduction resulted in shorter response times when performing IO for the MQ log writes, which in turn meant that the queue manager was able to support a higher volume of persistent work.

We revisited the archive log measurements, this time driving the MQ log task at a consistent rate regardless of the compressibility of the log data, to determine whether the previously reported cost increase was due to zEDC compression itself, or to the additional volume of work processed by the queue manager.

Having seen that the impact was significantly less than previously reported, we revisited the original measurements on both z13 and z14 and identified an error in the calculations!

In the original measurements, the cost per MB for the baseline runs (where zEDC compression was not available) was calculated using the peak log rate for the measurement rather than the average log rate.

For the larger message workloads in the baseline measurements, there was a significant variance between the average and peak log rates, which meant the impact of zEDC compression on MQ archive logs was significantly overstated, particularly with messages of 32KB or larger.
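The effect of the error can be illustrated with a small sketch. Cost per MB is CPU time divided by the amount of log data written; substituting the peak log rate for the average in the baseline divisor makes the baseline cost per MB look lower than it really was, so the relative impact of zEDC appears larger. All numbers below are hypothetical, chosen only to show the shape of the error, and are not taken from the actual measurements:

```python
# Illustrative sketch of the calculation error; all numbers here are
# hypothetical, not taken from the actual measurements.

def cost_per_mb(cpu_microseconds: float, log_rate_mb_s: float,
                elapsed_seconds: float) -> float:
    """CPU microseconds consumed per MB of log data written."""
    mb_written = log_rate_mb_s * elapsed_seconds
    return cpu_microseconds / mb_written

cpu_us, elapsed = 50_000_000, 100.0   # hypothetical measurement interval
avg_rate, peak_rate = 80.0, 120.0     # average vs peak log rate (MB/s)
zedc_cost = 6875.0                    # hypothetical zEDC cost per MB (us)

baseline_correct = cost_per_mb(cpu_us, avg_rate, elapsed)   # 6250 us/MB
baseline_wrong = cost_per_mb(cpu_us, peak_rate, elapsed)    # ~4167 us/MB

# Dividing by the peak rate understates the baseline cost per MB,
# which overstates the relative impact of zEDC compression:
impact_correct = (zedc_cost / baseline_correct - 1) * 100   # 10%
impact_wrong = (zedc_cost / baseline_wrong - 1) * 100       # ~65%
```

The larger the gap between average and peak log rate, the bigger the overstatement, which is why the large-message workloads were affected most.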

Messages smaller than 32KB are less affected by the compressibility of the message data, because the MQ implementation headers make up a larger proportion of the logged data.
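This header effect can be sketched with a simplified model: if a fixed-size, effectively incompressible header accompanies each logged message, the compressible payload is a smaller fraction of a small log record, so less of the record can be compressed overall. The header size below is hypothetical, not the real MQ value:

```python
# Simplified model: each logged message carries a fixed, incompressible
# header. The header size is hypothetical, not the real MQ value.

HEADER_BYTES = 512  # hypothetical per-message header overhead

def effective_compression(message_bytes: int,
                          payload_compressibility: float) -> float:
    """Fraction of the whole logged record that compression can remove."""
    logged = HEADER_BYTES + message_bytes
    return (message_bytes * payload_compressibility) / logged

# With an 80% compressible payload:
small = effective_compression(1024, 0.80)         # 1KB message: ~53% overall
large = effective_compression(1024 * 1024, 0.80)  # 1MB message: ~80% overall
```

For the large message the header is negligible and the overall compression approaches the payload's 80%, while for the small message the fixed header pulls the achievable ratio down noticeably.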

Re-calculating the impact using the average log rate, with the MQ log task running at capacity and benefitting from the reduced “disk fast write bypass” side effect, gives a range of impact of 0 to 10%, depending on message size and compressibility.

Impact to queue manager cost per MB with increasingly compressible messages using zEDC compression with archive logs.

Notes on chart:

  • The chart shows the percentage increase in cost per MB when varying the compressibility (0 to 80% compressible) of the message data for a range of message sizes.
  • In the worst case, 1MB incompressible messages showed an increase of 10% in the cost to the queue manager per MB of persistent data processed.
  • When the 1MB message was 80% compressible, the impact to the cost per MB was approximately 1.4% over the baseline measurement.
  • With 4MB messages at 80% compressibility, we saw no cost impact.
  • Messages of up to 16KB showed an increase of 2% or less, regardless of message compressibility, thus are not shown on the chart.
  • In these measurements where the log task was being driven to its limits, we saw up to a 94% increase in peak throughput due to the reduced DFWBP.

Recovery and restart

For the purposes of this blog, I will differentiate between recovery and restart, largely because the use of archive logs is different.

  • Recovery, for example using RECOVER CFSTRUCT, or when something relatively catastrophic occurs (such as media recovery being required, or something going wrong with the page sets), may make significant use of the archive logs. Accessing MQ archives that have been compressed is expensive and slow compared to the non-zEDC configuration. The impact on “RECOVER CFSTRUCT” can be alleviated by more frequent back-ups, so that less data needs to be read to recover.
  • Restart, even from failure, makes less use of the archive logs, so it sees less impact from zEDC compression of archive logs and, as a result, little to no impact to queue manager restart times.

Recovery

One area where the original performance has not changed is in the recovery of data from archive logs.

When recovering 4GB of data to shared queues, using a range of message sizes from 64KB to 4MB, the following performance observations were made:

                                   Uncompressed archives   zEDC compression on archives
  Recovery rate (MB/second)                  110                         38
  Cost per MB (CPU microseconds)             740                       1148
  • Recovery rate with compressed archives is approximately 35% of that with uncompressed archives.
  • Recovery cost per MB with compressed archives is approximately 1.55 times that of uncompressed archives.
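A back-of-envelope calculation from the table shows what these rates mean for the 4GB recovery described above, in both elapsed time and CPU time:

```python
# Back-of-envelope recovery estimates for the 4GB of data, derived from
# the measured rates and costs in the table above.

DATA_MB = 4 * 1024  # 4GB recovered to shared queues

def recovery_estimate(rate_mb_s: float, cost_us_per_mb: float):
    """Return (elapsed seconds, CPU seconds) to recover DATA_MB."""
    elapsed_s = DATA_MB / rate_mb_s
    cpu_s = DATA_MB * cost_us_per_mb / 1_000_000
    return elapsed_s, cpu_s

uncompressed = recovery_estimate(110, 740)   # ~37s elapsed, ~3.0 CPU seconds
compressed = recovery_estimate(38, 1148)     # ~108s elapsed, ~4.7 CPU seconds
```

In other words, compressing the archives roughly triples the elapsed time of this recovery, which is why more frequent backups (reducing the amount of archive log that must be read) are worth considering when zEDC compression is enabled.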

Conclusions

  • The impact to the MQ queue manager CPU and SRB time is smaller than previously reported, ranging from 0 to 10%.
  • Both the compressibility of the message and the size of the message can affect how much using zEDC compression will impact the cost to the queue manager, particularly with messages of 32KB or larger.
  • Ensure there are sufficient zEDC features available to the LPARs using compression. In our measurements, we were able to drive a single feature to 70% utilization. Your configuration and use may result in different utilizations, so it may be advisable to measure the usage on test systems with production-like loads, concurrently with the other products using zEDC compression, to ensure sufficient zEDC features are available.
  • In certain environments, the use of zEDC compression can reduce the load on the IO subsystem, reducing disk response times and increasing peak log capacity.

Further reading

The following documents have been updated with the revised costs:
