This section shows results achieved running the different use cases on the zLinux platform.

Before using performance information, be sure to read the general information under Notices.

The results in this section were obtained by running sufficient copies of each message flow so that in most cases the system CPU utilisation was 80% or greater.

Information provided

The results provided in this section include the following performance data:

Message Size:
Records the approximate size of the message that is used as input to the test, not including the message header. This is the size of the XML or equivalent non-XML message payload.
Persistent State:
Indicates whether the messages used in the test are persistent. This state can have one of the following two values:

  • The value Full Persistent is used to indicate that the message tested is persistent.
    • This value is applicable only to WebSphere MQ messages.
    • If a message is persistent, WebSphere MQ ensures that the message is not lost when a failure occurs, by copying it to disk. (A brief sketch after this list shows how a client marks a message as persistent.)
  • The value Non Persistent is used for other types of messages.
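
A minimal, purely illustrative sketch of how a client application could mark an MQ message as persistent is shown below. It is not part of the measurement setup for this report; it uses the pymqi client library, and the queue manager, channel, and queue names are assumptions for the example only.

    import pymqi

    # Hypothetical connection details; substitute your own queue manager, channel and host.
    qmgr = pymqi.connect('QM1', 'DEV.APP.SVRCONN', 'localhost(1414)')
    queue = pymqi.Queue(qmgr, 'TEST.IN')

    # Ask the queue manager to harden the message to the MQ log on disk.
    md = pymqi.MD()
    md.Persistence = pymqi.CMQC.MQPER_PERSISTENT  # MQPER_NOT_PERSISTENT for the Non Persistent case

    queue.put(b'<order><id>1</id></order>', md)

    queue.close()
    qmgr.disconnect()
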
Message Rate:
Indicates the number of round trips or message flow invocations per second.
% CPU Busy:
Indicates the percentage of CPU used on the server machine. This includes the total CPU used by all processes: IBM Integration Bus, the WebSphere MQ queue manager, the database manager, and others. The value is expressed as a percentage of the total CPU capacity of all processors on the server machine.
CPU ms/msg:
Indicates the overall CPU cost per message, that is, the CPU milliseconds per message.

  • You can calculate the CPU cost per message by using the following formula (a worked example follows this list):
    • ((Number of cores * 1000) * (% CPU Busy / 100)) / Message Rate
  • This cost includes IBM Integration Bus, WebSphere MQ, DB2, and any operating system costs.
  • Note: The results are specific to the system from which they were obtained. If you want to project (or predict) message processing capacity for other systems, you must make a suitable adjustment to allow for differences in the capacity of the two systems.
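
As a worked example, the following sketch (not part of the report; the function and variable names are illustrative only) applies the formula above to the 2kB non-persistent Aggregation measurement shown later, using the 4 processors of the server described under Measurement Environment.

    def cpu_ms_per_msg(cores, cpu_busy_pct, msg_rate):
        """CPU milliseconds per message: ((cores * 1000) * (% CPU / 100)) / message rate."""
        return (cores * 1000) * (cpu_busy_pct / 100) / msg_rate

    # 2kB non-persistent Aggregation row: 2497.8 msg/s at 98.2% CPU on a 4-processor LPAR.
    print(round(cpu_ms_per_msg(cores=4, cpu_busy_pct=98.2, msg_rate=2497.8), 1))  # prints 1.6

    # Projecting capacity to another system (see the note above) requires a capacity ratio
    # that you must estimate for your own hardware; the factor below is an assumed placeholder.
    relative_capacity = 1.5  # hypothetical: target system has 1.5x the capacity of this LPAR
    projected_msg_rate = 2497.8 * relative_capacity

The same calculation can be applied to any row in the tables that follow.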

Performance Results

Typically, as the message size increases, the message rate decreases, and the cost of CPU per message increases.
Persistent MQ messages are written to the MQ log on disk. This adds CPU and I/O overhead and reduces the message rate, so the speed of the disk on which the MQ log is configured becomes a key factor. See Tuning for more information.
For details on the measurement environment, see Measurement environment.

When planning a system, it is important to understand the complexities of the processing required so that adequate resources can be provided to meet the requirements of the particular situation.



Aggregation

                    Non Persistent                    Full Persistence
Msg Size    Msg Rate  % CPU Busy  CPU ms/msg    Msg Rate  % CPU Busy  CPU ms/msg
256B          3189.5        97.2         1.2      1709.8        86.6         2.0
2kB           2497.8        98.2         1.6      1552.0        93.7         2.4
20kB          1372.0        95.9         2.8      1094.2        95.0         3.5
200kB          216.3        97.5        18.0       178.2        89.8        20.2
2000kB          17.6        86.4       196.8        12.4        81.0       261.7
20000kB          1.1        77.2      2886.7         0.7        66.2      3629.0

Coordinated Request/Reply

                    Non Persistent                    Full Persistence
Msg Size    Msg Rate  % CPU Busy  CPU ms/msg    Msg Rate  % CPU Busy  CPU ms/msg
256B          3362.4        65.9         0.8      2070.8        86.4         1.7
2kB           2111.6        87.4         1.7      1458.3        91.3         2.5
20kB           555.5        87.9         6.3       437.5        90.9         8.3
200kB           64.6        93.0        57.6        53.9        98.0        72.8
2000kB           6.5        97.7       602.0         6.2        95.1       613.5
20000kB          0.6        99.0      6283.8         0.6        97.5      6844.2

Large Messaging

                    Non Persistent                    Full Persistence
Msg Size    Msg Rate  % CPU Busy  CPU ms/msg    Msg Rate  % CPU Busy  CPU ms/msg
256B         17025.0        95.4         0.2      9158.0        91.2         0.4
2kB          10736.8        95.1         0.4      5938.9        81.0         0.5
20kB          1744.7        95.6         2.2      1291.0        89.5         2.8
200kB          182.6        95.5        20.9       145.3        93.8        25.8
2000kB          17.8        94.9       213.5        14.9        90.7       243.3
20000kB          1.7        91.1      2105.4         1.1        81.8      2974.2

Message Routing

                    Non Persistent                    Full Persistence
Msg Size    Msg Rate  % CPU Busy  CPU ms/msg    Msg Rate  % CPU Busy  CPU ms/msg
256B         38444.1        92.7         0.1     17146.5        82.0         0.2
2kB          16949.0        51.6         0.1     15252.3        73.6         0.2
20kB          2541.6         9.2         0.1      2568.9        16.8         0.3
200kB          282.3         4.1         0.6       280.9         6.4         0.9
2000kB          26.5         3.5         5.2        26.9         5.0         7.5
20000kB          2.4         3.6        61.5         2.3         5.6        96.4

Transforming a message

                    Non Persistent                    Full Persistence
Msg Size    Msg Rate  % CPU Busy  CPU ms/msg    Msg Rate  % CPU Busy  CPU ms/msg
256B         26067.4        92.7         0.1     15391.0        91.0         0.2
2kB          14186.6        93.7         0.3      9764.3        88.4         0.4
20kB          2544.5        95.3         1.5      2241.7        93.4         1.7
200kB          262.8        97.2        14.8       251.0        94.5        15.1
2000kB          25.3        97.3       154.1        24.3        95.6       157.3
20000kB          2.3        97.6      1690.7         2.0        96.6      1893.1

File out and file in

                    Non Persistent
Msg Size    Msg Rate  % CPU Busy  CPU ms/msg
256B          5166.6        64.3         0.5
2kB           5242.6        67.4         0.5
20kB          2639.0        65.2         1.0
200kB          271.9        20.1         3.0
2000kB          23.8        22.6        38.0
20000kB          2.5        23.3       378.2

SOAP Consumer

                    Non Persistent
Msg Size    Msg Rate  % CPU Busy  CPU ms/msg
256B          5790.1        94.4         0.7
2kB           4408.5        94.2         0.9
20kB          1362.1        90.9         2.7
200kB          137.6        79.0        23.0
2000kB          13.1        73.8       225.8
20000kB          0.8        49.1      2516.9

SOAP Provider

                    Non Persistent
Msg Size    Msg Rate  % CPU Busy  CPU ms/msg
256B         13416.4        77.5         0.2
2kB           9408.6        76.6         0.3
20kB          2751.9        82.6         1.2
200kB          283.9        72.6        10.2
2000kB          28.5        79.2       111.0
20000kB          2.5        81.1      1291.8

ISO 8583 Transformation

                    Non Persistent
Msg Size    Msg Rate  % CPU Busy  CPU ms/msg
120B          4311.2        89.3         0.8
134B          3297.7        90.7         1.1
1384B         2126.7        93.1         1.8

RESTful API – Post

                    Non Persistent
Msg Size    Msg Rate  % CPU Busy  CPU ms/msg
256B          5704.2        95.1         0.7
2kB           3522.3        97.1         1.1
20kB           849.2        97.2         4.6

RESTful API – Get

                    Non Persistent
Msg Size    Msg Rate  % CPU Busy  CPU ms/msg
256B          6394.6        95.8         0.6
2kB           4724.5        95.9         0.8
20kB          1440.0        95.4         2.6

Measurement Environment

All throughput measurements were taken on a single server machine. The client type and the machine on which the clients ran varied with the test. The details are given below.

Server Machine

The hardware consisted of:

  • LPAR on an IBM z13 (machine type 2964) with 4 processors.
  • SAN comprising:
    • Brocade 8Gb 80 port switches
    • DS8800 storage system
  • 8GB RAM
  • 1Gb Ethernet Card

The software consisted of:

  • Red Hat Enterprise Linux Server release 7.2
  • WebSphere MQ V7.5.0.5
  • IBM Integration Bus V10.0.0.2
  • DB2 V10.5.0.3

Client Machine

The hardware consisted of:

  • IBM xSeries x3550 M4 with 2 x eight-core Intel(R) Xeon(R) E5-2680 2.7GHz processors, with HyperThreading turned on
  • One 135 GB SCSI hard drive formatted with NTFS
  • 32GB RAM
  • 1Gb Ethernet Card

The software consisted of:

  • Microsoft Windows Server 2008 R2
  • WebSphere MQ V7.5.0.1
  • IBM Java V7

Network Configuration

The client and server machines were connected using a full duplex 1 Gigabit Ethernet LAN with a single hub.
