This section shows the results achieved by running the different use cases on the xLinux 64-bit platform.

Before using performance information, be sure to read the general information under Notices.

The results in this section were obtained by running sufficient copies of each message flow so that in most cases the system CPU utilisation was 80% or greater.

Information provided

The results provided in this section include the following performance data:

Message Size:
Records the approximate size of the message that is used as input to the test, not including the message header. This is the size of the XML or equivalent non-XML message payload.
Persistent State:
Indicates whether the messages used in the test are persistent. This state can have one of the following two values:

  • The value Full Persistent is used to indicate that the message tested is persistent.
    • This value is applicable only to WebSphere MQ messages.
    • If a message is persistent, WebSphere MQ ensures that the message is not lost when a failure occurs, by copying it to disk.
  • The value Non Persistent is used for other types of messages.
Message Rate:
Indicates the number of round trips or message flow invocations per second.
% CPU Busy:
Indicates the percentage of CPU usage on the server machine. This is the total CPU used by all processes (IBM Integration Bus, the WebSphere MQ queue manager, the database manager, and others), expressed as a percentage of the CPU capacity of all processors on the server machine.
CPU ms/msg:
Indicates the overall CPU cost per message, that is, the CPU milliseconds per message.

  • You can calculate the CPU cost per message by using the following formula (a worked example follows this list):
    • CPU ms/msg = ((Number of cores * 1000) * (% CPU Busy / 100)) / Message Rate
  • This cost includes IBM Integration Bus, WebSphere MQ, DB2, and any operating system costs.
  • Note: The results are specific to the system from which they were obtained. If you want to project (or predict) message processing capacity for other systems, you must make a suitable adjustment to allow for differences in the capacity of the two systems.
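
To make the formula concrete, the sketch below (Python, for illustration only; the function name is not from this report) applies it to the Aggregation, 256B, non-persistent result shown later in this section. The server under test has a single 8-core Intel Xeon E7-4820 v2, which is consistent with the published CPU ms/msg figures.

    # Worked example of the CPU ms/msg formula (illustrative sketch).
    def cpu_ms_per_msg(cores, cpu_busy_pct, msg_rate):
        """Overall CPU milliseconds consumed per message on the server."""
        return (cores * 1000 * (cpu_busy_pct / 100.0)) / msg_rate

    # Aggregation, 256B, non-persistent: 1191.6 msgs/sec at 98.7% CPU busy on 8 cores
    print(round(cpu_ms_per_msg(cores=8, cpu_busy_pct=98.7, msg_rate=1191.6), 1))  # prints 6.6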

Performance Results

Typically, as the message size increases, the message rate decreases, and the cost of CPU per message increases.
Persistent MQ messages are written to the MQ log on disk. This adds CPU and I/O cost and reduces the message rate, so the speed of the disk on which the MQ log is configured becomes a key factor; see Tuning for more information. A sketch showing how the persistence attribute is set on a test message follows this paragraph.
For details on the measurement environment, see Measurement environment.
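
The clients used for these measurements are Java-based (see Client Machine below), but as an illustration of what the Non Persistent and Full Persistent states in the tables mean, here is a minimal sketch in Python using the pymqi client library. The queue manager name, channel, connection string, and queue name are placeholders, not values taken from this report.

    # Illustrative sketch only: putting a test message with an explicit persistence setting.
    import pymqi

    qmgr = pymqi.connect('QM1', 'APP.SVRCONN', 'server.example.com(1414)')  # placeholder connection details
    queue = pymqi.Queue(qmgr, 'USECASE.INPUT')                              # placeholder queue name

    md = pymqi.MD()
    md.Persistence = pymqi.CMQC.MQPER_PERSISTENT        # "Full Persistent" runs
    # md.Persistence = pymqi.CMQC.MQPER_NOT_PERSISTENT  # "Non Persistent" runs

    queue.put(b'<test>payload</test>', md)  # a persistent put is hardened to the MQ log on disk
    queue.close()
    qmgr.disconnect()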

When planning a system, it is important to understand the complexities of the processing required so that adequate resources can be provided to meet the requirements of the particular situation.



Aggregation

          Non Persistent                       Full Persistent
Msg Size  Msg Rate  % CPU Busy   CPU ms/msg    Msg Rate  % CPU Busy   CPU ms/msg
256B      1191.6    98.7         6.6           895.3     95.7         8.6
2kB       1136.9    98.5         6.9           872.4     95.8         8.8
20kB      783.5     97.1         9.9           635.4     95.3         12.0
200kB     182.3     95.3         41.8          170.1     96.5         45.4
2000kB    19.3      98.6         407.7         18.3      98.1         427.9
20000kB   1.0       97.5         8125.0        0.9       97.5         8573.2

Coordinated Request/Reply

          Non Persistent                       Full Persistent
Msg Size  Msg Rate  % CPU Busy   CPU ms/msg    Msg Rate  % CPU Busy   CPU ms/msg
256B      1334.9    89.0         5.3           1103.7    85.8         6.2
2kB       1007.3    89.7         7.1           846.7     95.6         9.0
20kB      329.5     96.2         23.4          314.2     96.0         24.4
200kB     45.3      98.9         174.8         44.6      98.7         177.1
2000kB    4.6       96.2         1680.0        4.6       96.2         1680.9
20000kB   0.4       95.0         17278.2       0.5       95.6         15600.0

Large Messaging

          Non Persistent                       Full Persistent
Msg Size  Msg Rate  % CPU Busy   CPU ms/msg    Msg Rate  % CPU Busy   CPU ms/msg
256B      7380.0    95.1         1.0           4520.8    90.3         1.6
2kB       5068.9    95.1         1.5           3212.4    91.7         2.3
20kB      901.2     95.8         8.5           610.4     94.1         12.3
200kB     97.5      95.4         78.3          72.2      94.1         104.2
2000kB    9.3       95.6         826.1         6.9       93.9         1084.4
20000kB   0.9       95.1         8272.2        0.7       91.2         10422.9

Message Routing

          Non Persistent                       Full Persistent
Msg Size  Msg Rate  % CPU Busy   CPU ms/msg    Msg Rate  % CPU Busy   CPU ms/msg
256B      16021.2   94.9         0.5           8363.2    87.7         0.8
2kB       15426.0   93.1         0.5           8121.8    87.7         0.9
20kB      13172.7   91.2         0.6           6901.8    83.1         1.0
200kB     3162.3    53.9         1.4           2009.5    64.5         2.6
2000kB    206.5     32.9         12.8          194.4     62.5         25.7
20000kB   21.0      49.9         190.2         15.7      59.9         305.9

Transforming a message

          Non Persistent                       Full Persistent
Msg Size  Msg Rate  % CPU Busy   CPU ms/msg    Msg Rate  % CPU Busy   CPU ms/msg
256B      13156.7   95.5         0.6           7234.1    87.3         1.0
2kB       7995.6    95.8         1.0           5078.3    87.5         1.4
20kB      1634.5    97.6         4.8           1410.6    96.0         5.4
200kB     176.9     99.0         44.8          172.0     98.2         45.7
2000kB    17.1      99.2         463.6         16.9      99.0         469.7
20000kB   1.8       99.4         4391.2        1.7       99.0         4588.7

File out and file in

          Non Persistent
Msg Size  Msg Rate  % CPU Busy   CPU ms/msg
256B      4570.2    96.6         1.7
2kB       4483.0    96.1         1.7
20kB      4079.1    93.3         1.8
200kB     1518.3    77.9         4.1
2000kB    120.4     89.9         59.6
20000kB   10.7      93.1         697.7

SOAP Consumer

          Non Persistent
Msg Size  Msg Rate  % CPU Busy   CPU ms/msg
256B      3212.1    91.4         2.3
2kB       2587.2    91.4         2.8
20kB      1156.1    95.7         6.6
200kB     178.9     96.0         42.9
2000kB    17.3      95.9         443.8
20000kB   1.7       96.7         4550.6

SOAP Provider

          Non Persistent
Msg Size  Msg Rate  % CPU Busy   CPU ms/msg
256B      7496.5    98.1         1.0
2kB       6031.4    97.7         1.3
20kB      2406.5    96.6         3.2
200kB     364.2     98.1         21.5
2000kB    35.4      96.8         218.4
20000kB   3.3       96.1         2344.9

ISO 8583 Transformation

          Non Persistent
Msg Size  Msg Rate  % CPU Busy   CPU ms/msg
120B      2864.4    95.4         2.7
134B      2139.0    94.9         3.3
1384B     1362.3    94.0         5.5

RESTful API – Post

          Non Persistent
Msg Size  Msg Rate  % CPU Busy   CPU ms/msg
256B      3533.0    95.6         2.2
2kB       2198.3    96.1         3.5
20kB      511.4     96.2         15.1

RESTful API – Get

          Non Persistent
Msg Size  Msg Rate  % CPU Busy   CPU ms/msg
256B      3996.0    95.3         1.9
2kB       2903.6    95.8         2.6
20kB      829.2     97.9         9.4

Measurement Environment

All throughput measurements were taken on a single server machine. The client type and the machine on which the clients ran varied with the test; the details are given below.

Server Machine

The hardware consisted of:

  • IBM xSeries x3850 X6 with 1 x Intel(R) Xeon(R) CPU E7-4820 v2
  • 2.00GHz processors with HyperThreading turned off
  • ServeRAID M5210 SAS/SATA Controller with 4GB Flash/RAID 5 Upgrade option (47C8668)
  • 136GB 15K 6.0Gbps SFF Serial SCSI / SAS Hard Drive – ST9146853SS x2 (mounted directly)
  • IBM 120GB 2.5in G3HS SATA MLC Enterprise Value SSD – 00AJ395 – x2 (Configured in RAID0)
  • IBM 200GB SAS 2.5in MLC SS Enterprise SSD – 49Y6144 – x2 (Configured in RAID0)
  • 32 GB RAM
  • Emulex Dual Port 10GbE SFP+ VFA IIIr

The software consisted of:

  • Red Hat Enterprise Linux Server release 6.6
  • WebSphere MQ V7.5.0.5
  • IBM Integration Bus V10.0.0.2
  • DB2 v10.5.0.5

Client Machine

The hardware consisted of:

  • IBM xSeries x3650 M3 with 2 x Hex-Core Intel(R) Xeon(R) X5660
  • 2.80GHz processors with HyperThreading turned on
  • One 135 GB SCSI hard drive formatted with NTFS
  • 16 GB RAM
  • 10 GB Ethernet card

The software consisted of:

  • Microsoft Windows Server 2008 R2
  • WebSphere MQ V7.5.0.1
  • IBM Java v1.7.0

Network Configuration

The client and server machines were connected using a full duplex 10 Gigabit Ethernet LAN with a single hub.

6 comments on "xLinux Performance Report Results"

  1. Teresa Lam March 04, 2019

    Is there a performance report for ACE v11 please?

    • Hi Teresa,

      We are currently working on the first draft of a V11 performance report. This will take a different shape from the previous reports to start with. As a rule of thumb we expect V11 to perform no worse than V10, and there are some areas where we know there are good improvements, e.g. HTTP message handling.

  2. If you run the Performance Rating Tool on the Test Server, what is the
    – Rating Value: ?
    – Average Core Value: ?

    https://developer.ibm.com/integration/blog/2015/11/21/perfrating-cpu-performance-rating-tool/

    This information will help us adjust our sizing estimates better.

    Thank you.

    • The output from the Perf Rating tool on this machine is:

      Took 294 seconds to run
      Rating Value:5444
      Average Core Value:680

      NOTE: This tool was developed as an indication of CPU capability and should not be depended on for accurate sizings. The nature of a flow may alter how it runs from one machine to another, e.g. certain types of calculations, I/O requirements, etc.

  3. James Berube November 13, 2015

    Is the number for the 1384B message size test under ISO 8583 Transformation correct at 13662.3? It looked like there may be an extra 6 in there to me because of the jump in numbers.

    • Thank you James, you are correct – it should have read 1362.3. I have now corrected this.
