This section shows the results achieved by running the different use cases on the Windows 64-bit platform.

Before using performance information, be sure to read the general information under Notices.

The results in this section were obtained by running sufficient copies of each message flow so that, in most cases, system CPU utilization was 80% or greater.

Information provided

The results provided in this section include the following performance data:

Message Size:
Records the approximate size of the message that is used as input to the test, not including the message header. This is the size of the XML or equivalent non-XML message payload.
Persistent State:
Indicates whether the messages used in the test are persistent. This state can have one of two values:

  • The value Full Persistence indicates that the message tested is persistent.
    • This value applies only to WebSphere MQ messages.
    • If a message is persistent, WebSphere MQ ensures that the message is not lost when a failure occurs, by copying it to disk.
  • The value Non Persistent is used for all other messages.
Message Rate:
Indicates the number of round trips or message flow invocations per second.
% CPU Busy:
Indicates the percentage of CPU usage on the server machine. This is the total CPU used by all processes: IBM Integration Bus, the WebSphere MQ queue manager, the database manager, and others. It is expressed as a percentage of the total CPU capacity of all processors on the server machine.
CPU ms/msg:
Indicates the overall CPU cost per message, that is, the CPU milliseconds per message.

  • You can calculate the CPU cost per message by using the following formula (a worked example follows this list):
    • ((Number of cores * 1000) * (% CPU Busy / 100)) / Message Rate
  • This cost includes IBM Integration Bus, WebSphere MQ, DB2, and any operating system costs.
  • Note: The results are specific to the system from which they were obtained. If you want to project (or predict) message processing capacity for other systems, you must make a suitable adjustment to allow for differences in the capacity of the two systems.
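
For example, using the Aggregation results below (256B, Non Persistent: 1438.5 msg/s at 99.8% CPU Busy) and the 8-core server described under Measurement Environment, the formula gives ((8 * 1000) * (99.8 / 100)) / 1438.5 = 5.55, which rounds to the reported 5.6 CPU ms/msg. The same calculation is sketched below in Java; the class and method names are illustrative only:

    public class CpuCost {
        // CPU milliseconds per message:
        // ((number of cores * 1000) * (% CPU Busy / 100)) / message rate
        static double cpuMsPerMsg(int cores, double cpuBusyPct, double msgRate) {
            return (cores * 1000.0) * (cpuBusyPct / 100.0) / msgRate;
        }

        public static void main(String[] args) {
            // Aggregation, 256B, Non Persistent, on the 8-core server used in this report
            System.out.println(cpuMsPerMsg(8, 99.8, 1438.5)); // prints ~5.55; reported as 5.6
        }
    }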

Performance Results

Typically, as the message size increases, the message rate decreases and the CPU cost per message increases.
Persistent MQ messages are written to the MQ log on disk. This adds CPU and I/O overhead and reduces the message rate, so the speed of the disk on which the MQ log is configured becomes a key factor; see Tuning for more information. Persistence is set on each message, as the sketch following this introduction shows.
For details of the measurement environment, see Measurement Environment.

When planning a system, it is important to understand the complexities of the processing required so that adequate resources can be provided to meet the requirements of the particular situation.
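
To illustrate the Non Persistent and Full Persistence cases measured below, the following minimal sketch puts a message with per-message persistence set, using the WebSphere MQ classes for Java. The queue manager name (QM1) and queue name (TEST.IN) are hypothetical:

    import com.ibm.mq.MQMessage;
    import com.ibm.mq.MQPutMessageOptions;
    import com.ibm.mq.MQQueue;
    import com.ibm.mq.MQQueueManager;
    import com.ibm.mq.constants.CMQC;

    public class PersistencePut {
        public static void main(String[] args) throws Exception {
            MQQueueManager qmgr = new MQQueueManager("QM1");               // hypothetical queue manager
            MQQueue queue = qmgr.accessQueue("TEST.IN", CMQC.MQOO_OUTPUT); // hypothetical queue

            MQMessage msg = new MQMessage();
            msg.persistence = CMQC.MQPER_PERSISTENT;        // Full Persistence: hardened to the MQ log
            // msg.persistence = CMQC.MQPER_NOT_PERSISTENT; // Non Persistent: no log write
            msg.writeString("<test/>");                     // payload size varies by test in this report

            queue.put(msg, new MQPutMessageOptions());
            queue.close();
            qmgr.disconnect();
        }
    }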



Aggregation

          Non Persistent                        Full Persistence
Msg Size  Msg Rate   % CPU Busy   CPU ms/msg    Msg Rate   % CPU Busy   CPU ms/msg
256B      1438.5     99.8         5.6           1076.2     98.3         7.3
2kB       1352.4     99.2         5.9           1024.5     98.6         7.7
20kB      865.5      96.4         8.9           725.4      98.2         10.9
200kB     195.4      99.9         40.9          174.4      99.7         45.7
2000kB    19.7       97.1         394.7         18.8       97.9         416.5
20000kB   1.2        97.8         6576.7        1.2        99.2         6901.3

Coordinated Request/Reply

          Non Persistent                        Full Persistence
Msg Size  Msg Rate   % CPU Busy   CPU ms/msg    Msg Rate   % CPU Busy   CPU ms/msg
256B      1468.0     99.4         5.4           1255.0     99.1         6.3
2kB       1109.9     99.5         7.2           982.5      99.5         8.1
20kB      391.8      99.6         20.3          370.7      99.5         21.5
200kB     53.0       100.0        150.8         54.2       100.0        147.4
2000kB    5.4        100.0        1486.7        5.3        99.9         1503.0
20000kB   0.6        100.0        13793.1       0.6        100.0        14032.2

Large Messaging

          Non Persistent                        Full Persistence
Msg Size  Msg Rate   % CPU Busy   CPU ms/msg    Msg Rate   % CPU Busy   CPU ms/msg
256B      7498.7     96.8         1.0           4785.9     96.6         1.6
2kB       4995.1     96.6         1.5           3345.2     97.5         2.3
20kB      877.7      96.3         8.8           618.9      97.4         12.6
200kB     92.9       96.2         82.9          71.1       96.1         108.1
2000kB    8.8        96.6         874.7         7.0        95.6         1097.7
20000kB   0.9        96.7         8320.4        0.7        95.5         11234.7

Message Routing

          Non Persistent                        Full Persistence
Msg Size  Msg Rate   % CPU Busy   CPU ms/msg    Msg Rate   % CPU Busy   CPU ms/msg
256B      16294.6    98.1         0.5           9170.7     97.0         0.8
2kB       15921.7    98.2         0.5           9042.0     96.7         0.9
20kB      14206.6    98.1         0.6           8201.1     95.5         0.9
200kB     4165.3     81.8         1.6           2541.8     73.4         2.3
2000kB    247.5      53.0         17.1          220.2      67.7         24.6
20000kB   20.6       44.8         174.0         20.2       66.1         262.5

Transforming a message

          Non Persistent                        Full Persistence
Msg Size  Msg Rate   % CPU Busy   CPU ms/msg    Msg Rate   % CPU Busy   CPU ms/msg
256B      13110.2    97.7         0.6           7652.6     95.3         1.0
2kB       8061.1     96.0         1.0           5410.9     95.9         1.4
20kB      1779.8     100.0        4.5           1564.3     99.5         5.1
200kB     188.5      100.0        42.5          184.6      99.8         43.3
2000kB    19.1       100.0        418.6         18.2       100.0        438.6
20000kB   1.9        100.0        4232.8        1.8        100.0        4394.6

File out and file in

          Non Persistent
Msg Size  Msg Rate   % CPU Busy   CPU ms/msg
256B      1090.6     96.7         7.1
2kB       1061.3     96.9         7.3
20kB      1033.0     97.4         7.5
200kB     803.2      95.9         9.6
2000kB    138.7      99.7         57.5
20000kB   14.5       99.4         549.5

SOAP Consumer

          Non Persistent
Msg Size  Msg Rate   % CPU Busy   CPU ms/msg
256B      3320.8     98.6         2.4
2kB       2666.4     98.3         3.0
20kB      1156.2     98.0         6.8
200kB     181.8      98.6         43.4
2000kB    17.8       98.8         448.1
20000kB   1.7        98.5         4608.1

SOAP Provider

          Non Persistent
Msg Size  Msg Rate   % CPU Busy   CPU ms/msg
256B      7776.7     100.0        1.0
2kB       6208.0     100.0        1.3
20kB      2444.6     100.0        3.3
200kB     373.1      100.0        21.4
2000kB    35.7       100.0        224.2
20000kB   3.5        99.1         2290.7

ISO 8583 Transformation

          Non Persistent
Msg Size  Msg Rate   % CPU Busy   CPU ms/msg
120B      3156.4     97.3         2.5
134B      2523.5     100.0        3.2
1384B     1653.3     99.4         4.8

RESTful API – Post

          Non Persistent
Msg Size  Msg Rate   % CPU Busy   CPU ms/msg
256B      3483.2     97.1         2.2
2kB       2150.2     99.2         3.7
20kB      500.6      99.7         15.9

RESTful API – Get

          Non Persistent
Msg Size  Msg Rate   % CPU Busy   CPU ms/msg
256B      3855.9     96.4         2.0
2kB       2843.6     97.1         2.7
20kB      810.8      98.2         9.7

Measurement Environment

All throughput measurements were taken on a single server machine. The client type, and the machine on which the clients ran, varied with the test. The details are given below.

Server Machine

The hardware consisted of:

  • IBM xSeries x3850 X6 with 1 x Intel(R) Xeon(R) CPU E7-4820 v2
  • 2.00GHz processors with HyperThreading turned off
  • ServeRAID M5210 SAS/SATA Controller with 4GB Flash/RAID 5 Upgrade option (47C8668)
  • 136GB 15K 6.0Gbps SFF Serial SCSI / SAS Hard Drive – ST9146853SS x2 (mounted directly)
  • IBM 120GB 2.5in G3HS SATA MLC Enterprise Value SSD – 00AJ395 – x2 (Configured in RAID0)
  • IBM 200GB SAS 2.5in MLC SS Enterprise SSD – 49Y6144 – x2 (Configured in RAID0)
  • 32 GB RAM
  • Emulex Dual Port 10GbE SFP+ VFA IIIr

The software consisted of:

  • Microsoft Windows Server 2012 R2
  • WebSphere MQ V7.5.0.5
  • IBM Integration Bus V10.0.0.2
  • DB2 v10.5.500.107

Client Machine

The hardware consisted of:

  • IBM Flex System x240 Compute Node (E5-2630)
  • 2.30GHz processors with HyperThreading turned on
  • 136GB 15K 6.0Gbps SFF Serial SCSI / SAS Hard Drive – ST9146853SS x2 (mounted directly)
  • 56 GB RAM
  • Emulex 10GbE Virtual Fabric Adapters

The software consisted of:

  • Microsoft Windows Server 2012 R2
  • WebSphere MQ V7.5.0.1
  • IBM Java v1.7.0

Network Configuration

The client and server machines were connected using a full duplex 10 Gigabit Ethernet LAN with a single hub.

3 comments on "Windows Performance Report Results"

  1. Hugh Everett April 27, 2016

    Please advise: for the server machine (IBM xSeries x3850 X6 with 1 x Intel(R) Xeon(R) CPU E7-4820 v2) – how many cores per socket were configured? Essentially, I’d like to know how many cores of this processor were running the workloads.

    • This machine was configured with a single Intel(R) Xeon(R) CPU E7-4820 v2 i.e. 1 socket, 8 cores

  2. Thank you guys, surprisingly in most cases Windows looks better than Linux on the same hardware.
