This section outlines the software components used to produce the measurements contained in the performance wiki pages.
Point to Point Message Processing is used to test the following transports:

  • MQ
  • SOAP and HTTP
  • TCPIP

Different configurations are used to generate and consume the input and output messages, because the test cases use different types of input and output messages.

If the scenario in question does not require a queue manager, the Integration Node is created without a default queue manager; only scenarios with an MQ requirement are created with a default queue manager.


Message Generation and Consumption

The Performance Harness for JMS is a multi-threaded WebSphere MQ Client program written in Java that is used as follows:

  • To generate input messages for the different test cases.
  • To consume output messages.

The following PerfHarness modules are used for point to point testing:

  • mqjava.Requestor for sending MQ messages
  • http.Requestor for sending SOAP and HTTP messages
  • tcp.Requestor for sending TCPIP messages

Note: The Performance Harness for JMS provides a simple way to send and receive messages, and its documentation contains examples of how to run it. More information about the currently available version can be found at AlphaWorks.

MQ Transport

  • Both persistent and non-persistent MQ messages are generated using the Performance Harness program.
  • Persistent messages are sent as part of a transaction which is committed after every message.
  • A number of threads are run in the multi-threaded client to ensure that there are always messages on the input queue waiting to be processed. This is important when measuring message throughput.
  • Each thread sends a message and then waits to receive a reply on the output queue.
  • Any thread within the client program is able to retrieve any message which has been processed by a message flow.
  • No use is made of the WebSphere MQ correlation identifiers to limit consumption of a message to the thread which created it.
  • As soon as a thread receives a reply, it sends another message; this request/reply loop is illustrated in the sketch after this list.
  • The message content is the same for all threads and all messages.
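
The bullets above describe a standard MQ request/reply driver. The following sketch shows how one such client thread might be written with the IBM MQ classes for JMS; it is not the PerfHarness mqjava.Requestor source, and the host name, port, queue manager, queue names, thread count, and payload are illustrative assumptions rather than the values used for the published measurements.

    import javax.jms.*;
    import com.ibm.mq.jms.MQConnectionFactory;
    import com.ibm.msg.client.wmq.WMQConstants;

    public class MQRequestorSketch implements Runnable {
        public static void main(String[] args) {
            // Several client threads keep the input queue loaded with messages.
            for (int i = 0; i < 10; i++) {
                new Thread(new MQRequestorSketch()).start();
            }
        }

        public void run() {
            try {
                MQConnectionFactory cf = new MQConnectionFactory();
                cf.setHostName("server");                        // assumed host name
                cf.setPort(1414);                                // assumed listener port
                cf.setQueueManager("PERFQM");                    // assumed queue manager
                cf.setTransportType(WMQConstants.WMQ_CM_CLIENT); // SVRCONN client connection

                Connection conn = cf.createConnection();
                // Transacted session: each persistent message is committed individually.
                Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
                MessageProducer producer = session.createProducer(session.createQueue("IN.QUEUE"));
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                // No selector or correlation identifier: any thread can take any reply.
                MessageConsumer consumer = session.createConsumer(session.createQueue("OUT.QUEUE"));
                conn.start();

                TextMessage msg = session.createTextMessage("<test>payload</test>"); // same content every time
                while (true) {
                    producer.send(msg);   // put the request on the flow input queue
                    session.commit();     // commit after every message
                    consumer.receive();   // wait for a reply on the output queue
                    session.commit();     // commit the get, then immediately send the next request
                }
            } catch (JMSException e) {
                e.printStackTrace();
            }
        }
    }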

SOAP and HTTP Transport

  • SOAP and HTTP messages are generated using the Performance Harness program.
  • Messages are sent through persistent HTTP connections: each client thread has its own TCPIP socket connection, which it reuses for every request and reply (see the sketch after this list).
  • When a thread receives a reply, it sends another message.
  • The message content is the same for all threads and all messages.
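
The sketch below illustrates this behaviour for a single client thread using HttpURLConnection, which reuses keep-alive (persistent) connections by default; the endpoint URL and payload are illustrative assumptions, and this is not the PerfHarness http.Requestor source.

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HttpRequestorSketch {
        public static void main(String[] args) throws Exception {
            byte[] payload = "<soapenv:Envelope>...</soapenv:Envelope>".getBytes("UTF-8"); // same content every time
            URL url = new URL("http://server:7800/testFlow"); // assumed HTTP/SOAP input node URL
            while (true) {
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setRequestProperty("Content-Type", "text/xml");
                conn.setDoOutput(true);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(payload); // send the request
                }
                try (InputStream in = conn.getInputStream()) {
                    while (in.read() != -1) {
                        // Drain the reply fully so the keep-alive socket can be reused.
                    }
                }
                // As soon as the reply has been read, the next message is sent.
            }
        }
    }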

TCPIP Transport

  • TCPIP messages are generated using the Performance Harness program.
  • Messages are sent through persistent TCPIP connections: each client thread has its own TCPIP socket connection, which it reuses for every request and reply (see the sketch after this list).
  • When a thread receives a reply, it sends another message.
  • The message content is the same for all threads and all messages.
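
The sketch below illustrates this behaviour for a single client thread using a plain Java socket; the host, port, payload, and the assumption of a fixed-length reply are illustrative, and this is not the PerfHarness tcp.Requestor source.

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.net.Socket;

    public class TcpRequestorSketch {
        public static void main(String[] args) throws Exception {
            byte[] request = "<test>payload</test>".getBytes("UTF-8"); // same content every time
            byte[] reply = new byte[request.length];                   // assumes a fixed-length reply
            // One persistent socket per client thread, reused for every request and reply.
            try (Socket socket = new Socket("server", 1455)) {         // assumed TCPIP input node port
                DataOutputStream out = new DataOutputStream(socket.getOutputStream());
                DataInputStream in = new DataInputStream(socket.getInputStream());
                while (true) {
                    out.write(request);
                    out.flush();
                    in.readFully(reply); // wait for the reply
                    // As soon as the reply arrives, the next message is sent.
                }
            }
        }
    }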


Machine Configuration

  • The Performance Harness for JMS, which generates and consumes the messages for the message flows, runs on a single client machine.
  • IBM Integration Bus v10, its dedicated WebSphere MQ queue manager, and the database are all located on the server machine.
  • For MQ performance tests, messages are transmitted from the client machine to the server machine over WebSphere MQ SVRCONN channels. The messages are received on the server by using the WebSphere MQ queue manager listener process.
  • Messages are transmitted from the client machine to the server machine by using the WebSphere MQ transport or SOAP/HTTP, depending on the test use case.
  • The client and the server machine are configured with sufficient memory to ensure that no paging takes place during the performance tests.


Reported Message Rates

  • The message rates reported are the number of invocations of the message flow per second.
  • For tests involving several message flows, such as the message aggregation tests, the rate reported is the number of complete operations or aggregations per second. Fan-out and fan-in processing is counted as one operation rather than separate operations.
  • The message rates quoted are an average taken over the measurement period; the start time corresponds to the point at which the system initialization period has completed (see the worked example after this list).
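
For example, assuming a made-up run in which 600,000 messages are processed between the end of a 60-second initialization period and the 360-second mark, the reported rate would be calculated over the measurement period only:

    public class RateCalculationSketch {
        public static void main(String[] args) {
            long messagesAfterWarmup = 600_000; // example count, not a published result
            long warmupEndSeconds    = 60;      // end of the system initialization period
            long runEndSeconds       = 360;     // end of the measurement period
            double rate = (double) messagesAfterWarmup / (runEndSeconds - warmupEndSeconds);
            System.out.println(rate + " messages per second"); // prints 2000.0 messages per second
        }
    }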


3 comments on "Evaluation method"

  1. Timm Bryant April 10, 2018

    I’m interested in knowing the performance of IIB Standard Edition. Do you know if any of the tested configurations were run with a single Integration Server configured?

  2. martin.ross March 23, 2016

    For the test case results we utilise rates reported during the run, and apply further statistical tests and calculations to the data set to normalise the data, adjust for warm-up times and identify any anomalies during the run. If you want to use the BasicStats reporting module on JMSPerfHarness to test IIB, I would strongly recommend setting the “sw” flag appropriately to define a warm-up period, allowing the test client and system under test to be fully initialised; the totalRate in the final summary will then be calculated from the end of the warm-up period.

    The Perfharness code is now available on GitHub (https://github.com/ot4i/perf-harness) and is easily extensible to provide your own custom statistics reporting modules (or extend BasicStats or others) if there is any additional information / data that would be useful for your testing.

  3. saurabh25281 December 10, 2015

    Hi,

    I am trying to test out the performance of my IIB server using the performance harness. I would like to understand whether the Message Rates in the test case results are the same as the totalRate (which is the average Transactions Per Second across the run) in the test case console output.

    Regards
    Saurabh
