We have designed a number of test cases, described in the child pages of this page, to evaluate the performance of the various functional sub-units of the products under test. We perform all test runs in 3 concurrency configurations, using 3 message sizes in each (9 configurations in total for each test scenario):

  • 1, 2, and 4 virtual CPU cores, constrained by operating-system-level resource limiting capabilities such as CPU affinity masks on Windows and Linux, and Workload Manager on AIX. We apply the constraints to all Integration Bus, App Connect Enterprise, and MQ processes.
  • 2K, 20K, and 200K request messages in the format appropriate for the test case, XML or JSON. The message sizes refer to the input messages in XML format. Because the JSON input messages represent essentially the same data as their XML counterparts, they are significantly smaller (approximately 1.3K, 13K, and 130K); nevertheless, for simplicity, we refer to them by the XML message sizes. Note also that we use different sets of XML messages for SOAP, MQ, and raw HTTP-based interactions, and that message sizes vary by use case. In particular, the smallest test message we use for SOAP testing, while still reported as a 2K message, is actually closer to 4K in size.
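On Linux, the CPU constraint can be illustrated with the standard `os.sched_setaffinity` call. This is only a sketch: the `pin_to_cores` function is our own, it assumes logical CPUs are numbered from 0 and available to the process, and it is not necessarily the exact mechanism used in these tests.

```python
import os

def pin_to_cores(n_cores):
    """Pin the current process to its first n_cores logical CPUs (Linux only).

    Returns the affinity mask in effect after the call.
    """
    n = min(n_cores, os.cpu_count())        # avoid asking for cores the host lacks
    os.sched_setaffinity(0, set(range(n)))  # pid 0 = the current process
    return os.sched_getaffinity(0)

# The three concurrency configurations used in the tests: 1, 2, and 4 cores.
for cores in (1, 2, 4):
    print(f"{cores}-core configuration -> pinned to CPUs {sorted(pin_to_cores(cores))}")
```

In a real test harness the mask would be applied to every Integration Bus/App Connect Enterprise and MQ process, not just the current one.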

Furthermore, all test runs are subject to the following parameters:

  • Test runs are 150 seconds long.
  • The client runs 4 times as many client threads (and hence simultaneous MQ/TCP or HTTP/TCP client connections) as there are virtual server cores. We set the number of flow instances on the server to 3 times the number of virtual server cores.
  • In IBM Integration Bus, we create a single Integration Node with a single Integration Server and deploy a pre-packaged BAR file containing a single message flow and its supporting artifacts. In App Connect Enterprise, we deploy a similar BAR file to a Standalone Integration Server. Although we test the two products with the same set of deployable artifacts, we rebuild each of them using the Integration Toolkit that ships with the product.
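The thread and instance counts above follow directly from the core count; the arithmetic can be sketched as follows (the `sizing` function and its names are ours, for illustration only):

```python
def sizing(server_cores):
    """Derive the client and server concurrency settings from the core count."""
    client_threads = 4 * server_cores          # one persistent connection per thread
    flow_instances = 3 * server_cores          # total message flow instances on the server
    additional_instances = flow_instances - 1  # the Additional Instances property of the flow
    return client_threads, flow_instances, additional_instances

for cores in (1, 2, 4):
    t, f, a = sizing(cores)
    print(f"{cores} cores: {t} client threads, {f} flow instances "
          f"(Additional Instances = {a})")
```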

The following applies to HTTP-based tests only:

  • The client sends at most 5000 requests per persistent HTTP connection.

For example, in a test run using 4 cores, PerfHarness runs 16 client threads, each maintaining (and recycling as appropriate) a single persistent HTTP connection for the duration of the test run, against a server running 12 message flow instances (Additional Instances property set to 11).
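The connection-recycling rule (at most 5000 requests per persistent HTTP connection) can be sketched with Python's standard `http.client` module. The `RecyclingClient` class and its names are our own illustration of the behaviour, not the actual test client:

```python
import http.client

class RecyclingClient:
    """Send requests over one persistent HTTP connection, replacing it
    after max_requests uses (the test client's limit is 5000)."""

    def __init__(self, host, port, max_requests=5000):
        self.host, self.port = host, port
        self.max_requests = max_requests
        self.requests_sent = 0   # requests on the current connection
        self.recycles = 0        # how many fresh connections we have opened
        self.conn = None

    def _connection(self):
        # Open a fresh connection on first use or once the limit is reached.
        if self.conn is None or self.requests_sent >= self.max_requests:
            if self.conn is not None:
                self.conn.close()
            self.conn = http.client.HTTPConnection(self.host, self.port)
            self.requests_sent = 0
            self.recycles += 1
        return self.conn

    def post(self, path, body):
        conn = self._connection()
        conn.request("POST", path, body=body,
                     headers={"Content-Type": "application/xml"})
        self.requests_sent += 1
        resp = conn.getresponse()
        return resp.status, resp.read()
```

In the tests, each of the client threads would drive one such connection for the full 150-second run.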

We also provide a Project Interchange (PI) file of the above test artifacts, as well as the complete set of test messages used in this report, for reference.
