IBM App Connect Enterprise V11 performance reports

Results of a comparative performance evaluation of IBM App Connect Enterprise Version 11.0.0.6 against IBM Integration Bus Version 10.0.0.18 in traditional non-virtualized environments on Linux, Windows, and AIX.

First published on November 13, 2019 in IBM® Developer / Integration


The following report presents the results of a comparative performance evaluation of IBM App Connect Enterprise Version 11.0.0.6 against IBM Integration Bus Version 10.0.0.18 in traditional non-virtualized environments on Windows and Linux® as well as in an AIX® 7.2 logical partition on POWER8.

Before using performance information, be sure to read the general information under Notices.


Notices

The information provided in this performance report illustrates the key processing characteristics of IBM App Connect Enterprise. It is intended for architects, systems programmers, analysts, and programmers who want to understand the performance characteristics of IBM App Connect Enterprise. The data provided will assist you in sizing solutions. Note that the reader is assumed to be familiar with the concepts and operation of IBM App Connect Enterprise.

This information has been obtained by measuring the message throughput for a number of different types of message processing. The term ‘message’ is used in a generic sense, and can mean any request or response into or out of an integration server, regardless of the transport or protocol.

The performance data presented in the reports was measured in a controlled environment and any results obtained in other environments might vary significantly. For more details on the measurement methodologies and environments used, see the "Evaluation Methodology" and "Test Environment" sections, respectively, of this document.

The performance measurements focus on the throughput capabilities of the integration server using different message formats and processing node types. The aim of the measurements is to help you understand the rate at which messages can be processed in different situations as well as to understand the relative costs of the different node types and approaches to message processing.

You should not attempt to make any direct comparisons of the test results in this report to what may appear to be similar tests in previous performance reports. This is because the contents of the test messages differ significantly, as does the processing in the tests. In many cases the hardware, operating system, and prerequisite software are also different, making any direct comparison invalid.

In many of the tests the user logic is minimal and the results represent the best throughput that can be achieved for that node type. This should be borne in mind when sizing IBM App Connect Enterprise.

References to IBM products or programs do not imply that IBM intends to make these available in all countries in which IBM operates. Information contained in this report has not been submitted to any formal IBM test and is distributed ‘as is’. The use of this information and the implementation of any of the techniques is the responsibility of the customer. Much depends on the ability of the customer to evaluate this data and project the results to their operational environment.

Contents

  • Evaluation Methodology
  • Test Environment
  • Optimization and Tuning
  • Test Scenarios
  • Results
  • Trademarks

Evaluation Methodology

We use the open source PerfHarness tool with point-to-point messaging over HTTP/TCP or MQ/TCP to evaluate the performance of the software systems under test. Regardless of the transport protocol used, a number of PerfHarness client threads simultaneously send prefabricated payload messages to the system under test, wait for and get a response, and keep repeating this synchronous request-response interaction with the same payload for the duration of the test. We refer to the collection of all request-response interactions that have taken place during this predefined time interval as a test run.

HTTP

In accordance with the above, we use HTTP synchronously and do not pipeline requests in a connection. Connections are kept open for a predefined number of request-response interactions, as allowed by the HTTP persistent connections scheme; once that number is reached, the connection is closed and a new one is opened in its place. This process is repeated on each connection until the predefined test duration elapses, at which point all connections are closed regardless of how many requests have been sent over them. Before a connection is closed, however, we wait for the reply to any request still in flight on it - and there can be at most one such request, since we do not pipeline.
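
To illustrate, the following Python sketch captures the shape of one such HTTP client thread. It is not the actual PerfHarness implementation; the host, port, path, and payload are placeholders, and the per-connection request limit of 5000 is the value used in our HTTP tests (see Test Scenarios).

  import http.client
  import time

  HOST, PORT, PATH = "testserver", 7800, "/echo"   # hypothetical endpoint
  PAYLOAD = b"<root>...</root>"                    # prefabricated test message
  MAX_REQUESTS_PER_CONNECTION = 5000               # connection recycling limit
  TEST_DURATION_SECONDS = 150

  def run_client():
      """Synchronous request-response loop over recycled persistent connections."""
      completed = 0
      deadline = time.monotonic() + TEST_DURATION_SECONDS
      conn = http.client.HTTPConnection(HOST, PORT)
      on_this_connection = 0
      while time.monotonic() < deadline:
          conn.request("POST", PATH, body=PAYLOAD)
          conn.getresponse().read()                # wait for the full reply
          completed += 1
          on_this_connection += 1
          if on_this_connection >= MAX_REQUESTS_PER_CONNECTION:
              conn.close()                         # recycle the connection
              conn = http.client.HTTPConnection(HOST, PORT)
              on_this_connection = 0
      conn.close()                                 # close only after the last reply
      return completed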

MQ

When using MQ, PerfHarness client threads put prefabricated MQ payload messages to a queue designated as an input to the test case, synchronously wait for and get a response from the designated output queue, and keep repeating the put-get cycle for the duration of the test. As is the case with HTTP, we wait for a response for all in-flight requests before ending a test run. Some tests generate messages on other queues in addition to the output queue; in such cases we use additional PerfHarness clients to remove messages from these extraneous queues to avoid them filling up and blocking the test run.
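
A minimal sketch of the corresponding put-get cycle, written in Python with the pymqi library (again illustrative rather than the actual PerfHarness implementation; the queue manager, channel, and queue names are placeholders):

  import pymqi

  qmgr = pymqi.connect("QM1", "APP.SVRCONN", "testserver(1414)")
  input_q = pymqi.Queue(qmgr, "TEST.IN")    # queue the message flow reads from
  output_q = pymqi.Queue(qmgr, "TEST.OUT")  # queue the message flow replies to

  # Block on the get until the response arrives (or a generous timeout expires).
  gmo = pymqi.GMO()
  gmo.Options = pymqi.CMQC.MQGMO_WAIT | pymqi.CMQC.MQGMO_FAIL_IF_QUIESCING
  gmo.WaitInterval = 30 * 1000              # milliseconds

  payload = b"<root>...</root>"             # prefabricated test message
  for _ in range(10_000):                   # in practice: loop until the test deadline
      input_q.put(payload)
      output_q.get(None, pymqi.MD(), gmo)   # synchronously wait for the reply

  input_q.close()
  output_q.close()
  qmgr.disconnect()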

Reporting Performance Metrics

During a test run, we count the complete request-response interactions that have taken place between client and server. We compute the average message rate for the run by dividing this figure by the length of the run. Though we do not verify the content of response messages received from the server, we discard the results of the entire test run if any errors occur in the transport layer.

We monitor CPU utilization on the server using OS-provided facilities such as the output of /bin/ps and information in the /proc filesystem on Linux and AIX, or Windows Management Instrumentation on Windows, with the aim of providing estimates of the processing power consumed per message. The CPU time consumed by the integration server divided by the number of request-response interactions yields the average CPU cost of processing a message over a test run.

We restrict the CPU capacity of the test server available to the software system under test using OS-provided facilities such as CPU affinity masks on Linux and Windows, or Workload Manager on AIX.

Though we report average values over test runs of several minutes, these figures are limited by the accuracy of the OS-provided reporting facilities, and we cannot account for potential differences in that reporting between operating systems.

We ultimately consider the following outputs from a test run:

  • Number of messages processed
  • CPU time consumed by the integration server process

For each test case and combination of message size and concurrency, we conduct at least 50 test runs, and report the median, over the set of runs, of each of the following two metrics:

  • Message rate average over the test run
  • Average CPU cost per message over the test run
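
In terms of these per-run outputs, the reported metrics reduce to simple arithmetic. The following Python sketch shows the computation, including the median over a set of runs; the variable names and the sample data are illustrative only.

  from statistics import median

  TEST_RUN_SECONDS = 150

  def run_metrics(messages_processed, cpu_seconds):
      """Average message rate and average CPU cost per message for one run."""
      message_rate = messages_processed / TEST_RUN_SECONDS       # messages/s
      cpu_cost_per_message = cpu_seconds / messages_processed    # seconds/message
      return message_rate, cpu_cost_per_message

  # At least 50 runs per configuration; we report medians over the runs.
  runs = [(635_530, 149.9), (642_110, 151.2), (638_020, 150.4)]  # made-up data
  rates, costs = zip(*(run_metrics(m, c) for m, c in runs))
  print(f"median rate: {median(rates):.2f}/s")
  print(f"median CPU cost: {median(costs) * 1e6:.2f} us/msg")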

Test Environment

We have used test environments in the following three system configurations:

  • Red Hat Enterprise Linux Server 7.7 on IBM System x3650 M3, 2x Intel Xeon X5660 CPUs (6 cores each, 12 in total) @ 2.8 GHz, hyperthreading disabled, 32 GB RAM
  • Windows Server 2016 Standard on IBM System x3650 M3, 2x Intel Xeon X5660 CPUs (6 cores each, 12 in total) @ 2.8 GHz, hyperthreading disabled, 32 GB RAM
  • IBM AIX 7.2 in an LPAR configured with 8 processors @ 4157 MHz and 4-way hyperthreading (32 logical cores in total), 122 GB RAM, on IBM Power System S822 (8284-22A)

We compare the performance of the following two software products, testing them on each of the above configurations:

  • IBM Integration Bus Version 10.0.0.18
  • IBM App Connect Enterprise Version 11.0.0.6

Optimization and Tuning

Generally, we aim to test the products in their default configurations unless they are for some reason not suitable for the test. There is one concurrency-related setting on message flows that we change from its default value.

With the goal of efficiently using the available CPU capacity, we set the "Additional instances" property of message flows, found under the Workload Management group of configuration options, in line with the concurrency requirements of the test being executed. For each test, we set the total number of message flow instances to three times the number of "virtual" CPU cores configured for the server. By "virtual" CPU cores, we mean the CPU capacity to which we artificially restrict our server processes using the OS-level facilities mentioned in the Reporting Performance Metrics section; these do not necessarily correspond to physical or OS-level logical CPU cores, or to any other formal CPU count. For our tests, this ratio has proven to be a reasonable compromise between excessive context-switching overhead and having too few threads to serve the inbound workload.

Test Scenarios

We have designed a number of test cases, described in the subsections below, to evaluate the performance of the various functional sub-units of the products under test. We perform all test runs in 3 different concurrency configurations, using 3 different message sizes in each (9 configurations altogether for each test scenario):

  • 1, 2, and 4 virtual CPU cores constrained by operating system-level resource limiting capabilities such as CPU affinity masks on Windows and Linux, and Workload Manager on AIX (see the sketch after this list). The constraints are applied to all Integration Bus, App Connect Enterprise, and MQ processes.
  • 2k, 20k, and 200k request messages in the format appropriate for the test case, XML or JSON. The message sizes represent the size of the input messages in the XML format. As the JSON input messages essentially represent the same data as their XML counterparts, they are significantly smaller (approximately 1.3k, 13k, and 130k); nevertheless, for the sake of simplicity, we refer to them using the XML message sizes. Note also that we use different sets of XML messages for SOAP interactions, MQ interactions, and raw HTTP-based interactions, and that the sizes of the messages vary by use case. The smallest test message we use for SOAP testing, while still treated as a 2k message for the purposes of reporting the results, is actually closer to 4k in size.
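
For example, on Linux the CPU constraint for a 4-core configuration can be applied with an affinity mask. The sketch below is illustrative Python, not our actual harness scripts; the process IDs are placeholders, and on AIX the equivalent restriction is configured through Workload Manager rather than an affinity mask.

  import os

  VIRTUAL_CORES = 4
  pids_under_test = [1234, 1235]   # hypothetical broker and queue manager PIDs

  for pid in pids_under_test:
      # Pin each process to cores 0 .. VIRTUAL_CORES-1 (Linux only).
      os.sched_setaffinity(pid, set(range(VIRTUAL_CORES)))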

Furthermore, all test runs are subject to the following parameters:

  • Test runs are 150 seconds long.
  • The client runs 4 times as many client threads (and hence, simultaneous MQ/TCP or HTTP/TCP client connections) as the number of virtual server cores. We set the number of flow instances on the server to 3 times the number of virtual server cores.
  • In IBM Integration Bus Version 10.0.0.18, we create a single Integration Node with a single Integration Server, and deploy a pre-packaged BAR file containing a single message flow with its supporting artifacts. In App Connect Enterprise Version 11.0.0.6, we deploy a similar BAR file to a Standalone Integration Server for the purpose of the test. Although we test the two products using the same set of deployable artifacts, we rebuild the BAR file for each product using the Integration Toolkit that ships with that product.

The following applies to HTTP-based tests only:

  • The client sends at most 5000 requests per persistent HTTP connection.

E.g., for a test run using 4 cores, PerfHarness runs 16 client threads, each maintaining (and recycling as appropriate) a single persistent HTTP connection throughout the test run, against a server running 12 message flow instances (the "Additional instances" property set to 11).
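
Expressed as arithmetic, the parameterization above scales with the virtual core count as follows (a sketch with illustrative names; the assertion reproduces the 4-core example):

  def test_parameters(virtual_cores):
      client_threads = 4 * virtual_cores         # one HTTP/MQ connection each
      flow_instances = 3 * virtual_cores         # total message flow instances
      additional_instances = flow_instances - 1  # the "Additional instances" property
      return client_threads, flow_instances, additional_instances

  assert test_parameters(4) == (16, 12, 11)      # matches the example above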

For reference, we also provide a project interchange file, perf_apps_pi.zip, containing the above test artifacts, as well as the complete set of test messages, messages.zip, used in this report.

HTTP BLOB Echo

Consisting only of an HTTP Input - HTTP Reply node pair in a single message flow, the HTTP BLOB Echo scenario tests the performance of the HTTP transport layer by echoing the HTTP request body received through the HTTP Input node back as the response through the HTTP Reply node. To minimize the processing overhead on top of what is required for the HTTP transport itself, messages are processed using the BLOB parser.

Figure: The HTTP BLOB Echo message flow

HTTP XMLNSC Echo

Similarly to the HTTP BLOB Echo scenario, the HTTP XMLNSC Echo test makes use of a single message flow with an HTTP Input - HTTP Reply node pair to echo a request message back to the client untransformed; however, it uses the XMLNSC parser with the "Parse timing" configuration option set to "Complete" in order to evaluate the performance of the XMLNSC parser. As the cost of the HTTP transport is also included in the results from this test, they should be viewed in comparison to the HTTP BLOB Echo results.

Figure: The HTTP XMLNSC Echo message flow

HTTP XMLNSC ESQL Transformation

Manipulating messages in the XMLNSC domain, the HTTP XMLNSC ESQL Transformation test case adds an ESQL compute node to the data flow of the HTTP XMLNSC Echo test to perform a simple operation on the message body that requires the entire input message to be parsed: it adds up numeric values scattered throughout the message and writes the sum to the XMLNSC response message. This test case evaluates the performance of a combination of HTTP transport, parsing, serialization, and ESQL transformation in the XMLNSC domain. Its results are best interpreted in light of the results of the previous two scenarios.

Figure: The HTTP XMLNSC ESQL Transformation message flow

HTTP XMLNSC Mapper Transformation

The HTTP XMLNSC Mapper Transformation test case is broadly similar to the HTTP XMLNSC ESQL Transformation scenario; however, it uses a Mapping node to transform the input message, performing simple arithmetic on an XML array therein and producing an XML response with the results. The test case exercises HTTP transport, parsing, serialization, and message transformation using graphical mapping in the XMLNSC domain.

Figure: The HTTP XMLNSC Mapper Transformation message flow

REST Echo

The REST Echo test case exercises the HTTP transport layer as well as other basic functionality associated with REST APIs, such as the JSON parser and the REST router. The ESQL compute node appearing in the implementation of the only REST operation of the API performs a message copy from the request to the response. Furthermore, in order to force a full message parse without having to change the default parse timing setting on the HTTP Input node embedded in the REST API implementation, it modifies the last element of the request message.

Figure: The REST Echo message flow

SOAP Echo

The SOAP Echo test case exercises the HTTP transport layer as well as the SOAP implementation. The "Parse timing" configuration option is set to "Complete" on the SOAP Input node, causing the entire input message to be parsed so as to measure the performance of SOAP parsing and serialization. The SOAP Reply node in the only message flow of the test case is preceded by an ESQL transformation node whose sole function is to transfer the message body from the SOAP request envelope in the inbound message to the SOAP response envelope in the reply.

Figure: The SOAP Echo message flow

MQ Coordinated Request-Reply

The MQ Coordinated Request-Reply test case, based closely on the Coordinated Request Reply WebSphere MQ sample, consists of three message flows; it exercises the MQ Input, MQ Output, MQ Get, and MQ Reply nodes, manipulates MQMD headers, and performs transformation and parsing in the XMLNSC and MRM domains.

The Request flow (Fig. 1) handles XML messages arriving at a designated input queue, saving the original MQMD header to a designated store queue, transforming the payload to a different representation, and changing the ReplyToQ field in the MQMD header to a designated internal back-end reply queue before forwarding the message to the internal back-end request queue.

Figure 1: The Request flow

The Back-End flow (Fig. 2) receives the message transformed by the Request flow via its internal input queue, adds a timestamp to the message body and places the result on an internal queue referred to by the now updated ReplyToQ field of the MQMD header.

Figure 2: The Back-End flow

Finally, the Reply flow (Fig. 3) transforms the reply generated by the Back-End flow back to its original XML-based representation, restores its original MQMD header from the internal store queue, and returns the result to the Reply-To Queue designated in the original request sent by the external client.

Figure 3: The Reply flow

Note that we have slightly modified the original sample to better fit PerfHarness’s message correlation model, hence the additional SetMQMDReport compute node in the Reply flow, and an additional ESQL statement in the compute node of the Back-End flow. These additional statements update the Report field of the MQMD header to preserve both the Message Id and the Correlation Id.

MQ Large Messaging

The MQ Large Messaging test case is based closely on the Large Messaging sample. It performs message transformation in the XMLNSC domain using an ESQL compute node, interacting with the external world via MQ. Input messages contain a repeating XML structure whose individual elements are extracted and forwarded as separate MQ messages to a designated output queue.

Figure: The MQ Large Messaging message flow

For the purposes of performance testing, we use two PerfHarness instances: a primary instance to send the input message to the application’s input queue (handled by the MessageWithRepeatingElements node) and wait for an indication (placed by the MessageSlicingComplete node) on another queue that the message has been completely processed, and a secondary instance to retrieve the resulting message slices (output by the RepeatedElementSlices node) from another queue. This way the primary PerfHarness instance can maintain a one-to-one correspondence between requests and responses and produce a message throughput metric suitable for evaluating and comparing system performance.

In line with the other tests, the input messages we use are approximately 2K, 20K, and 200K in size and contain 2, 20, and 200 records each, respectively. This means that, e.g., for the 200K message, 200 slices will be placed on the output queue for each input message, resulting in a message throughput on the slice output queue of 200 times the throughput on the request-response queue pair. In this document, we report the message throughput on the request-response queues as seen by the primary PerfHarness instance.

MQ Routing Cache

The MQ Routing Cache test case, based closely on the Message Routing sample, exercises a dynamic routing scenario in which a field extracted from the input message is used to look up the name of a destination queue in an IBM DB2 database; the looked-up name is then cached in an ESQL SHARED ROW variable and used as the destination for the current and subsequent messages. The test exercises the MQ-based transport, XMLNSC parsing, and dynamic routing.

Figure: The MQ Routing Cache message flow

The message flow in the figure contains two branches: the first performs the dynamic routing scenario being tested for performance, whereas the second provides the capability to clear the cache and force a destination look-up for the next message; this latter branch is not exercised in the test.

Note that as the result of the database lookup is cached, only one lookup is performed per test run. As stated in the Test Scenarios section, we collect CPU usage statistics for Integration Bus, App Connect Enterprise, and MQ processes, but not for database processes; however, as there is only one database lookup per test run, database usage is negligible compared to the other processing in the test and therefore need not be measured.
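
The caching behaviour described above follows the familiar look-aside pattern. An analogous sketch in Python (the flow itself implements this with an ESQL SHARED ROW variable, not Python; db_lookup_destination stands in for the DB2 query described above):

  destination_cache = {}

  def db_lookup_destination(message_key):
      """Stand-in for the database lookup of the destination queue name."""
      raise NotImplementedError

  def route(message_key):
      """Return the destination queue name, querying the database only once."""
      if message_key not in destination_cache:
          destination_cache[message_key] = db_lookup_destination(message_key)
      return destination_cache[message_key]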

Results

The following sections present the numerical results of the tests on the three platforms they were run on.

Linux

This section presents the numerical results of the nine test scenarios on Linux.

HTTP BLOB Echo

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 4236.87/s 235.89 μs 4476.48/s 223.91 μs
2 6348.31/s 240.73 μs 9318.92/s 213.96 μs
4 14503.73/s 236.79 μs 19538.11/s 201.34 μs
20k 1 3089.81/s 323.44 μs 3546.16/s 282.67 μs
2 4639.06/s 350.62 μs 7089.50/s 280.56 μs
4 8859.00/s 371.36 μs 14119.52/s 274.92 μs
200k 1 838.80/s 1.19 ms 1116.17/s 898.56 μs
2 1305.21/s 1.39 ms 2186.42/s 916.13 μs
4 1811.01/s 2.09 ms 3490.88/s 1.12 ms

HTTP XMLNSC Echo

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 2894.49/s 345.33 μs 2966.07/s 338.04 μs
2 4168.60/s 358.78 μs 6105.01/s 326.47 μs
4 9442.47/s 350.18 μs 12527.29/s 311.63 μs
20k 1 1050.03/s 951.83 μs 1108.63/s 904.34 μs
2 1651.64/s 973.59 μs 2283.94/s 874.64 μs
4 3386.04/s 991.1 μs 4585.59/s 862.44 μs
200k 1 131.56/s 7.6 ms 138.50/s 7.24 ms
2 258.48/s 7.7 ms 275.30/s 7.27 ms
4 473.29/s 8.37 ms 527.60/s 7.59 ms

HTTP XMLNSC ESQL Transformation

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 1554.81/s 642.95 μs 1622.06/s 618.13 μs
2 2298.97/s 660.34 μs 3340.78/s 598.28 μs
4 4988.52/s 655.34 μs 6768.90/s 578.98 μs
20k 1 327.87/s 3.05 ms 346.03/s 2.9 ms
2 657.29/s 3.01 ms 711.58/s 2.81 ms
4 1271.29/s 3.05 ms 1430.91/s 2.79 ms
200k 1 34.94/s 28.62 ms 36.57/s 27.42 ms
2 69.79/s 28.61 ms 73.45/s 27.26 ms
4 136.66/s 29.2 ms 145.55/s 27.5 ms

HTTP XMLNSC Mapper Transformation

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 753.94/s 1.33 ms 771.15/s 1.3 ms
2 951.88/s 1.33 ms 968.41/s 1.31 ms
4 964.28/s 1.38 ms 889.28/s 1.32 ms
20k 1 124.71/s 8.05 ms 126.15/s 7.97 ms
2 156.32/s 7.54 ms 154.60/s 7.59 ms
4 158.62/s 7.85 ms 158.98/s 7.74 ms
200k 1 12.92/s 77.64 ms 13.23/s 75.89 ms
2 15.38/s 75.05 ms 15.60/s 74.19 ms
4 15.90/s 77.86 ms 16.22/s 76.51 ms

REST Echo

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 1762.10/s 567.28 μs 2035.95/s 492.35 μs
2 2569.61/s 590.05 μs 4198.67/s 475.38 μs
4 5622.67/s 578.75 μs 8540.39/s 459.14 μs
20k 1 837.59/s 1.19 ms 955.01/s 1.05 ms
2 1438.00/s 1.2 ms 1964.83/s 1.02 ms
4 2829.42/s 1.21 ms 3937.55/s 1 ms
200k 1 134.01/s 7.47 ms 146.38/s 6.85 ms
2 268.57/s 7.49 ms 295.48/s 6.78 ms
4 489.50/s 8.11 ms 551.60/s 7.26 ms

SOAP Echo

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 954.56/s 1.05 ms 1016.32/s 991.86 μs
2 1583.90/s 1.03 ms 2064.48/s 967.59 μs
4 3415.88/s 980.41 μs 4307.76/s 917.68 μs
20k 1 325.17/s 3.08 ms 331.75/s 3.02 ms
2 663.31/s 2.98 ms 692.62/s 2.89 ms
4 1315.13/s 2.92 ms 1398.14/s 2.84 ms
200k 1 40.36/s 24.78 ms 39.41/s 25.46 ms
2 81.35/s 24.53 ms 80.71/s 24.78 ms
4 159.54/s 24.95 ms 158.94/s 25.08 ms

MQ Coordinated Request-Reply

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 224.84/s 4.5 ms 245.75/s 4.08 ms
2 489.79/s 4.12 ms 499.97/s 4.04 ms
4 959.64/s 4.16 ms 979.70/s 4.05 ms
20k 1 43.30/s 23.36 ms 49.60/s 20.3 ms
2 107.26/s 20.04 ms 105.20/s 19.53 ms
4 206.36/s 19.44 ms 212.17/s 18.89 ms
200k 1 4.95/s 204.88 ms 5.69/s 176.83 ms
2 10.98/s 183.02 ms 11.38/s 178.43 ms
4 22.46/s 177.06 ms 23.18/s 171.43 ms

MQ Large Messaging

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 798.57/s 1.25 ms 807.71/s 1.24 ms
2 1620.50/s 1.24 ms 1638.32/s 1.22 ms
4 3249.03/s 1.24 ms 3283.13/s 1.22 ms
20k 1 135.25/s 7.4 ms 137.41/s 7.29 ms
2 276.36/s 7.26 ms 280.42/s 7.16 ms
4 563.02/s 7.16 ms 571.29/s 7.05 ms
200k 1 14.21/s 70.46 ms 14.57/s 68.82 ms
2 29.40/s 68.29 ms 30.01/s 66.93 ms
4 59.86/s 67.32 ms 61.12/s 65.95 ms

MQ Routing Cache

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 2257.53/s 438.33 μs 2236.65/s 442.63 μs
2 4546.04/s 411.02 μs 4530.60/s 413.74 μs
4 9225.21/s 420.38 μs 9185.20/s 422.72 μs
20k 1 2094.49/s 472.17 μs 2074.37/s 477.06 μs
2 4010.86/s 458.42 μs 3994.44/s 461.85 μs
4 8044.17/s 477.07 μs 8001.30/s 480.41 μs
200k 1 1194.17/s 814.9 μs 1194.81/s 816.17 μs
2 1517.69/s 1.06 ms 1513.09/s 1.07 ms
4 2101.67/s 1.69 ms 2093.91/s 1.69 ms

Windows

This section presents the numerical results of the nine test scenarios on Windows.

HTTP BLOB Echo

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 3292.11/s 302.22 μs 3867.62/s 257.08 μs
2 7115.82/s 277.32 μs 7774.15/s 254.38 μs
4 13515.40/s 282.77 μs 17142.53/s 230.77 μs
20k 1 2026.99/s 494.68 μs 2388.42/s 430.98 μs
2 4384.78/s 448.04 μs 5192.91/s 383.98 μs
4 7836.65/s 454.49 μs 8108.81/s 314.41 μs
200k 1 513.49/s 1.93 ms 603.56/s 1.64 ms
2 887.76/s 1.78 ms 1189.13/s 1.28 ms
4 915.66/s 1.79 ms 963.64/s 1.46 ms

HTTP XMLNSC Echo

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 2439.40/s 407.52 μs 2684.48/s 371.04 μs
2 5203.16/s 380.52 μs 5393.53/s 368.56 μs
4 9769.11/s 391.45 μs 11103.45/s 356.7 μs
20k 1 938.51/s 1.06 ms 968.55/s 1.01 ms
2 1900.24/s 1.05 ms 2014.24/s 992.4 μs
4 3652.28/s 1.07 ms 4056.33/s 981.01 μs
200k 1 117.78/s 8.51 ms 130.57/s 7.63 ms
2 239.45/s 8.32 ms 263.85/s 7.59 ms
4 476.96/s 8.31 ms 524.16/s 7.62 ms

HTTP XMLNSC ESQL Transformation

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 1404.73/s 708.68 μs 1531.01/s 651.14 μs
2 2915.23/s 682.23 μs 3079.58/s 646.48 μs
4 5674.61/s 694.23 μs 6156.48/s 645.11 μs
20k 1 318.62/s 3.13 ms 351.59/s 2.83 ms
2 645.77/s 3.09 ms 707.81/s 2.81 ms
4 1272.70/s 3.13 ms 1411.38/s 2.82 ms
200k 1 34.13/s 29.29 ms 39.51/s 25.41 ms
2 67.50/s 29.57 ms 77.93/s 25.59 ms
4 134.19/s 29.65 ms 151.72/s 26.29 ms

HTTP XMLNSC Mapper Transformation

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 700.27/s 1.43 ms 705.60/s 1.42 ms
2 790.30/s 1.51 ms 867.95/s 1.37 ms
4 811.30/s 1.52 ms 866.25/s 1.41 ms
20k 1 118.11/s 8.47 ms 116.02/s 8.62 ms
2 129.36/s 8.88 ms 135.88/s 8.27 ms
4 131.73/s 8.89 ms 137.07/s 8.34 ms
200k 1 11.21/s 89.5 ms 12.05/s 82.95 ms
2 13.41/s 85.14 ms 13.98/s 79.28 ms
4 13.45/s 87 ms 13.98/s 82.18 ms

REST Echo

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 1650.54/s 604.19 μs 2012.03/s 495.34 μs
2 3426.63/s 580.06 μs 3973.58/s 501.37 μs
4 6643.99/s 588.87 μs 7994.75/s 497.47 μs
20k 1 753.51/s 1.32 ms 931.05/s 1.06 ms
2 1531.34/s 1.3 ms 1868.70/s 1.06 ms
4 2983.07/s 1.32 ms 3703.28/s 1.07 ms
200k 1 116.34/s 8.6 ms 144.71/s 6.89 ms
2 230.43/s 8.64 ms 286.44/s 6.97 ms
4 442.02/s 9 ms 538.69/s 7.39 ms

SOAP Echo

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 956.25/s 1.04 ms 983.45/s 1.01 ms
2 1952.16/s 1.02 ms 1864.84/s 1.07 ms
4 3834.38/s 1.03 ms 3731.10/s 1.06 ms
20k 1 352.14/s 2.84 ms 361.83/s 2.75 ms
2 712.40/s 2.8 ms 699.97/s 2.84 ms
4 1404.08/s 2.83 ms 1381.24/s 2.88 ms
200k 1 45.53/s 21.94 ms 45.83/s 21.78 ms
2 90.53/s 22.03 ms 90.02/s 22.14 ms
4 176.87/s 22.43 ms 175.63/s 22.71 ms

MQ Coordinated Request-Reply

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 212.63/s 4.71 ms 231.04/s 4.39 ms
2 450.90/s 4.44 ms 465.70/s 4.29 ms
4 908.42/s 4.42 ms 891.68/s 4.5 ms
20k 1 47.70/s 21.01 ms 52.18/s 19.46 ms
2 104.62/s 19.28 ms 108.99/s 18.46 ms
4 214.78/s 18.79 ms 217.80/s 18.51 ms
200k 1 5.74/s 175.14 ms 6.01/s 169.69 ms
2 12.06/s 167.33 ms 12.48/s 161.01 ms
4 24.66/s 163.99 ms 25.28/s 159.83 ms

MQ Large Messaging

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 684.87/s 1.45 ms 676.13/s 1.46 ms
2 1317.43/s 1.5 ms 1278.11/s 1.54 ms
4 2536.05/s 1.55 ms 2461.44/s 1.59 ms
20k 1 118.76/s 8.39 ms 118.23/s 8.41 ms
2 227.66/s 8.72 ms 221.93/s 8.93 ms
4 443.40/s 8.89 ms 427.82/s 9.22 ms
200k 1 12.70/s 78.45 ms 12.64/s 78.69 ms
2 24.44/s 81.34 ms 24.01/s 82.92 ms
4 47.62/s 83.07 ms 46.21/s 85.36 ms

MQ Routing Cache

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 1862.99/s 529.47 μs 1830.44/s 540.1 μs
2 4271.42/s 462.2 μs 4094.31/s 483.66 μs
4 8428.43/s 467.88 μs 8009.33/s 493.12 μs
20k 1 1556.91/s 635.1 μs 1567.02/s 631.7 μs
2 3581.34/s 554.01 μs 3375.20/s 583.64 μs
4 7159.95/s 532.67 μs 6814.46/s 570.11 μs
200k 1 616.45/s 1.57 ms 607.48/s 1.6 ms
2 775.67/s 1.49 ms 778.93/s 1.5 ms
4 830.37/s 1.31 ms 830.15/s 1.34 ms

AIX

This section presents the numerical results of the nine test scenarios on AIX.

HTTP BLOB Echo

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 8473.99/s 100.68 μs 8742.37/s 97.85 μs
2 16239.18/s 98.98 μs 17025.79/s 98 μs
4 29648.51/s 107.73 μs 31565.43/s 104.87 μs
20k 1 5944.15/s 143.09 μs 5827.52/s 148.84 μs
2 11398.19/s 138.78 μs 11369.33/s 147.93 μs
4 20273.17/s 156.51 μs 20932.32/s 159.59 μs
200k 1 1627.76/s 521.49 μs 1402.21/s 631.7 μs
2 3004.57/s 522.25 μs 2722.15/s 633.96 μs
4 5950.15/s 597.6 μs 5123.90/s 672.81 μs

HTTP XMLNSC Echo

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 5372.26/s 159.96 μs 5340.62/s 160.93 μs
2 10456.69/s 155.16 μs 10334.34/s 158.49 μs
4 19161.05/s 164.44 μs 19880.65/s 160.57 μs
20k 1 1402.52/s 614.9 μs 1440.70/s 606.09 μs
2 2749.40/s 597.81 μs 2800.05/s 584.63 μs
4 5591.57/s 573.55 μs 5670.73/s 566.06 μs
200k 1 146.97/s 5.83 ms 154.94/s 5.64 ms
2 281.54/s 5.86 ms 298.97/s 5.51 ms
4 605.26/s 5.44 ms 617.48/s 5.23 ms

HTTP XMLNSC ESQL Transformation

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 2470.57/s 347.56 μs 2518.64/s 345.71 μs
2 4924.67/s 332.97 μs 4924.53/s 336.93 μs
4 10649.19/s 318.69 μs 10206.56/s 313.39 μs
20k 1 393.13/s 2.17 ms 383.63/s 2.24 ms
2 762.70/s 2.14 ms 756.89/s 2.17 ms
4 1649.43/s 1.95 ms 1659.38/s 1.95 ms
200k 1 39.06/s 21.81 ms 38.36/s 22.35 ms
2 75.60/s 21.77 ms 75.22/s 22.02 ms
4 163.28/s 20.14 ms 165.49/s 19.56 ms

HTTP XMLNSC Mapper Transformation

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 1174.19/s 596.43 μs 1162.37/s 600.49 μs
2 1283.56/s 590.63 μs 1282.20/s 587.39 μs
4 1295.59/s 589.39 μs 1298.60/s 585.1 μs
20k 1 184.59/s 3.68 ms 174.77/s 3.9 ms
2 196.55/s 3.69 ms 190.04/s 3.85 ms
4 197.77/s 3.69 ms 192.52/s 3.81 ms
200k 1 18.23/s 37.2 ms 17.27/s 39.2 ms
2 19.27/s 38.47 ms 18.69/s 38.69 ms
4 4.96/s 51.73 ms 18.81/s 38.7 ms

REST Echo

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 2811.10/s 306.03 μs 3909.03/s 218.98 μs
2 5670.97/s 286.57 μs 7736.30/s 211.22 μs
4 11819.24/s 269.72 μs 15507.22/s 205.56 μs
20k 1 1034.25/s 858.18 μs 1330.06/s 647.01 μs
2 2019.86/s 812.25 μs 2646.07/s 616.95 μs
4 4487.73/s 741.65 μs 5648.02/s 572.89 μs
200k 1 132.34/s 6.46 ms 170.46/s 5.08 ms
2 257.11/s 6.41 ms 332.02/s 4.97 ms
4 554.15/s 5.84 ms 698.86/s 4.7 ms

SOAP Echo

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 1799.51/s 481.42 μs 1505.63/s 577.74 μs
2 3465.05/s 473.63 μs 2944.28/s 559.04 μs
4 6430.03/s 499.22 μs 5976.50/s 538.2 μs
20k 1 469.15/s 1.85 ms 324.53/s 2.67 ms
2 910.08/s 1.82 ms 651.07/s 2.52 ms
4 1912.26/s 1.69 ms 1406.09/s 2.31 ms
200k 1 51.58/s 16.64 ms 34.28/s 24.98 ms
2 97.32/s 16.82 ms 68.62/s 23.97 ms
4 205.81/s 15.85 ms 145.82/s 22.08 ms

MQ Coordinated Request-Reply

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 386.58/s 2.26 ms 368.81/s 2.3 ms
2 753.48/s 2.1 ms 744.75/s 2.12 ms
4 1464.18/s 2.19 ms 1512.94/s 2.13 ms
20k 1 67.54/s 12.98 ms 63.50/s 13.46 ms
2 134.45/s 11.75 ms 128.88/s 12.18 ms
4 275.15/s 11.65 ms 275.93/s 11.63 ms
200k 1 7.12/s 122.63 ms 6.78/s 127.19 ms
2 14.46/s 112.4 ms 13.89/s 116.55 ms
4 29.68/s 108.36 ms 28.83/s 111.44 ms

MQ Large Messaging

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 1576.47/s 529.18 μs 1344.53/s 623.57 μs
2 2828.71/s 562.99 μs 2510.18/s 636.62 μs
4 5095.42/s 644.64 μs 4655.89/s 702.04 μs
20k 1 255.19/s 3.27 ms 205.34/s 4.08 ms
2 463.08/s 3.48 ms 388.90/s 4.14 ms
4 817.59/s 3.95 ms 725.76/s 4.46 ms
200k 1 26.86/s 31.27 ms 21.62/s 38.91 ms
2 49.03/s 32.85 ms 41.16/s 38.89 ms
4 86.66/s 37.19 ms 76.95/s 42.11 ms

MQ Routing Cache

Test results for IIB 10.0.0.18 - ACE 11.0.0.6

                           IBM Integration Bus 10.0.0.18    IBM App Connect Enterprise 11.0.0.6
Message Size  Concurrency  Rate [1/s]    CPU cost/msg       Rate [1/s]    CPU cost/msg
2k 1 5016.86/s 162.44 μs 4763.76/s 171.19 μs
2 10711.62/s 165.3 μs 9532.47/s 173.85 μs
4 15392.57/s 196.08 μs 14772.47/s 204.44 μs
20k 1 3785.09/s 182.28 μs 3579.64/s 194.23 μs
2 5551.35/s 186.87 μs 5278.23/s 195.91 μs
4 8285.90/s 206.32 μs 8218.99/s 216.16 μs
200k 1 498.15/s 477.55 μs 489.88/s 484.59 μs
2 256.95/s 675.53 μs 259.25/s 688.38 μs
4 835.12/s 686.74 μs 830.52/s 693.06 μs

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web in "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

[{"Business Unit":{"code":"BU053","label":"Cloud & Data Platform"},"Product":{"code":"SSDR5J","label":"IBM App Connect Enterprise"},"ARM Category":[{"code":"a8m0z000000brDHAAY","label":"ACEv11"},{"code":"a8m0z000000cwQwAAI","label":"ACEv11->Performance"}],"ARM Case Number":"","Platform":[{"code":"PF002","label":"AIX"},{"code":"PF016","label":"Linux"},{"code":"PF033","label":"Windows"}],"Version":"11.0.0","Line of Business":{"code":"LOB45","label":"Automation"}}]

Document Information

Modified date:
15 June 2020

UID

ibm16228762