
IBM App Connect Enterprise V11 container performance reports


Results of a comparative performance evaluation of IBM App Connect Enterprise Version 11.0.0.4 against IBM Integration Bus Version 10.0.0.16 in Docker container environments on Linux and Windows.

Author:  
First published on 31 May 2019 in IBM® Developer / Integration


In the following, we present the results of a comparative performance evaluation of IBM App Connect Enterprise Version 11.0.0.4 against IBM Integration Bus Version 10.0.0.16 in Docker container environments on Windows and Linux®. There are numerous advantages to using containers for running integration workloads, including but not limited to ease of administration and improved resource use. Because of these advantages, there is considerable interest in how these products behave in container environments. This report aims to show the differences in performance when using state-of-the-art virtualization technology.

Before using performance information, be sure to read the general information under Notices.

Notices

The information provided in the performance report pages illustrates the key processing characteristics of IBM App Connect Enterprise. It is intended for architects, systems programmers, analysts, and programmers wanting to understand the performance characteristics of IBM App Connect Enterprise. The data provided will assist you with sizing solutions. Note that it is assumed that the reader is familiar with the concepts and operation of IBM App Connect Enterprise.

This information was obtained by measuring the message throughput for a number of different types of message processing. The term ‘message’ is used in a generic sense, and can mean any request or response into or out of an integration server, regardless of the transport or protocol.

The performance data presented in the reports was measured in a controlled environment and any results obtained in other environments might vary significantly. For more details on the measurement methodologies and environments used, see the Evaluation Methodology and Test Environment sections of this document.

The performance measurements focus on the throughput capabilities of the integration server that uses different message formats and processing node types. The aim of the measurements is to help you understand the following aspects:

  • The rate at which messages can be processed in different situations
  • The relative costs of the different node types and approaches to message processing.

Direct comparisons between the test results in this report and what might appear to be similar tests in previous performance reports are not possible, for several reasons:

  • The contents of the test messages are significantly different, as is the processing performed in the tests.
  • In many cases the hardware, operating system, and prerequisite software are also different, making any direct comparison invalid.

In many of the tests, the user logic is minimal and the results represent the best throughput that can be achieved for that node type. Consider this aspect when sizing IBM App Connect Enterprise.

References to IBM products or programs do not imply that IBM intends to make these available in all countries in which IBM operates. Information contained in this report has not been submitted to any formal IBM test and is distributed ‘as is’. The use of this information and the implementation of any of the techniques is the responsibility of the customer. Much depends on the ability of the customer to evaluate this data and project the results to their operational environment.

Evaluation Methodology

We use point-to-point message processing over HTTP to evaluate the performance of the software systems under test. The open source PerfHarness tool maintains a number of concurrent HTTP/TCP connections to the server; on each connection it repeatedly sends a prefabricated HTTP request message and then waits for, receives, and discards the reply. Once the reply has been received, a new request with the same payload is sent on the same connection.

The connections are kept open for a predefined number of request-response interactions, as allowed by the HTTP persistent connections scheme. Once that number is reached, the connection is closed and a new one is opened in its place. This process is repeated on each connection until a predefined time interval runs out, at which point all connections are closed regardless of how many requests have been sent over them. If a request is in flight on a connection, the connection is closed only after its reply has been received; there can be at most one such request, as we do not pipeline requests.

We refer to the collective of all request-response interactions that have taken place during this predefined time interval as a test run.
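
As an illustration of this interaction pattern, the following minimal Python sketch approximates the client behaviour described above. It is not PerfHarness itself; the host name, listener port, endpoint path, and payload are placeholders, while the 150-second run length and 5000 requests per connection are the test parameters given under Test Scenarios later in this report.

    import http.client
    import time

    SERVER_HOST = "server"              # assumed host name of the server container
    SERVER_PORT = 7800                  # assumed embedded HTTP listener port
    SERVER_PATH = "/test"               # assumed flow endpoint
    REQUEST_BODY = b"<root>...</root>"  # prefabricated payload, reused for every request
    RUN_SECONDS = 150                   # length of a test run
    REQUESTS_PER_CONNECTION = 5000      # requests sent over one persistent connection

    def run_one_client():
        """One client thread: keep a persistent HTTP connection open, recycle it after
        REQUESTS_PER_CONNECTION requests, and stop once the run interval has elapsed."""
        deadline = time.monotonic() + RUN_SECONDS
        completed = 0
        sent_on_connection = 0
        conn = http.client.HTTPConnection(SERVER_HOST, SERVER_PORT)
        while time.monotonic() < deadline:
            conn.request("POST", SERVER_PATH, body=REQUEST_BODY)
            conn.getresponse().read()        # wait for, receive, and discard the reply
            completed += 1
            sent_on_connection += 1
            if sent_on_connection >= REQUESTS_PER_CONNECTION:
                conn.close()                 # recycle the persistent connection
                conn = http.client.HTTPConnection(SERVER_HOST, SERVER_PORT)
                sent_on_connection = 0
        conn.close()
        return completed                     # contributes to the message count of the run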

Containers

Tests are run using a simple two-container setup: a client container connected to a server container using IPv4 over Docker’s virtual networking capabilities. The two containers are executing on the same physical host and their lifecycles are managed and synchronized externally through scripting facilities provided by the host operating system.

The client container runs PerfHarness as a standalone Java™ application using the Java Virtual Machine shipped with the product under evaluation. Apart from this difference, the two products are tested using identical client configurations for each combination of test parameters. We place no artificial restrictions on resource usage by the client container and aim to run in a sufficiently overprovisioned environment so that the underlying hardware resources are never exhausted.

The server container runs the product under test in a configuration suitable for the test case and combination of parameters being evaluated. The suitable configuration will depend on the product as well as the test case and parameters; certain configuration options may be available in one product but not in the other. We use equivalent configurations where possible. We further rely on Docker’s resource limiting capabilities to restrict CPU usage by the server, but impose no other resource constraints. We aim to keep the resource usage of the server well below the physical host’s limits.

Figure: container environment used for the tests

Both client and server containers are created from pre-built images containing the appropriate client or server product versions. We use separate images for IBM Integration Bus Version 10.0.0.16 server, its corresponding test client, and IBM App Connect Enterprise Version 11.0.0.4 server and its corresponding test client. Containers are kept only for the duration of the test run; they are discarded afterwards and new ones are created for subsequent runs.
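
As an illustration only, the following sketch using the Docker SDK for Python shows how such a two-container setup with a CPU-limited server might be scripted. The image names, network name, and core count are placeholders and do not correspond to the actual images used in these tests.

    import docker

    client = docker.from_env()
    cores = 4  # virtual CPU cores granted to the server container for this run

    # Private bridge network carrying IPv4 traffic between the two containers.
    net = client.networks.create("perf-test-net", driver="bridge")

    # Server container: CPU usage limited through Docker; no other resource constraints.
    server = client.containers.run(
        "ace-server:11.0.0.4",              # assumed server image name
        detach=True, name="server", network="perf-test-net",
        nano_cpus=cores * 1_000_000_000)    # equivalent of "docker run --cpus=<cores>"

    # Client container: runs PerfHarness; deliberately left unconstrained.
    perf_client = client.containers.run(
        "perfharness-client:latest",        # assumed client image name
        detach=True, name="client", network="perf-test-net")

    # ... wait for the test run to complete and collect its results ...

    # Containers are discarded after the run; fresh ones are created for the next run.
    for c in (perf_client, server):
        c.remove(force=True)
    net.remove()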

Although a comparison of the respective performance characteristics of running the products in containers against those of running them on bare metal hardware is outside the scope of this document, we anticipate that the use of containers presents a small, possibly negligible overhead on the CPU usage figures that will likely not be reflected in our measurements, and that there may be a larger overhead associated with performing I/O operations such as the network activity associated with transferring messages over HTTP [1]. We expect that the products running inside containers will generally not perform better than the same products running on bare metal hardware.

References

[1] Felter, W., Ferreira, A., Rajamony, R., Rubio, J. An Updated Performance Comparison of Virtual Machines and Linux Containers. Technical Report RC25482 (AUS1407-001), July 21, 2014. IBM Research Division, Austin, TX.

Reporting Performance Metrics

During a test run, we count the complete request-response interactions that have taken place between client and server. We compute the average message rate for the run by dividing this figure by the length of the run. Though we do not verify the content of response messages received from the server, we discard the results of the entire test run if any errors occur in the transport layer.

In addition, we monitor CPU utilization from within the server container using OS-provided facilities, such as the output of /bin/ps on Linux or Windows Management Instrumentation on Windows, with the aim of providing estimates of the processing power consumed per message. The CPU time consumed by the integration server, divided by the number of request-response interactions, yields the average CPU cost of processing a message over a test run.

Though we report average values over test runs of several minutes, there is no way to guarantee the accuracy of these figures or to account for potential differences in the reporting process on different operating systems.

We ultimately consider the following outputs from a test run:

  • Number of messages processed
  • CPU time consumed by the integration server process

For each test case and combination of message size and concurrency, we conduct approximately 50 test runs and report the median over those runs of the following two metrics:

  • Message rate average over the test run
  • Average CPU cost per message over the test run
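
The following minimal Python sketch illustrates how the reported figures are derived from these outputs; the input values are samples, not measured data.

    from statistics import median

    RUN_SECONDS = 150

    def run_metrics(messages_completed, server_cpu_seconds):
        """Per-run metrics: average message rate and average CPU cost per message."""
        rate = messages_completed / RUN_SECONDS             # messages per second
        cpu_cost = server_cpu_seconds / messages_completed  # CPU seconds per message
        return rate, cpu_cost

    # One (message count, server CPU seconds) pair per test run; sample values only.
    collected_runs = [(300000, 110.0), (298500, 109.2), (301200, 111.4)]

    runs = [run_metrics(messages, cpu) for messages, cpu in collected_runs]
    reported_rate = median(rate for rate, _ in runs)      # median message rate
    reported_cpu_cost = median(cost for _, cost in runs)  # median CPU cost per message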

Test Environment

The container-based test environment has been replicated on the following two system configurations:

  • Red Hat® Enterprise Linux Server 7.5 host with Docker 1.13.1 running Ubuntu 18.04 guest images.
  • Windows Server 2016 Datacenter with Docker Desktop Community 2.0.0.3 (31259), Engine 18.09.2 running Windows Server Core ltsc2016 guest images.

Both software configurations run on IBM System x3650 M3 hardware with two six-core Intel Xeon X5660 CPUs running at 2.8 GHz with hyperthreading disabled. The test systems have 32 GB of RAM installed.

We compare the performance of the following two software products, testing each on both of the above two system configurations:

  • IBM Integration Bus Version 10.0.0.16
  • IBM App Connect Enterprise Version 11.0.0.4 + LAD76278

    We note that for the purposes of this report, interim fix LAD76278 is installed on top of IBM App Connect Enterprise Version 11.0.0.4. This fix is planned for inclusion in the next maintenance release of IBM App Connect Enterprise Version 11. To ensure the verifiability of the results presented here, we have made this fix available for download on Fix Central for Windows and Linux. We do not intend to build this fix in combination with other fixes. Customers wishing to benefit from the functionality of LAD76278 in combination with other fixes should wait for and upgrade to the release of 11.0.0.5.

Optimization and Tuning

Generally, we aim to test the products in their default configurations unless the defaults are not suitable for the test. In our containerized environment, however, there are two concurrency-related settings that we change from their default values.

Following best practices for serving HTTP traffic with the software running in containers, all test workloads are submitted to the integration servers under test through their embedded HTTP listeners. To make best use of the available CPU capacity, we tune the "ListenerThreads" property of the HTTPConnector Resource Manager in the integration server, configurable via the server.conf.yaml configuration file. This setting determines the number of threads used by the HTTP listener to serve HTTP traffic. We set this value to 2 on Linux and 1 on Windows. The setting applies only to the tests running on IBM App Connect Enterprise Version 11; on IBM Integration Bus Version 10, all listener configuration parameters take their default values.

With the goal of efficiently using the CPU, we also tune the "Additional instances" property of message flows under the Workload Management group of configuration options, which we always set in line with the concurrency requirements of the test being performed. For each test, we set the total number of message flow instances to three times the number of virtual CPU cores configured on the server container. For our tests, this is a reasonable compromise between excessive context switching overhead and having too few threads to serve the inbound workload.

Test Scenarios

We have designed a number of test cases, described below, to evaluate the performance of the various functional sub-units of the products under test. We perform all test runs in 3 different concurrency configurations, using 3 different message sizes in each (9 configurations altogether for each test scenario):

  • 1, 2, and 4 virtual CPU cores constrained by Docker’s resource limiting capabilities.
  • 2k, 20k, and 200k request messages in the format appropriate for the test case, XML or JSON. The message sizes represent the size of the input messages in the XML format. As the JSON input messages essentially represent the same data as their XML counterparts, they are significantly smaller (approximately 1.3k, 13k, and 130k). Nevertheless, for the sake of simplicity, we will refer to them using the XML message sizes. We also note that we use a different set of XML messages for SOAP interactions than for raw XML over HTTP and that the smallest SOAP message is closer to 4k in size.

Furthermore, all test runs are subject to the following parameters:

  • Test runs are 150 seconds long.
  • The client sends at most 5000 requests per persistent HTTP connection.
  • The client runs 4 times as many client threads (and hence, simultaneous persistent HTTP connections) as the number of virtual server cores. We set the number of flow instances on the server to 3 times the number of virtual server cores.
  • In IBM Integration Bus Version 10.0.0.16, we create a single Integration Node with a single Integration Server, and deploy a pre-packaged BAR file containing a single message flow with its supporting artifacts. In App Connect Enterprise Version 11.0.0.4, we deploy a similar BAR file to a Standalone Integration Server for the purpose of the test. Although we test the two products using the same set of deployable artifacts, we rebuild each of them using the Integration Toolkit that ships with the product.

For example, for a test run using 4 cores, PerfHarness runs 16 client threads, each maintaining (and recycling as appropriate) a single persistent HTTP connection throughout the duration of the test run, against a server running 12 message flow instances (Additional instances property set to 11).
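
The following small Python sketch captures this arithmetic and reproduces the 4-core example above.

    def concurrency_settings(virtual_cores):
        client_threads = 4 * virtual_cores         # persistent HTTP connections on the client
        flow_instances = 3 * virtual_cores         # total message flow instances on the server
        additional_instances = flow_instances - 1  # value of the "Additional instances" property
        return client_threads, flow_instances, additional_instances

    print(concurrency_settings(4))  # -> (16, 12, 11)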

We also provide a Project Interchange (PI) file of the above test artifacts, as well as the complete set of test messages used in this report, for reference.

HTTP BLOB Echo

Consisting only of an HTTP Input - HTTP Reply node pair in a single message flow, the HTTP BLOB Echo scenario tests the performance of the HTTP transport layer by echoing the HTTP request body received through the HTTP Input node back as the response through the HTTP Reply node. To minimize the processing overhead on top of what is required for the HTTP transport itself, messages are processed using the BLOB parser.

Figure: HTTP BLOB Echo message flow, consisting of an HTTP Input node and an HTTP Reply node

HTTP XMLNSC Echo

Similarly to the HTTP BLOB Echo scenario, the HTTP XMLNSC Echo test makes use of a single message flow with an HTTP Input - HTTP Reply node pair to echo a request message back to the client untransformed; however, it uses the XMLNSC parser with the "Parse timing" configuration option set to "Complete" in order to evaluate the performance of the XMLNSC parser. As the cost of the HTTP transport is also included in the results from this test, they should be viewed in comparison to the HTTP BLOB Echo results.

Figure: HTTP XMLNSC Echo message flow, consisting of an HTTP Input node and an HTTP Reply node

HTTP XMLNSC ESQL Transformation

Manipulating messages in the XMLNSC domain, the HTTP XMLNSC ESQL Transformation test case adds an ESQL compute node to the data flow of the HTTP XMLNSC Echo test to perform a simple operation on the message body that requires the entire input message to be parsed: it adds up numeric values scattered throughout the message and writes the sum into the XMLNSC response message. This test case evaluates the performance of a combination of HTTP transport, parsing, serialization, and ESQL transformation in the XMLNSC domain. Its results are best interpreted alongside the results of the previous two scenarios.

Figure: HTTP XMLNSC ESQL Transformation message flow, adding an ESQL compute node between the HTTP Input and HTTP Reply nodes
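
The flow implements this logic in an ESQL Compute node; purely as an illustration of the kind of work performed, the following Python sketch parses an XML input in full, adds up the numeric values scattered through it, and writes the sum into a response message. The element names are hypothetical and do not reflect the actual test messages.

    import xml.etree.ElementTree as ET

    def sum_values(request_xml: str) -> str:
        """Parse the whole input, add up the numeric values, return an XML response."""
        root = ET.fromstring(request_xml)                       # full parse of the input
        total = sum(float(e.text) for e in root.iter("Value"))  # hypothetical element name
        return "<Response><Sum>{:g}</Sum></Response>".format(total)

    print(sum_values("<Order><Value>1.5</Value><Item><Value>2.5</Value></Item></Order>"))
    # -> <Response><Sum>4</Sum></Response>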

HTTP XMLNSC Mapper Transformation

The HTTP XMLNSC Mapper Transformation test case is broadly similar to the HTTP XMLNSC ESQL Transformation scenario; however, it uses a Mapping node to transform the input message, performing simple arithmetic on an XML array within the message and producing an XML response with the results. The test case exercises HTTP transport, parsing, serialization, and message transformation using graphical mapping in the XMLNSC domain.

Figure: HTTP XMLNSC Mapper Transformation message flow, using a Mapping node to transform the input message

REST Echo

The REST Echo test case tests the HTTP transport layer as well as other basic functionality associated with REST APIs such as the JSON parser and the REST router. The ESQL compute node appearing in the implementation of the only REST operation of the API performs a message copy from the request to the response. Furthermore, in order to force a full message parse without having to change the default parse timing setting on the HTTP Input node embedded in the REST API implementation, it modifies the last element of the request message.

Figure: REST Echo test case, implemented as a REST API with a single operation containing an ESQL compute node

SOAP Echo

The SOAP Echo test case exercises the HTTP transport layer as well as the SOAP implementation. The "Parse timing" configuration option is set to "Complete" on the SOAP Input node, causing the entire input message to be parsed so as to measure the performance of SOAP parsing and serialization. The SOAP Reply node in the only message flow of the test case is preceded by an ESQL transformation node whose sole function is to transfer the message body from the SOAP request envelope in the inbound message to the SOAP response envelope in the reply.

Figure: SOAP Echo message flow, consisting of SOAP Input, Compute, and SOAP Reply nodes

Results

This section presents the numerical results of the tests on the two platforms they were run on.

Linux

This section presents the numerical results of the six test scenarios on Linux.

HTTP BLOB Echo

Test results for IIB 10.0.0.16 - ACE 11.0.0.4

Message Size  Concurrency  IIB 10.0.0.16                               ACE 11.0.0.4 + LAD76278
                           Message Rate [1/s]  CPU Cost per Message    Message Rate [1/s]  CPU Cost per Message
2k 1 1998.06/s 476.71 μs 2761.20/s 363.13 μs
2 5088.72/s 382.01 μs 6040.22/s 331.64 μs
4 11386.17/s 346.02 μs 12557.68/s 318.56 μs
20k 1 1446.23/s 657.77 μs 2198.93/s 455.97 μs
2 3663.90/s 531.19 μs 4924.27/s 406.86 μs
4 8304.51/s 474.78 μs 9974.12/s 400.84 μs
200k 1 519.41/s 1.83 ms 787.66/s 1.27 ms
2 984.30/s 1.95 ms 1603.88/s 1.25 ms
4 1875.44/s 2.1 ms 2516.69/s 1.32 ms

HTTP XMLNSC Echo

Test results for IIB 10.0.0.16 - ACE 11.0.0.4

Message Size  Concurrency  IIB 10.0.0.16                               ACE 11.0.0.4 + LAD76278
                           Message Rate [1/s]  CPU Cost per Message    Message Rate [1/s]  CPU Cost per Message
2k 1 1652.62/s 575.71 μs 2254.32/s 444.85 μs
2 4126.07/s 470.92 μs 4824.27/s 415.13 μs
4 9028.42/s 436.45 μs 9675.85/s 413.42 μs
20k 1 698.09/s 1.37 ms 937.59/s 1.07 ms
2 1563.11/s 1.24 ms 1922.66/s 1.04 ms
4 3390.37/s 1.16 ms 3766.18/s 1.06 ms
200k 1 98.14/s 9.68 ms 128.43/s 7.82 ms
2 206.67/s 9.41 ms 245.67/s 8.16 ms
4 400.71/s 9.86 ms 458.96/s 8.73 ms

HTTP XMLNSC ESQL Transformation

Test results for IIB 10.0.0.16 - ACE 11.0.0.4

Message Size  Concurrency  IIB 10.0.0.16                               ACE 11.0.0.4 + LAD76278
                           Message Rate [1/s]  CPU Cost per Message    Message Rate [1/s]  CPU Cost per Message
2k 1 1069.32/s 888.59 μs 1353.83/s 740.61 μs
2 2464.50/s 788.58 μs 2864.31/s 698.7 μs
4 5319.77/s 740.79 μs 5730.41/s 698.06 μs
20k 1 271.68/s 3.52 ms 324.34/s 3.09 ms
2 564.24/s 3.45 ms 641.14/s 3.13 ms
4 1172.05/s 3.37 ms 1272.79/s 3.15 ms
200k 1 30.52/s 31.26 ms 35.70/s 28.08 ms
2 62.42/s 31.13 ms 68.80/s 29.12 ms
4 121.70/s 32.39 ms 131.76/s 30.44 ms

HTTP XMLNSC Mapper Transformation

Test results for IIB 10.0.0.16 - ACE 11.0.0.4

Message Size  Concurrency  IIB 10.0.0.16                               ACE 11.0.0.4 + LAD76278
                           Message Rate [1/s]  CPU Cost per Message    Message Rate [1/s]  CPU Cost per Message
2k 1 523.68/s 1.86 ms 589.88/s 1.7 ms
2 794.94/s 1.71 ms 899.53/s 1.54 ms
4 889.11/s 1.6 ms 897.77/s 1.58 ms
20k 1 92.67/s 10.38 ms 104.52/s 9.71 ms
2 127.78/s 9.79 ms 139.18/s 8.96 ms
4 143.02/s 9.06 ms 137.44/s 9.33 ms
200k 1 9.48/s 101.56 ms 10.52/s 95.66 ms
2 13.04/s 96.57 ms 13.89/s 90.71 ms
4 14.07/s 92.73 ms 13.78/s 93.54 ms

REST Echo

Test results for IIB 10.0.0.16 - ACE 11.0.0.4

Message Size  Concurrency  IIB 10.0.0.16                               ACE 11.0.0.4 + LAD76278
                           Message Rate [1/s]  CPU Cost per Message    Message Rate [1/s]  CPU Cost per Message
2k 1 1239.60/s 767.52 μs 1732.24/s 578.94 μs
2 2919.28/s 665.47 μs 3764.44/s 530.97 μs
4 6308.69/s 624.94 μs 7434.89/s 537.97 μs
20k 1 625.77/s 1.52 ms 875.08/s 1.15 ms
2 1405.80/s 1.38 ms 1791.58/s 1.12 ms
4 2999.69/s 1.31 ms 3436.38/s 1.16 ms
200k 1 113.78/s 8.43 ms 143.84/s 6.98 ms
2 227.85/s 8.55 ms 272.14/s 7.36 ms
4 415.11/s 9.51 ms 449.70/s 8.91 ms

SOAP Echo

Test results for IIB 10.0.0.16 - ACE 11.0.0.4

Message Size  Concurrency  IIB 10.0.0.16                               ACE 11.0.0.4 + LAD76278
                           Message Rate [1/s]  CPU Cost per Message    Message Rate [1/s]  CPU Cost per Message
2k 1 773.83/s 1.23 ms 787.50/s 1.27 ms
2 1719.39/s 1.14 ms 1687.33/s 1.19 ms
4 3636.99/s 1.08 ms 3445.12/s 1.16 ms
20k 1 282.45/s 3.37 ms 305.76/s 3.28 ms
2 601.88/s 3.25 ms 623.51/s 3.21 ms
4 1220.03/s 3.23 ms 1236.36/s 3.23 ms
200k 1 35.81/s 26.6 ms 39.00/s 25.69 ms
2 74.39/s 26.28 ms 77.35/s 25.89 ms
4 140.75/s 28.02 ms 145.72/s 27.49 ms

Windows

This section presents the numerical results of the six test scenarios on Windows.

HTTP BLOB Echo

Test results for IIB 10.0.0.16 - ACE 11.0.0.4

Message Size  Concurrency  IIB 10.0.0.16                               ACE 11.0.0.4 + LAD76278
                           Message Rate [1/s]  CPU Cost per Message    Message Rate [1/s]  CPU Cost per Message
2k 1 1387.45/s 315.59 μs 1307.12/s 260.74 μs
2 4212.70/s 535.2 μs 3963.07/s 501.89 μs
4 9715.74/s 858.38 μs 11973.35/s 932.32 μs
20k 1 1300.10/s 1.49 ms 1278.15/s 1.06 ms
2 2968.64/s 1.82 ms 4130.18/s 1.58 ms
4 6231.31/s 2.07 ms 8043.01/s 2 ms
200k 1 295.48/s 7.26 ms 503.65/s 6.77 ms
2 679.16/s 11.76 ms 1087.63/s 9.72 ms
4 1320.14/s 13.49 ms 1856.17/s 12.61 ms

HTTP XMLNSC Echo

Test results for IIB 10.0.0.16 - ACE 11.0.0.4

Message Size  Concurrency  IIB 10.0.0.16                               ACE 11.0.0.4 + LAD76278
                           Message Rate [1/s]  CPU Cost per Message    Message Rate [1/s]  CPU Cost per Message
2k 1 720.65/s 290.56 μs 724.39/s 232.2 μs
2 3267.69/s 591.25 μs 3275.30/s 517.88 μs
4 7977.81/s 932.37 μs 9365.84/s 1.01 ms
20k 1 656.65/s 2.17 ms 470.87/s 689.89 μs
2 1413.46/s 2.48 ms 1736.20/s 1.86 ms
4 2934.51/s 2.65 ms 3457.92/s 2.53 ms
200k 1 89.14/s 14.21 ms 120.67/s 11.3 ms
2 178.84/s 16.96 ms 233.41/s 13.29 ms
4 359.22/s 19.14 ms 425.49/s 17.4 ms

HTTP XMLNSC ESQL Transformation

Test results for IIB 10.0.0.16 - ACE 11.0.0.4

Message Size  Concurrency  IIB 10.0.0.16                               ACE 11.0.0.4 + LAD76278
                           Message Rate [1/s]  CPU Cost per Message    Message Rate [1/s]  CPU Cost per Message
2k 1 556.25/s 359.22 μs 602.69/s 299.5 μs
2 1861.97/s 727.15 μs 1939.80/s 644.34 μs
4 4563.21/s 1.24 ms 5330.61/s 1.29 ms
20k 1 242.80/s 4.23 ms 300.67/s 2.95 ms
2 517.67/s 4.32 ms 619.33/s 3.52 ms
4 1066.89/s 4.54 ms 1214.19/s 3.75 ms
200k 1 26.81/s 39.11 ms 34.55/s 29.19 ms
2 56.23/s 43.46 ms 66.72/s 33 ms
4 114.51/s 40.31 ms 125.29/s 37.94 ms

HTTP XMLNSC Mapper Transformation

Test results for IIB 10.0.0.16 - ACE 11.0.0.4

Message Size  Concurrency  IIB 10.0.0.16                               ACE 11.0.0.4 + LAD76278
                           Message Rate [1/s]  CPU Cost per Message    Message Rate [1/s]  CPU Cost per Message
2k 1 388.13/s 1.65 ms 383.03/s 1.88 ms
2 730.22/s 2.03 ms 782.95/s 2.09 ms
4 786.71/s 2.2 ms 850.65/s 2.24 ms
20k 1 90.65/s 11.28 ms 90.19/s 11.82 ms
2 126.53/s 10.73 ms 127.68/s 10.62 ms
4 129.09/s 11.02 ms 128.97/s 10.93 ms
200k 1 9.02/s 99.26 ms 9.29/s 104.22 ms
2 13.08/s 96.68 ms 12.96/s 95.83 ms
4 13.09/s 101.06 ms 12.91/s 99.03 ms

REST Echo

Test results for IIB 10.0.0.16 - ACE 11.0.0.4

Message Size  Concurrency  IIB 10.0.0.16                               ACE 11.0.0.4 + LAD76278
                           Message Rate [1/s]  CPU Cost per Message    Message Rate [1/s]  CPU Cost per Message
2k 1 623.56/s 335.17 μs 659.82/s 188.34 μs
2 2126.02/s 605.73 μs 2370.85/s 524.75 μs
4 5292.89/s 1.15 ms 6871.60/s 1.15 ms
20k 1 548.00/s 2.33 ms 484.50/s 912.18 μs
2 1163.27/s 2.48 ms 1400.35/s 1.4 ms
4 2423.52/s 2.59 ms 3014.05/s 2.19 ms
200k 1 87.86/s 12.89 ms 123.53/s 13.18 ms
2 174.59/s 15.02 ms 235.04/s 11.52 ms
4 324.50/s 17.38 ms 379.70/s 15.71 ms

SOAP Echo

Test results for IIB 10.0.0.16 - ACE 11.0.0.4

Message Size  Concurrency  IIB 10.0.0.16                               ACE 11.0.0.4 + LAD76278
                           Message Rate [1/s]  CPU Cost per Message    Message Rate [1/s]  CPU Cost per Message
2k 1 539.57/s 728.29 μs 529.87/s 826.54 μs
2 1380.36/s 993.32 μs 1447.86/s 1.19 ms
4 3370.31/s 1.53 ms 3205.07/s 1.76 ms
20k 1 259.12/s 4.25 ms 285.89/s 3.09 ms
2 557.41/s 4.42 ms 609.00/s 4.32 ms
4 1139.89/s 4.64 ms 1129.31/s 4.74 ms
200k 1 33.52/s 32.06 ms 39.75/s 27.91 ms
2 70.28/s 32.59 ms 77.42/s 29.17 ms
4 134.15/s 36.23 ms 137.77/s 36.99 ms

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at Copyright and trademark information at www.ibm.com/legal/copytrade.shtml.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliate.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

[{"Business Unit":{"code":"BU053","label":"Cloud & Data Platform"},"Product":{"code":"SSDR5J","label":"IBM App Connect Enterprise"},"ARM Category":[{"code":"a8m0z000000TN3pAAG","label":"ACEv11->ACE on Docker"},{"code":"a8m0z000000cwQwAAI","label":"ACEv11->Performance"}],"ARM Case Number":"","Platform":[{"code":"PF016","label":"Linux"},{"code":"PF033","label":"Windows"}],"Version":"11.0.0","Line of Business":{"code":"LOB45","label":"Automation"}}]

Document Information

Modified date:
08 July 2020

UID

ibm16232492