During a test run, we count the number of complete request-response interactions between client and server. Dividing this count by the length of the run gives the average message rate for the run. We do not verify the content of the response messages received from the server, but we discard the results of the entire test run if any transport-layer errors occur.
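
As a rough illustration, the sketch below counts completed exchanges over a single blocking connection and discards the run on any transport error. It is a minimal, single-connection stand-in, not our actual concurrent harness; host, port and request are placeholders for the system under test.

    import socket
    import time

    def run_once(host: str, port: int, request: bytes, duration_s: float):
        """Count complete request-response exchanges over one test run.

        A deliberately minimal, single-connection sketch; the real
        harness is concurrent and protocol-aware.
        """
        interactions = 0
        deadline = time.monotonic() + duration_s
        try:
            with socket.create_connection((host, port)) as sock:
                while time.monotonic() < deadline:
                    sock.sendall(request)
                    # The response is read but its content is not verified.
                    if not sock.recv(65536):
                        return None  # connection closed mid-run: discard
                    interactions += 1
        except OSError:
            return None  # transport-layer error: discard the whole run
        return interactions / duration_s  # average messages per second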

In addition, we monitor CPU utilization from within the server container using OS-provided facilities (the output of /bin/ps on Linux, Windows Management Instrumentation on Windows) in order to estimate the processing power consumed per message. Dividing the CPU time consumed by the integration server by the number of request-response interactions yields the average CPU cost of processing a message over a test run.
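
On Linux, for example, the cumulative CPU time of the server process can be sampled from the output of /bin/ps roughly as follows. This is a sketch that assumes a procps-style ps printing the cputime column as [DD-]HH:MM:SS; the helper name is ours.

    import subprocess

    def server_cpu_seconds(pid: int) -> float:
        """Cumulative CPU time of a process, read from /bin/ps."""
        out = subprocess.check_output(
            ["ps", "-o", "cputime=", "-p", str(pid)], text=True
        ).strip()
        days, _, clock = out.rpartition("-")
        hours, minutes, seconds = (int(p) for p in clock.split(":"))
        return float(int(days or 0) * 86400 + hours * 3600
                     + minutes * 60 + seconds)

    # Sampled before and after a run, the difference divided by the
    # number of interactions gives the average CPU cost per message:
    #   cost = (server_cpu_seconds(pid) - cpu_before) / interactions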

Although we report averages over test runs of several minutes, we cannot guarantee the accuracy of these figures, nor can we fully account for differences in how the various operating systems report CPU usage.

We ultimately consider the following outputs from a test run:

  • Number of messages processed
  • CPU time consumed by the integration server process

For each test case and each combination of message size and concurrency, we conduct approximately 50 test runs and report the median over those runs of the following two metrics (a sketch of this aggregation follows the list):

  • Average message rate over the test run
  • Average CPU cost per message over the test run
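
The aggregation step itself is simple; as a sketch (the per-run record layout here is hypothetical):

    from statistics import median

    def summarize(runs):
        """Median of per-run metrics over ~50 runs.

        runs is a list of (message_rate, cpu_cost) pairs, one pair
        per successful test run.
        """
        rates, costs = zip(*runs)
        return median(rates), median(costs)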
