During a test run, we count the number of complete request-response interactions between client and server. Dividing this count by the length of the run gives the average message rate for the run. We do not verify the content of the response messages received from the server, but we discard the results of an entire test run if any errors occur in the transport layer.
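As an illustration, the sketch below (Python) computes this per-run metric under the assumption that the harness records the interaction count, the run duration, and a transport-error counter; the RunResult structure and its field names are hypothetical and not part of our tooling.

    # Minimal sketch: average message rate for one test run, with the run
    # discarded entirely if any transport-layer errors were observed.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class RunResult:
        interactions: int        # complete request-response interactions counted
        duration_seconds: float  # length of the test run
        transport_errors: int    # errors observed in the transport layer

    def average_message_rate(run: RunResult) -> Optional[float]:
        """Return messages per second, or None if the run is discarded."""
        if run.transport_errors > 0:
            return None  # any transport-layer error invalidates the whole run
        return run.interactions / run.duration_seconds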

We monitor CPU utilization on the server using OS-provided facilities: the output of /bin/ps and the /proc filesystem on Linux and AIX, or Windows Management Instrumentation on Windows. The aim is to estimate the processing power consumed per message. Dividing the CPU time consumed by the integration server by the number of request-response interactions yields the average CPU cost of processing a message over a test run.
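On Linux, one way to obtain this figure is to read the process's accumulated user and system CPU time from /proc/<pid>/stat, as in the Linux-only sketch below; this is an illustration rather than the exact tooling used in our tests. The pid and interactions inputs are assumed to be supplied by the test harness, and in practice the CPU time would be taken as the difference between readings at the start and end of the run.

    # Minimal Linux-only sketch: read utime + stime from /proc/<pid>/stat,
    # convert clock ticks to seconds, and divide by the message count.
    import os

    def process_cpu_seconds(pid: int) -> float:
        """Total user + system CPU time consumed by the process, in seconds."""
        with open(f"/proc/{pid}/stat") as f:
            fields = f.read().rsplit(")", 1)[1].split()  # skip pid and (comm)
        utime_ticks = int(fields[11])  # field 14 of /proc/<pid>/stat
        stime_ticks = int(fields[12])  # field 15 of /proc/<pid>/stat
        return (utime_ticks + stime_ticks) / os.sysconf("SC_CLK_TCK")

    def cpu_cost_per_message(pid: int, interactions: int) -> float:
        """Average CPU seconds spent per request-response interaction."""
        return process_cpu_seconds(pid) / interactions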

We restrict the CPU capacity of the test server that is available to the software system under test, using OS-provided facilities such as CPU affinity masks on Linux and Windows, or Workload Manager on AIX.
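On Linux, for example, a process can be pinned to a subset of logical CPUs through a scheduler affinity call, as sketched below; this illustrates the idea rather than reproducing our setup. AIX Workload Manager and Windows affinity masks expose equivalent controls through different interfaces, and the PID and CPU set shown are placeholders.

    # Minimal Linux sketch: pin a process to a fixed subset of logical CPUs.
    import os

    def restrict_to_cpus(pid: int, cpus: set[int]) -> None:
        """Limit the given process to the listed logical CPUs."""
        os.sched_setaffinity(pid, cpus)

    # Example: allow the integration server (hypothetical PID 1234) to run
    # only on CPUs 0 and 1, i.e. a quarter of an 8-CPU machine.
    # restrict_to_cpus(1234, {0, 1})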

Although we report average values over test runs lasting several minutes, we cannot guarantee the accuracy of these figures, nor can we account for potential differences in how the various operating systems report them.

We ultimately consider the following outputs from a test run:

  • Number of messages processed
  • CPU time consumed by the integration server process

For each test case and each combination of message size and concurrency, we conduct at least 50 test runs and report the medians, across those runs, of the following two metrics (see the sketch after the list):

  • Average message rate over the test run
  • Average CPU cost per message over the test run
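
The sketch below illustrates this aggregation step, assuming each run's raw outputs are available as a simple record; the RunOutput structure is hypothetical.

    # Illustrative aggregation: compute the per-run metrics from the raw
    # outputs and report their medians across all runs of a configuration.
    from dataclasses import dataclass
    from statistics import median

    @dataclass
    class RunOutput:
        messages: int            # messages processed during the run
        cpu_seconds: float       # CPU time consumed by the integration server
        duration_seconds: float  # length of the test run

    def summarize(runs: list[RunOutput]) -> tuple[float, float]:
        """Return (median message rate, median CPU cost per message)."""
        rates = [r.messages / r.duration_seconds for r in runs]
        costs = [r.cpu_seconds / r.messages for r in runs]
        return median(rates), median(costs)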
