IBM Support see a lot of customers using the DefaultMessageListenerContainer provided by the Spring Framework to monitor JMS destinations for messages.
One of the properties of the DefaultMessageListenerContainer is receiveTimeout. This property controls how long each of the container's threads blocks on a receive call, waiting for a message to arrive on the JMS destination, before looping round and polling again. The default value is 1000ms (1 second). That default may be suitable for some JMS providers, but it can result in performance issues if the DefaultMessageListenerContainer is monitoring a JMS destination hosted on an IBM MQ queue manager.
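As an illustrative sketch (the bean names, queue manager name, channel, and queue are all hypothetical), a typical Spring XML definition of a listener container monitoring an IBM MQ queue might look like this. Because receiveTimeout is not set, the container uses its 1000ms default:

```xml
<!-- Hypothetical connection factory pointing at an IBM MQ queue manager -->
<bean id="connectionFactory" class="com.ibm.mq.jms.MQConnectionFactory">
    <property name="queueManager" value="QM1"/>
    <property name="channel" value="DEV.APP.SVRCONN"/>
    <property name="hostName" value="localhost"/>
    <property name="port" value="1414"/>
    <property name="transportType" value="1"/> <!-- 1 = CLIENT transport -->
</bean>

<!-- Listener container polling the queue with the default receiveTimeout -->
<bean id="listenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="destinationName" value="DEV.QUEUE.1"/>
    <property name="messageListener" ref="myMessageListener"/>
    <!-- receiveTimeout is not set here, so the 1000ms default applies -->
</bean>
```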
When the property is left at its default value, each thread used by the DefaultMessageListenerContainer issues an MQGET call with a 1 second wait interval, and issues another as soon as that wait expires. This constant polling drives a lot of work on the queue manager, and on the channel initiator (CHIN) for IBM MQ for z/OS queue managers, and frequently results in queue manager processes reporting 100% CPU.
Because of this, if you are considering using the DefaultMessageListenerContainer to monitor JMS destinations hosted on IBM MQ queue managers, you should increase the receiveTimeout property to a larger value such as 60000ms (1 minute). The container's threads will then make fewer MQGET calls, with longer wait intervals between each call, which reduces the load on the queue manager and CHIN process. A longer wait interval does not delay message delivery: the queue manager still completes an outstanding MQGET as soon as a message arrives on the JMS destination, so there is no danger of messages sitting on the destination undetected until the next poll. Your queue manager will perform much better too!
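As a sketch of the recommended change (bean names and queue details are hypothetical), the receiveTimeout property can be raised in the listener container's bean definition:

```xml
<bean id="listenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="destinationName" value="DEV.QUEUE.1"/>
    <property name="messageListener" ref="myMessageListener"/>
    <!-- Block for up to 60 seconds on each MQGET instead of the 1 second default -->
    <property name="receiveTimeout" value="60000"/>
</bean>
```

The equivalent programmatic call is setReceiveTimeout(60000) on the DefaultMessageListenerContainer instance.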