
IBM Developer Blog


Use KEDA to enhance your cloud-native messaging applications with improved scaling capabilities

Great news: you’ve just written your first messaging application with IBM MQ. Your messaging application is well encapsulated, you’ve followed reactive principles, and you’re ready to deploy it to your cloud service. Your code is elegant: it takes a message from a queue, performs a task, and then moves on to the next one. Your application will doubtless be efficient, consuming only small amounts of CPU and memory.
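The loop described above, take a message, perform a task, move on, can be sketched with an in-memory queue standing in for MQ. (A real application would use an MQ client library; the names here are purely illustrative.)

```python
import queue

def process(message: str) -> str:
    # Stand-in for the real unit of work performed per message
    return message.upper()

def consume(q: "queue.Queue[str]") -> list[str]:
    """Drain the queue one message at a time, mirroring the
    take-a-message, do-the-task, move-on pattern."""
    results = []
    while True:
        try:
            msg = q.get_nowait()
        except queue.Empty:
            break  # nothing left to do; a real consumer would block and wait
        results.append(process(msg))
        q.task_done()
    return results

q: "queue.Queue[str]" = queue.Queue()
for m in ("hello", "world"):
    q.put(m)
print(consume(q))  # ['HELLO', 'WORLD']
```

A single instance of this loop is cheap to run, which is exactly why CPU-based autoscaling struggles with it, as discussed next.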

As your app runs natively in the cloud, you can expect container orchestration to provide a basic autoscaling mechanism for free: if the container starts to get busy, Kubernetes will step in to provision more instances of the app. However, in this scenario we have a different problem. While the system is busy and the app is working as hard as it can, its CPU and memory consumption remains low, so the autoscaler won’t detect that messages are backing up on a queue. This can result in a noticeable delay in response times as the increased load goes unrecognized, or, in the worst case, a full queue that can no longer receive new messages.

The need for autoscaling

Why would messages build up, and why does this matter? Asynchronous messaging allows applications to be decoupled. If one part of the system becomes busy, it can work independently from the rest of the system. The queue acts as a shock absorber, allowing the application components to work at different speeds. This decoupling makes messaging ideal for cloud applications and microservices, whose components may be distributed across geographical locations.

This is great for each application component, but an end user might start to see response times deteriorate. Ideally, we need some way of telling the autoscaler that our queue is filling up and that it should start scaling our application.

This is where KEDA comes in.

KEDA is an open-source project centered around Kubernetes-based event-driven autoscaling. It’s an Apache 2.0-licensed project led by Microsoft and Red Hat, and it is also listed as a CNCF sandbox project. To find out more about KEDA, check out this introduction to KEDA.

KEDA provides a collection of built-in scalers that allow cloud-native developers to extend Kubernetes Horizontal Pod Autoscaling. These scalers let the autoscaler act on a variety of vendor-specific metrics without overwriting or duplicating the existing Kubernetes capabilities.

The IBM MQ scaler is a recent addition to the KEDA collection; it supports application-layer scaling based on the queue depth metric.
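In practice, wiring a consumer up to this scaler means creating a KEDA `ScaledObject` that points at the consumer Deployment and sets a target queue depth. The sketch below is illustrative only: the host, queue, and resource names are placeholders, and the exact metadata fields accepted by the `ibmmq` trigger vary between KEDA versions, so check the KEDA scaler documentation for your release.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: mq-consumer-scaler        # placeholder name
spec:
  scaleTargetRef:
    name: mq-consumer             # the consumer Deployment to scale (placeholder)
  minReplicaCount: 0              # KEDA can scale to zero when the queue is empty
  maxReplicaCount: 10
  triggers:
    - type: ibmmq
      metadata:
        # Admin REST endpoint of the queue manager (placeholder host and QM name)
        host: "https://example-mq-host/ibmmq/rest/v2/admin/action/qmgr/QM1/mqsc"
        queueName: "DEV.QUEUE.1"
        queueDepth: "5"           # target messages per replica
      authenticationRef:
        name: mq-trigger-auth     # TriggerAuthentication holding MQ credentials
```

Credentials for the queue manager would typically live in a Kubernetes Secret referenced by the `TriggerAuthentication` resource rather than in the `ScaledObject` itself.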

Getting started with IBM MQ and KEDA

To help you create an application that scales efficiently based on the queue depth, you can use the KEDA scaler for IBM MQ. We have created an easy guide with step-by-step instructions for getting up and running with sample code. The guide will help you to:

  • Deploy IBM MQ locally or on IBM Cloud
  • Set up an MQ Queue Manager and Queue to which you can send messages
  • Deploy KEDA in your chosen environment
  • Deploy a sample consumer application linked to KEDA that will receive MQ Queue messages
  • Deploy a producer application to send messages to your MQ Queue instance
  • Monitor your application to see it scaling when the queue depth fluctuates
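Once everything is deployed, the scaling you observe in that last step follows the Horizontal Pod Autoscaler's external-metric arithmetic: roughly, the desired replica count is the queue depth divided by the target depth per replica, rounded up and clamped to the configured bounds. A rough sketch of that calculation, with hypothetical numbers (it assumes AverageValue-style semantics and ignores HPA stabilization behavior):

```python
import math

def desired_replicas(queue_depth: int, target_depth: int,
                     min_replicas: int = 0, max_replicas: int = 10) -> int:
    """Approximate the replica count derived from a queue-depth metric:
    ceil(depth / target), clamped between the min and max replica counts."""
    if queue_depth <= 0:
        return min_replicas  # KEDA can scale the workload to zero when idle
    raw = math.ceil(queue_depth / target_depth)
    return max(min_replicas, min(raw, max_replicas))

# e.g. 23 queued messages with a target of 5 per replica -> 5 replicas
print(desired_replicas(23, 5))   # 5
print(desired_replicas(0, 5))    # 0 (scaled to zero)
print(desired_replicas(500, 5))  # 10 (clamped to maxReplicaCount)
```

So as the producer floods the queue you should see replicas ramp up toward the maximum, then drain back down (eventually to zero) as the consumers catch up.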

The following video details this process and demos what can be achieved when IBM MQ and KEDA are working in tandem:

Summary and next steps

Now that you’re familiar with the IBM MQ KEDA scaler, you can enhance your own cloud-native messaging applications with these improved scaling capabilities.

You can learn more about developing applications with IBM MQ and get your MQ Developer Essentials badge. Or, perhaps you’d like to learn more about building asynchronous message-driven reactive systems.

You can also join the KEDA community on their dedicated Slack channel or participate in their bi-weekly community meetings. If you want to get hands on with some code and contribute to the project, dig into the KEDA GitHub repository.