Auto-scale message consumers by queue size

Container orchestration tools have become quite popular because they provide many advantages over traditional models for managing IT workloads on both public and private clouds. Intelligent scheduling, self-healing, automatic load balancing, and auto-scaling are just a few of these advantages.

In this tutorial, we will use Red Hat® OpenShift® version 3.11. OpenShift is an enterprise-ready Kubernetes container platform with full-stack automated operations to manage hybrid cloud and multicloud deployments.

On OpenShift, auto-scaling based on memory and CPU usage is available out of the box. In this tutorial, we will instead auto-scale our application based on the size of a queue on RabbitMQ.

We will deploy a RabbitMQ cluster, a producer application that publishes messages to a specific queue on the cluster, a worker application that consumes messages from that queue, and a RabbitMQ auto-scaler that checks the queue size and scales our worker deployment based on parameters we provide.

We will use:

  • Red Hat OpenShift 3.11

  • RabbitMQ

  • The k8s-rabbit-pod-autoscaler

  • A Python producer application and a Python worker application

Prerequisites

  • Access to a Red Hat OpenShift 3.11 cluster, with permissions to create projects and to deploy into the kube-system project

Estimated Time

You can complete this tutorial in about 30 minutes.

Steps

Step 1. Deploy RabbitMQ Cluster into OpenShift

First, let’s create a project in OpenShift and select this project.

Screen shot of OpenShift, create a project dialog

Then, we need to add a RabbitMQ template to the OpenShift catalog in the same project. Select Import YAML/JSON from the Add to Project button at the top of the screen, and paste this RabbitMQ Template yaml file into the YAML text box.

Screen shot of Import YAML/JSON dialog in OpenShift

Then, click Create.

Select Process the Template on the form that is displayed, and then click Continue.

On the Template configuration page, specify the project name and a password (which is all we need for this tutorial). Don’t forget to note your RabbitMQ user and password.

Screen shot of the Template Configuration page in OpenShift

Click Create.

Because you are defining a role binding and a network policy, Red Hat OpenShift needs a second confirmation step here. You can find detailed information in the OpenShift documentation.

After a few minutes, you should see your pods running successfully.

Screen shot of the RabbitMQ app running in OpenShift

Now, let’s create a Route for the rabbitmq-cluster-balancer service. Services on OpenShift provide internal access to pods. You can expose those services to external traffic by creating a “Route” for them.

When you examine the rabbitmq-cluster-balancer service, you can see that it has two ports. Port 5672 will be used by our applications to produce and consume messages; note the IP address and this port. Also note the hostname of the service on this screen, because we will use it in the last step. Port 15672 serves the RabbitMQ Management Portal, and we will reach the Management Portal UI through this port.
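For reference, here is a minimal sketch of what a Service exposing these two ports might look like. The names, labels, and selector are illustrative; the actual object is created by the RabbitMQ template.

```yaml
# Hypothetical sketch of the rabbitmq-cluster-balancer Service; the real
# object comes from the RabbitMQ template, so field values may differ.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-cluster-balancer
spec:
  selector:
    app: rabbitmq-cluster   # assumed pod label
  ports:
    - name: amqp            # used by the producer and worker applications
      port: 5672
    - name: management      # RabbitMQ Management Portal UI
      port: 15672
```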

Screen shot of rabbitmq-cluster-balancer service details in OpenShift

You can also allow external access to the RabbitMQ Management Portal by clicking the Create Route button on the same screen. By default, it creates a route for port 15672. You can leave everything as it is and just click Create.

Screen shot of the Create Route dialog in OpenShift

You should now see the hostname for the “15672” port on the same rabbitmq-cluster-balancer service page.

To access the RabbitMQ Management Portal, go to this hostname and use the same RabbitMQ user and password that you specified when creating the RabbitMQ Cluster to sign in to the portal.

Step 2. Deploy Producer Application into OpenShift

Now, we will deploy our producer application. We will give OpenShift a deployment yaml file that references a custom Docker image that I built. The application in this image creates a queue on the RabbitMQ cluster and produces messages to that queue.

In the same project, again use the Add to Project button and select Import YAML/JSON. Paste the producer.yaml file, and remember to change the RabbitMQ credentials to your own.
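The linked producer.yaml file is the source of truth, but as a rough sketch under assumed image and environment variable names, a producer deployment might look like this:

```yaml
# Hypothetical sketch of producer.yaml; the image reference and environment
# variable names are assumptions -- use the values from the linked file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: producer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: producer
  template:
    metadata:
      labels:
        app: producer
    spec:
      containers:
        - name: producer
          image: <producer-image>   # the custom Docker image from this tutorial
          env:
            - name: RABBIT_HOST     # the rabbitmq-cluster-balancer service
              value: rabbitmq-cluster-balancer.tutorial.svc.cluster.local
            - name: RABBIT_USER     # change to your RabbitMQ credentials
              value: <your-user>
            - name: RABBIT_PASS
              value: <your-password>
```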

Click Create, and verify that your producer pod is successfully running.

Step 3. Producing Messages to RabbitMQ

To produce messages to RabbitMQ, go to Applications and then Pods from the menu on the left in OpenShift. You can find the producer pod there. When you click the producer pod’s name, you see details about the pod. Click the Terminal tab to use the producer pod’s terminal.

Screen shot of the producer pod details with Terminal displayed in OpenShift

When you enter python producer.py in the terminal, a message is produced to the RabbitMQ queue named “test.” You can also see this queue in the RabbitMQ Management Console:

Screen shot of the queue in the RabbitMQ Management Console
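The producer script itself is not reproduced in this tutorial, but here is a minimal sketch of what a script like producer.py might do, using the pika client library. The host, credentials, and message body are illustrative; the queue name “test” comes from this tutorial.

```python
# Minimal sketch of a producer like producer.py, using the pika library.
# Host, credentials, and message body are assumptions.
import pika

credentials = pika.PlainCredentials("<your-user>", "<your-password>")
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="rabbitmq-cluster-balancer", port=5672,
                              credentials=credentials))
channel = connection.channel()

channel.queue_declare(queue="test")    # create the queue if it does not exist
channel.basic_publish(exchange="",     # default exchange routes by queue name
                      routing_key="test",
                      body="Hello World!")
print(" [x] Sent message to queue 'test'")
connection.close()
```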

Step 4. Deploy Worker Application into OpenShift

Similar to Step 2, we will now deploy our worker application from a yaml file. Again, use the Add to Project button, select Import YAML/JSON, and paste the worker.yaml file.

Do not forget to enter your RabbitMQ Cluster’s credentials and click Create.

Step 5. Checking That the Worker Application Consumes Messages From RabbitMQ

This step is important in order to fully understand what is going on. When the worker application starts running, it also starts to listen to the “test” queue on RabbitMQ. If you check the RabbitMQ Management Console, you can see that there is no message on the “test” queue anymore, since the worker application consumed it.

To mimic a time-consuming workload, whenever the worker application consumes a message from the queue, it waits 10 seconds and then acknowledges to RabbitMQ that it has processed the message. The acknowledgment tells RabbitMQ that the message was handled properly, so RabbitMQ no longer needs to keep it in memory, and the worker application can move on to consume the next message in the queue.
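A minimal sketch of this worker behavior, again using the pika library with illustrative host and credentials (the real worker.py may differ):

```python
# Minimal sketch of a worker like worker.py, using the pika library.
# The 10-second sleep and manual acknowledgment mirror the behavior
# described above; host and credentials are assumptions.
import time
import pika

credentials = pika.PlainCredentials("<your-user>", "<your-password>")
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="rabbitmq-cluster-balancer", port=5672,
                              credentials=credentials))
channel = connection.channel()
channel.queue_declare(queue="test")

def callback(ch, method, properties, body):
    print(f" [x] Received {body}")
    time.sleep(10)                                  # mimic a time-consuming workload
    ch.basic_ack(delivery_tag=method.delivery_tag)  # tell RabbitMQ the job is done
    print(" [x] Done")

channel.basic_qos(prefetch_count=1)   # take one unacknowledged message at a time
channel.basic_consume(queue="test", on_message_callback=callback)
print(" [*] Waiting for messages")
channel.start_consuming()
```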

Go to Applications and Pods from the menu on the left in OpenShift, but this time select the worker pod and go to the Logs tab.

Screen shot of the worker pod with the Logs tab selected in OpenShift

You can see in the logs that the worker pod is ready to receive messages. It received the first message that we produced from the producer pod, and after 10 seconds it finished processing the message and wrote “[x] Done” to the log.

Since the worker pod has done its job with the message, it is ready to consume another one. You can repeat Step 3 to test your application and observe messages in the RabbitMQ Management Console.

Step 6. Deploy the RabbitMQ Auto-Scaler

Important: To deploy the RabbitMQ auto-scaler, we need to switch to the “kube-system” project and use it for all of this step.

Now, we have one worker pod, and every message takes 10 seconds to process. If 10 messages are produced, it takes 100 seconds to process the last message. What if we don’t want our users to wait more than 30 seconds for a message to be processed?

The auto-scaler application can scale up the number of workers based on a few parameters that we provide. You can read a detailed explanation of the k8s-rabbit-pod-autoscaler in this GitHub repo.

In the “kube-system” project, use the Add to Project button and select Import YAML/JSON. Paste the clusterRole.yaml file, and click Create. We are defining a ClusterRole, which can be used to grant access to resources within the cluster.
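As a sketch, such a ClusterRole might look like the following; the exact verbs and resources come from the clusterRole.yaml file in the repo and may differ:

```yaml
# Hypothetical sketch of clusterRole.yaml; check the repo for the exact
# verbs and resources the autoscaler actually requires.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: rabbit-pod-autoscaler
rules:
  - apiGroups: ["apps", "extensions"]
    resources: ["deployments", "deployments/scale"]
    verbs: ["get", "list", "update", "patch"]   # enough to read and scale deployments
```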

Do the same for the serviceAccount.yaml file. This time we are defining a ServiceAccount, which containers running in a pod use to communicate with the API server of the Kubernetes cluster.
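A ServiceAccount definition is short; a sketch with an assumed name might look like this:

```yaml
# Sketch of serviceAccount.yaml; the name is assumed to match the
# ClusterRoleBinding and deployment below.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rabbit-pod-autoscaler
  namespace: kube-system
```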

And, again, do the same for the clusterRoleBinding.yaml file. This time we are defining a ClusterRoleBinding, which binds the ClusterRole to the ServiceAccount.
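A sketch of such a binding, using the same assumed names as above:

```yaml
# Sketch of clusterRoleBinding.yaml, binding the ClusterRole above to the
# ServiceAccount so the autoscaler pods inherit its permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rabbit-pod-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rabbit-pod-autoscaler
subjects:
  - kind: ServiceAccount
    name: rabbit-pod-autoscaler
    namespace: kube-system
```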

Now, we will define the RabbitMQ auto-scaler deployment yaml. As you can see below, the deployment references the ServiceAccount. In this way, the rabbit-pod-autoscaler pods can perform the verbs we specified in the ClusterRole on the resources we specified there.

Copy the auto-scaler-deployment.yaml file and paste it using the same Import YAML/JSON button. Remember, we are still in the “kube-system” project.

Before clicking the Create button, you need to update some parameters in this yaml. Check the documentation again to understand the parameters that we will enter.

You can click Create after you specify the necessary parameters.

As an example, here are the parameters that I entered:

  • For “INTERVAL” – I entered 5 seconds. Every 5 seconds, the RabbitMQ auto-scaler checks how many messages are waiting in the queue.

  • For “RABBIT_HOST” – I used the rabbitmq-cluster-balancer service’s hostname in my cluster, which is rabbitmq-cluster-balancer.tutorial.svc.cluster.local.

  • For “RABBIT_USER” and “RABBIT_PASS” – I entered the same credentials that I specified when I created RabbitMQ Cluster.

  • For “AUTOSCALING” – I entered “1|5|3|tutorial|worker|test”. This means that the system scales down to one pod minimum and scales up to five pods maximum. If there are more than 3 messages in the queue, the RabbitMQ auto-scaler scales the “worker” deployment of the “tutorial” project up to 2 pods. If there are more than 6 messages in the queue, it scales up to 3 pods, and so on. Scaling up continues until there are five worker pods; to limit resource usage, it stops at the five-pod maximum we provided. Scaling down follows the same logic: the system scales down based on the number of messages in the queue until only one pod remains. Even if there are no messages in the queue, one pod stays up, ready to process the next message.

Another way of looking at this configuration is that we are tolerating a maximum of roughly 40 seconds of processing time for the users of this system: each worker pod is responsible for at most three queued messages plus the one in flight, at 10 seconds each.
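Putting these example values together, the relevant part of the auto-scaler deployment yaml might look roughly like this. The container image reference and the exact structure are assumptions; use the linked auto-scaler-deployment.yaml as the source of truth.

```yaml
# Sketch of auto-scaler-deployment.yaml with the example parameters above;
# the container image reference is an assumption.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbit-pod-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbit-pod-autoscaler
  template:
    metadata:
      labels:
        app: rabbit-pod-autoscaler
    spec:
      serviceAccountName: rabbit-pod-autoscaler   # the ServiceAccount we created
      containers:
        - name: rabbit-pod-autoscaler
          image: <autoscaler-image>
          env:
            - name: INTERVAL        # check the queue every 5 seconds
              value: "5"
            - name: RABBIT_HOST
              value: rabbitmq-cluster-balancer.tutorial.svc.cluster.local
            - name: RABBIT_USER
              value: <your-user>
            - name: RABBIT_PASS
              value: <your-password>
            - name: AUTOSCALING     # minPods|maxPods|messagesPerPod|project|deployment|queue
              value: "1|5|3|tutorial|worker|test"
```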

Step 7. Testing the RabbitMQ Auto-Scaler

To test our application, in one tab of your browser, repeat Step 3 as many times as you want. One message is produced every time you enter python producer.py in the terminal. Don’t forget that every message takes 10 seconds to process once the worker application consumes it from the queue.

Screen shot of the terminal for the producer app in OpenShift

In another tab, open the logs of the rabbit-pod-autoscaler pod in the “kube-system” project.

Screen shot of the logs of the autoscaler pod in OpenShift

You will see log entries showing the number of messages in the queue and each scaling update.

Screen shot of the autoscaler logs in OpenShift

You can also check the “worker” deployment in your tutorial project to watch the scaling process.

Screen shot of the worker deployment in OpenShift

You can see the number of pods changing on this screen. This means OpenShift is automatically increasing the resources for the worker application. In this way, messages can be processed within a predictable time period and possible delays can be prevented.

Screen shot of the scaled worker pods in OpenShift

Summary

In this tutorial, you learned how to quickly deploy a RabbitMQ Cluster from a template on OpenShift. You also learned how to make simple deployments of different applications on OpenShift. Finally, you learned how to auto-scale applications based on RabbitMQ queue size.

While we used the OpenShift management console to complete all the steps, you can also use the OpenShift client tool (oc) from a terminal window.

If you are interested in learning more about OpenShift, you can check out Kubernetes with OpenShift 101: Exercises to get you started with Running Node-RED on OpenShift. You can also explore the OpenShift documentation.