
Scaling an event-driven architecture using an event-driven autoscaler


This code pattern demonstrates one use case for Kafka in an event-driven microservices architecture: it shows you how to autoscale your microservices in Red Hat OpenShift using KEDA (Kubernetes-based Event-Driven Autoscaler). The microservices are scaled based on the consumer lag of a Kafka consumer group.


In this code pattern, you will deploy an example food delivery application that uses Kafka on Red Hat OpenShift. The application uses a Kafka topic to produce and consume records of orders, and it has multiple microservices that consume and process these records (messages). The architecture below explains the roles of these microservices. To scale these microservices based on incoming messages, instead of using the default Horizontal Pod Autoscaler (HPA) with its CPU and memory thresholds, you will use KEDA, an open source project.

With the KEDA operator, you can scale your OpenShift resources based on events. In the case of Kafka, you can scale them based on consumer lag. Consumer lag is the difference between the offset of the most recently produced message and the offset of the last message the consumer group has consumed. If the consumer lag starts to grow, it usually means that the consumers are not able to keep up with the incoming messages in a Kafka topic. With KEDA, you can autoscale the number of consumers so that the consumer group can process more messages in parallel and keep up with the pace of incoming messages. KEDA also supports many event sources besides Kafka, such as IBM MQ.
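To make the lag metric concrete, here is a minimal Python sketch of how lag is computed per partition. The function and variable names are illustrative, not from the code pattern's source: the "end offset" is the offset of the next message to be produced, and the committed offset is the consumer group's last recorded position.

```python
def consumer_lag(end_offsets, committed_offsets):
    """Return the total lag across partitions for one consumer group.

    end_offsets: {partition: offset of the next message to be produced}
    committed_offsets: {partition: last offset committed by the group}
    """
    return sum(
        max(end_offsets[p] - committed_offsets.get(p, 0), 0)
        for p in end_offsets
    )

# Partition 0: 120 messages produced, 100 consumed -> lag 20.
# Partition 1: 80 produced, 80 consumed -> lag 0.
end = {0: 120, 1: 80}
committed = {0: 100, 1: 80}
print(consumer_lag(end, committed))  # 20
```

When this total grows over time, the group is falling behind, which is exactly the signal KEDA uses to add consumer pods.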

This code pattern also works with the Confluent Platform for IBM Cloud Pak for Integration.
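In KEDA, this kind of lag-based scaling is declared with a `ScaledObject` custom resource that uses the built-in `kafka` trigger. The sketch below shows the general shape; the deployment name, bootstrap server address, consumer group, topic, and threshold are placeholder values, not the ones used by this code pattern.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-consumer-scaler
spec:
  scaleTargetRef:
    name: order-consumer            # Deployment to scale (placeholder name)
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: my-kafka:9092        # placeholder address
        consumerGroup: order-consumer-group    # placeholder group
        topic: orders                          # placeholder topic
        lagThreshold: "5"   # scale out when lag per replica exceeds this
```

With a configuration like this, KEDA adds replicas as the group's lag on the topic grows and scales back down when consumers catch up.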


Scaling Kafka architecture

  1. The user starts the simulator from the frontend. The simulator sends requests to the API service to create orders. The API service uses an asynchronous REST pattern: each request is assigned a correlation ID.
  2. The API service produces the message to the Kafka topic.
  3. The order consumer picks up the message and processes it. This service is responsible for validating the transaction. This microservice can also produce messages to the Kafka topic.
  4. The status consumer consumes the message when it sees that the transaction is created and validated by the order consumer.
  5. The status consumer updates the Redis database with the result of the transaction, keyed by the correlation ID. The status microservice is responsible for mapping each REST request's correlation ID to its response, and for serving those responses to the API service.
  6. The frontend then polls for a response from the API service using the correlation ID. The API service fetches it from the status microservice.
  7. The restaurant microservice is subscribed to the Kafka topic so restaurants know when to start preparing the order. It also produces a message so that the courier consumer knows when to pick it up.
  8. The courier microservice is subscribed so it gets notified when the order is ready for pick up. It also produces a message when the order is complete so that the order consumer can update the transaction in its database.
  9. The real-time data microservice is subscribed to the Kafka topic so it can serve the events shown in the simulator's graph. The pod data microservice provides the number of pods of each consumer microservice to the frontend's architecture image.
  10. KEDA scales the number of pods of the consumer microservices when the consumer lag reaches the threshold set in the deployment.
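The asynchronous request flow in steps 1-6 can be sketched in a few lines of Python. This is an in-memory illustration only: a plain dict stands in for Redis, the Kafka round trip is replaced by a direct function call, and all names are hypothetical rather than taken from the code pattern.

```python
import uuid

# Stands in for the Redis database: correlation ID -> transaction result.
status_store = {}

def create_order(order):
    """API service: accept a request and return a correlation ID immediately."""
    correlation_id = str(uuid.uuid4())
    # In the real application, the order is produced to a Kafka topic here
    # and the consumers below run asynchronously in separate services.
    process_order(correlation_id, order)
    return correlation_id

def process_order(correlation_id, order):
    """Order/status consumers: validate the transaction and record the result."""
    status_store[correlation_id] = {"order": order, "status": "validated"}

def poll_status(correlation_id):
    """Frontend polling via the API service: fetch the result once it is ready."""
    return status_store.get(correlation_id, {"status": "pending"})

cid = create_order({"item": "pizza"})
print(poll_status(cid)["status"])  # validated
```

The key design point is that `create_order` never blocks on the downstream consumers; the frontend polls with the correlation ID until the status consumer has written a result.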


Find the detailed steps for this pattern in the readme file. The steps will show you how to:

  1. Clone the repo.
  2. Create and configure the Kafka service.
  3. Deploy the microservices.
  4. Install KEDA.
  5. Deploy KEDA ScaledObjects.
  6. Run the application.