Build event-driven microservices

In this next step of our application modernization journey, we will use event-driven architecture patterns to build loosely coupled microservices.

Distributed systems have several advantages, such as resiliency and horizontal scalability. At the same time, compared to classic monolithic systems, they introduce new challenges related to the networking overhead between services. When breaking down a monolith into microservices, the goal is to minimize the dependencies between the services.

While in the optimal case strangled services are independent of the monolith, in reality there are often still some dependencies. Even when using more coarse-grained micro- or macroservices, the main challenge is how to handle the dependencies between them. Synchronous invocations can easily lead to distributed monoliths again. As always, there is no single solution that fits every case.

One possible solution you should consider when refactoring monoliths to microservices is event-driven architectures rather than synchronous REST invocations. Basically, there are fewer dependencies and less (or no) blocking, which leads to more loosely coupled services. However, as long as multiple services need to work together in bigger systems, some dependencies remain unavoidable.

Here are some of the architectural patterns you should be familiar with when developing event-driven architectures.

  • Event notifications: Events are sent to notify other services which can call back to find out details if necessary
  • Event notifications with state transfer: Events are sent to other services including the new states of objects
  • Event sourcing: All events are made persistent, and the overall state of a system is an aggregation of these events (which also allows time travel)
  • Command Query Responsibility Segregation (CQRS): Read operations are optimized in separate components. More expensive write operations are handled separately and need to be synchronized.
  • Command pattern: Commands are sent asynchronously and responses are sent back as events as well
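To make the event sourcing pattern concrete, here is a minimal, self-contained Java sketch (all names are illustrative and not part of the sample application): the current price of a product is never stored directly but is derived by replaying the recorded price-change events in order.

```java
import java.util.List;

// A persisted event: a product's price changed to a new value.
record PriceChangedEvent(String productId, double newPrice) {}

public class EventSourcingSketch {
    // Derive the current price by replaying the event log; the last event wins.
    static double currentPrice(List<PriceChangedEvent> events, String productId) {
        double price = 0.0;
        for (PriceChangedEvent e : events) {
            if (e.productId().equals(productId)) {
                price = e.newPrice();
            }
        }
        return price;
    }

    public static void main(String[] args) {
        var log = List.of(
            new PriceChangedEvent("42", 10.0),
            new PriceChangedEvent("42", 8.5),
            new PriceChangedEvent("7", 99.0));
        // Replaying the log yields the latest price for product 42.
        System.out.println(currentPrice(log, "42"));
    }
}
```

Because the full event log is retained, replaying it up to any earlier point reconstructs the state at that time, which is what enables the "time travel" mentioned above.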


Make sure that you’ve reviewed the sample application that we will use for the tutorials in this learning path.

Next, if you haven’t already done so, clone the repo to get the complete source code for the sample application:

$ git clone <repository-url> && cd application-modernization-javaee-quarkus
$ ROOT_FOLDER=$(pwd)


  1. Build the event producer
  2. Build the event consumer

Step 1. Build the event producer

In a previous tutorial in this learning path, I explained how to use the Strangler pattern to refactor the catalog functionality as a microservice. The advantage of splitting the catalog functionality from the monolith is that this part of the application can be scaled independently from the rest of the application. Economically, this means you need fewer resources, such as CPU and memory, which lowers costs.

The catalog functionality is largely separate from the functionality of the remaining monolith, such as account information and orders. However, there are still some dependencies. Let's say users added products from the catalog to their shopping carts. When the prices (or titles, ratings, descriptions, and so on) of these products change, there should be some indication in the shopping cart user interface.

To display the new price, the order service (the remaining monolith) can invoke a synchronous REST API of the catalog service. However, to minimize the coupling between these components, I’m using events instead.

Here is the code of the strangled catalog service that has been implemented with Quarkus. This microservice uses MicroProfile and Apache Kafka to send events asynchronously.

    public Product update(@PathParam("id") Long id, Product updatedProduct) {
        System.out.println("/CustomerOrderServicesWeb/jaxrs/Product @PUT updateProduct invoked in Quarkus catalog service");
        Product existingProduct = entityManager.find(Product.class, id);
        if (existingProduct == null) {
            throw new WebApplicationException(Response.Status.BAD_REQUEST);
        }
        existingProduct.price = updatedProduct.price;

        // notify other services about the price change asynchronously
        sendMessageToKafka(id, existingProduct.price);
        return existingProduct;
    }

    @ConfigProperty(name = "kafka.bootstrap.servers")
    String kafkaBootstrapServer;

    @Inject
    Vertx vertx;

    private KafkaProducer<String, String> producer;

    // create the Kafka producer once the bean has been constructed
    @PostConstruct
    void initKafkaClient() {
        Map<String, String> config = new HashMap<>();
        config.put("bootstrap.servers", kafkaBootstrapServer);
        config.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        config.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        System.out.println("bootstrapping Kafka with config: " + config);
        producer = KafkaProducer.create(vertx, config);
    }

    public void sendMessageToKafka(Long productId, BigDecimal price) {
        String productIdString = productId.toString();
        String priceString = price.toString();
        try {
            // messages use the format "<productId>#<newPrice>"
            KafkaProducerRecord<String, String> record = KafkaProducerRecord.create("product-price-updated", productIdString + "#" + priceString);
            producer.write(record, done -> System.out.println("Kafka message sent: product-price-updated - " + productIdString + "#" + priceString));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
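The bootstrap server address is not hard coded; it is read from the Quarkus configuration via `@ConfigProperty`. A minimal sketch of the corresponding entry in `application.properties` (the host name and port are placeholders for your own Kafka cluster):

```properties
# address of the Kafka cluster, read by @ConfigProperty(name = "kafka.bootstrap.servers")
# my-kafka-bootstrap:9092 is a placeholder value
kafka.bootstrap.servers=my-kafka-bootstrap:9092
```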

Step 2. Build the event consumer

The remaining monolith runs on Open Liberty. To avoid synchronous invocations of the catalog service, the monolith caches the changed prices in a new column in its own Db2 database.

Here is the code of the remaining monolith, which also uses MicroProfile.

import org.eclipse.microprofile.reactive.messaging.Incoming;
import javax.enterprise.context.ApplicationScoped;
import org.pwte.example.service.CustomerOrderServicesImpl;
import javax.inject.Inject;

@ApplicationScoped
public class ProductPriceChanged {

   @Inject
   CustomerOrderServicesImpl customerOrderServices;

   // the channel name maps to the Kafka topic via MicroProfile configuration
   @Incoming("product-price-updated")
   public String process(String message) {
      String productId = "";
      String newPrice = "0";
      try {
         // messages use the format "<productId>#<newPrice>"
         productId = message.substring(0, message.indexOf("#"));
         newPrice = message.substring(message.indexOf("#") + 1);
         customerOrderServices.updateLineItem(productId, newPrice);
      } catch (Exception e) {
         e.printStackTrace();
      }
      return message;
   }
}
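On the consumer side, MicroProfile Reactive Messaging maps an incoming channel to a Kafka topic through configuration rather than code. A sketch of what this could look like with Open Liberty's `liberty-kafka` connector, assuming the channel is named after the `product-price-updated` topic (the server address is a placeholder):

```properties
# map the incoming channel to the Kafka topic via the Liberty Kafka connector
mp.messaging.incoming.product-price-updated.connector=liberty-kafka
mp.messaging.incoming.product-price-updated.topic=product-price-updated
mp.messaging.incoming.product-price-updated.bootstrap.servers=my-kafka-bootstrap:9092
mp.messaging.incoming.product-price-updated.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
mp.messaging.incoming.product-price-updated.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
```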

Summary and next steps

In this tutorial, we took the next step in modernizing our sample application by creating loosely coupled event-driven microservices. We did this by using annotations from MicroProfile Reactive Messaging to consume events and by using the Kafka API to send events programmatically. However, today you could use MicroProfile Reactive Messaging alone to send events as well, because of improvements to the outgoing annotation.
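As a rough sketch of how sending events with MicroProfile Reactive Messaging alone could look (the class and method names here are illustrative, not from the sample application), an injected `Emitter` publishes to a named channel from imperative code, replacing the hand-rolled Kafka producer:

```java
import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class ProductPriceEmitter {

    // Emitter lets imperative code publish to a Reactive Messaging channel;
    // the channel is mapped to a Kafka topic in configuration.
    @Inject
    @Channel("product-price-updated")
    Emitter<String> priceEmitter;

    public void publishPriceChange(Long productId, String newPrice) {
        priceEmitter.send(productId + "#" + newPrice);
    }
}
```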

To learn more about MicroProfile Messaging and Reactive Messaging, review this content: