The amount of data being produced every day is growing exponentially. Whether that data is updates from sensors, clicks on a website, or internal inputs like system changes, applications are expected to handle this never-ending stream of new events. So, can we architect our applications in a way that enables us to put these events at the heart of our systems? And, what benefits would this architecture give us?
In this article we’ll be exploring these questions, covering what is an event-driven architecture (EDA) and how this architecture pattern places events at the heart of systems. Then, we’ll explore several of the most significant advantages of implementing this architecture pattern.
What are events?
First, let’s explain what events are. Events are records of something that has happened, a change in state. They are immutable (they cannot be changed or deleted), and they are ordered in sequence of their creation. Interested parties can be notified of these state changes by subscribing to published events and then act on the information using their chosen business logic.
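To make these two properties concrete, here is a minimal Python sketch (illustrative only, not a real eventing library) of an event record that is immutable and carries a creation-order sequence number:

```python
from dataclasses import dataclass, field
from itertools import count

_sequence = count()  # monotonically increasing creation order

@dataclass(frozen=True)        # frozen=True: the record cannot be changed
class Event:
    type: str
    payload: dict
    seq: int = field(default_factory=lambda: next(_sequence))

ordered = Event("taxi ordered", {"customer": "Ada"})
allocated = Event("taxi allocated", {"taxi": 7})
assert ordered.seq < allocated.seq   # ordered in sequence of their creation
```

Any attempt to modify a field of an existing event raises an error, which is exactly the "record of something that has happened" semantic: you can append new events, but never rewrite history.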
What is event-driven architecture?
Event-driven architecture refers to a system of loosely coupled microservices that exchange information with each other through the production and consumption of events. An event-driven system enables messages to be ingested into the event-driven ecosystem and then broadcast out to whichever services are interested in receiving them.
To better explain event-driven architecture, let’s take a look at an example. The following diagram shows a simplified taxi ordering scenario. In this diagram, we have 3 of the microservices that could be involved in such a scenario: a UI service where a customer can order a taxi, a fleet service that assigns taxis to orders, and a taxi car service that collects data about the individual taxis such as their current location. The cylinder in the center of the diagram, which links the different microservices, represents the event-driven messaging backbone of our system (which you could implement with something like Apache Kafka). The arrows within the diagram represent the flow of events (and thus state changes) within the system. This flow can be explained as follows:
- The customer places an order for a taxi via the Customer Order UI. This UI captures information such as the customer’s current location, name, and so on.
- The Taxi Fleet Service subscribes to these customer order events.
- The Taxi Car Service collects data from the individual taxis, such as each taxi’s current location, and sends the “send current location” events.
- The Taxi Fleet Service, which is subscribed to the “send current location” events, allocates the nearest taxi to the customer and sends an “allocate nearest taxi” event.
- The Taxi Car Service subscribes to the “allocate nearest taxi” events and alerts the driver that it needs to pick up a customer.
- The Taxi Fleet Service can constantly monitor the location of the taxi and update the UI with ETA notifications for the customer.
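The flow above can be sketched with a minimal in-memory publish-subscribe broker in Python. This is a stand-in for the real messaging backbone (the service names and topic names are taken from the example; everything else is illustrative):

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory stand-in for the messaging backbone (e.g. Kafka)."""
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)
    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

broker = Broker()
driver_alerts = []

# Taxi Fleet Service: reacts to a customer order by allocating the nearest taxi
broker.subscribe("customer-order", lambda e: broker.publish(
    "allocate-nearest-taxi", {"order": e, "taxi": "taxi-42"}))
# Taxi Car Service: alerts the allocated driver
broker.subscribe("allocate-nearest-taxi",
                 lambda e: driver_alerts.append(f"alerting {e['taxi']}"))

broker.publish("customer-order", {"customer": "Ada", "location": (51.5, -0.1)})
```

Notice that the Customer Order UI only publishes to the “customer-order” topic; it has no knowledge of which services, if any, will react downstream.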
An event-driven architecture leverages a messaging backbone to deliver messages from producers to consumers. This messaging backbone can either be based on a traditional publish-subscribe message broker (such as IBM MQ) or a distributed log (such as Apache Kafka). A publish-subscribe message broker allows multiple consumers to subscribe to groups of messages. Messages are often deleted once all subscribers have received them. In contrast, a log is an unbounded set of ordered events. Consumers keep track of where they are in the stream using offsets. In an event stream, events are replayable as the data can, theoretically, be kept indefinitely. This means a new consumer can subscribe to events and read the log from the beginning if it chooses.
The eventing system that you choose will depend on the nature of the specific use case. Things like persistence, the size and frequency of events, or the nature of producers (such as IoT sensors) will all be driving factors.
Advantages of using an event-driven architecture
Many modern applications are rapidly adopting event-driven architectures. Why is this? What benefits do they offer?
Event-driven architecture is an architectural approach. Applications written in any language on any platform can use this architecture pattern. Here, we will explore some of the advantages of adopting an event-driven architecture.
True decoupling of producers and consumers
Systems that use an event-driven architecture decouple their components, which separates the ownership of data by domain. This decoupling enables a logical separation between the production and consumption of events.
- Producers do not need to concern themselves with how the events they produce are going to be consumed (so additional consumers can be added without affecting the producers).
- Consumers do not need to concern themselves with how the events they consume were produced.
Because of this loose coupling, microservices can be implemented in different languages or use different technologies appropriate for their specific jobs. Therefore, the encoding of the event data does not matter: it can be JSON, XML, Avro, and so on.
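This encoding independence is easy to see in a couple of lines of Python. The producer serializes the event to bytes (here JSON, though Avro or XML would work the same way), and any consumer, in any language, only needs to understand the agreed encoding:

```python
import json

# Events cross the messaging backbone as encoded bytes.
event = {"type": "taxi ordered", "customer": "Ada"}
wire_bytes = json.dumps(event).encode("utf-8")    # producer side
decoded = json.loads(wire_bytes.decode("utf-8"))  # consumer side, any runtime
assert decoded == event
```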
Decoupling the components of an application also enables them to be scaled easily and independently of each other across the network. Developers can revise their system by adding or removing event producers and consumers dynamically without needing to change any logic in any of the microservices.
None of the producing services need to know about the services that consume the events they produce. Similarly, when any of the services consume messages, they only need to subscribe to the event stream.
In the example diagram above, we could easily add on a Taxi Finance Service that subscribes to the Taxi Car Service events and collects the data, which could include fuel consumption. With this added microservice, the Taxi Finance Service could then suggest the best fare or send new events about driver efficiency, both of which could be used by the Taxi Fleet Service as factors when allocating drivers to orders.
The loose coupling of components that an event-driven architecture delivers also means that services do not need to worry about the status or health of another service. This loose coupling offers a level of resiliency within the system, so if one microservice is brought down, the application is still able to continue running in its absence. This is achieved by events being stored in the messaging backbone so that the consuming service can pick them up when it recovers.
While resiliency is not specific to event-driven architectures, the nature of how events arrive offers an additional advantage. Eventing is asynchronous, which means events are published as they happen. Services consume the events as an unbounded stream, and they keep track of where they got to. So, if a service fails, it can pick up from where it got to and, if necessary, replay events that it may have failed to process. The producing service is not affected; it can keep producing events. This is in contrast with REST architectures, which are synchronous: peer services must be up, and retry logic must be implemented to cope with network failures.
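A short sketch of that recovery behavior, using a durable committed offset (the names and the commit-per-event strategy are illustrative; real consumers often commit in batches):

```python
events = ["e1", "e2", "e3", "e4"]
committed = 0  # durable offset: survives a crash of the consumer process

def consume(batch_limit):
    """Process events from the last committed offset onward."""
    global committed
    processed = []
    for event in events[committed:committed + batch_limit]:
        processed.append(event)
        committed += 1  # commit after each successfully processed event
    return processed

assert consume(2) == ["e1", "e2"]  # consumer handles two events, then "crashes"
# After a restart it resumes from the committed offset, not the beginning:
assert consume(10) == ["e3", "e4"]
```

Throughout the outage, the producer keeps appending to `events` unaffected; the consumer simply catches up when it returns.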
For example, an event-driven architecture can be useful where you have edge devices that are prone to going offline; once the devices come back up, their events can still be processed. Take smart shipping containers: containers that collect and analyze telemetry data about the health of the container and send summary data back to a central hub at regular intervals. Networks can often be unreliable on ships, so if some smart shipping containers onboard a ship went offline, an onshore consumer can still receive their messages once they come back up. Similarly, any updates that needed to be sent from onshore to the edge would still be received once the edge services were back online.
In a pull-based messaging system, there is a request/response mechanism: the client polls for messages at intervals. Event-driven systems, by contrast, allow for easy push-based messaging through the presence of an intermediary broker.
In event-driven architectures, clients can receive updates without needing to poll. Updates can be received as they happen, which can be powerful for on-the-fly data transformation, analysis, and data science processes.
Consider a web service that is interacting with clients: the client wants an immediate result. Rather than having to poll continuously, the event gets pushed to the client as soon as it arrives. Because a service no longer needs to poll, depending on the type of workload, there can also be a reduction in network I/O.
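The push model can be sketched in Python with a handler that the "broker" invokes as events arrive, so the client registers a callback once instead of running a polling loop (a toy single-threaded dispatcher stands in for the broker here):

```python
import queue
import threading

inbox = queue.Queue()
results = []

def on_event(event):
    """Client-side handler: invoked by the broker, no polling loop needed."""
    results.append(event)

def dispatcher():
    # Stand-in for the broker's push delivery; None is a shutdown signal.
    while True:
        event = inbox.get()
        if event is None:
            break
        on_event(event)

worker = threading.Thread(target=dispatcher)
worker.start()
inbox.put("ETA update: 4 minutes")  # producer publishes; client is notified
inbox.put(None)
worker.join()
```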
History of business narrative
We’re probably all familiar with the term “single source of truth” (the practice of structuring information models and associated data so that every data element is edited in only one place). By using an event-driven ecosystem, you can achieve this “single source of truth.”
As mentioned above, an event stream should be an immutable stream of facts, where each fact is represented by an event within the stream. Each time there is a change in an entity’s state, a new event is emitted. This is very representative of how our daily lives unfold, as a series of events. For business data governance, this “business narrative” is an advantage as it enables a log of all events that have occurred in the system to be kept for auditing or as a reference.
It is becoming more and more common for companies to need to explain their “data-derived” decisions, such as why a customer’s application for financing or insurance has been rejected. The log of immutable events that an event-driven architecture can provide, by implementing patterns like event sourcing, is a key component for this auditing. As mentioned earlier, the event log can be replayed, and this feature can be used to account for decisions or rectify a bug in a service that led to corrupted data.
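A tiny event-sourcing sketch shows why the log is so useful for auditing: current state is derived by replaying the immutable events, so the same log that produces the state also explains every step that led to it (the event names below are hypothetical):

```python
# Event sourcing: state is a fold over the immutable event log.
events = [
    {"type": "application submitted", "customer": "Ada"},
    {"type": "credit checked", "score": 580},
    {"type": "application rejected", "reason": "score below 600"},
]

def current_state(log):
    state = {}
    for e in log:  # replaying the full log also explains *why* we got here
        state.update({k: v for k, v in e.items() if k != "type"})
        state["status"] = e["type"]
    return state

state = current_state(events)
assert state["status"] == "application rejected"
assert state["reason"] == "score below 600"  # the decision is accounted for
```

If a bug in a service produced corrupted state, fixing the service and replaying the log regenerates the correct state, exactly the rectification scenario described above.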
Real-time event streams for data science
Event-driven architectures are particularly well suited to event streams, and the in-stream processing they enable supports fast decision making, where milliseconds count. Event stream processing enables applications to respond to changing business situations as they happen and make decisions based on all available current and historical data in real time. Business logic within the application can now be applied to data in motion rather than needing to wait for the data to land somewhere before doing the analysis. This real-time analysis is useful for problems like fraud detection, predictive analytics, tackling security threats on the fly, automating supply chains, and so on.
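As a deliberately simplified illustration of applying logic to data in motion, here is a fraud-style check that flags a transaction when it is far above the recent moving average, event by event, with no batch landing step (the threshold and window size are arbitrary choices for the example):

```python
from collections import deque

window = deque(maxlen=5)   # last few transaction amounts seen on the stream
alerts = []

def process(amount):
    """Apply business logic to each event as it arrives."""
    if window and amount > 3 * (sum(window) / len(window)):
        alerts.append(amount)   # suspiciously large vs. recent history
    window.append(amount)

for amount in [10, 12, 11, 9, 250, 10]:
    process(amount)

assert alerts == [250]
```

A batch-oriented system would only discover the anomaly after the data landed and the next analysis job ran; here the decision is made while the event is still in flight.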
Accelerated path for machine learning and data science into production environments
Lastly, event-driven architectures provide an effective approach for accelerating the path of machine learning models from development to production. Deploying machine learning models into production is currently one of the biggest challenges in this area.
Machine learning operations patterns that use an eventing backbone, such as the Rendezvous Architecture, allow for multiple models to be tested against data simultaneously and allow for the most appropriate model to be served at the right time. Models can consume business events and then broadcast results in real-time to another service that can choose which model to serve based on some set business criteria around speed, predicted accuracy, and so on.
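A much-simplified stand-in for that pattern: several models score the same business event, and a selector serves one result based on a business criterion (here, highest self-reported confidence; the models and criterion are invented for illustration, and a real Rendezvous Architecture adds timeouts, decoys, and monitoring):

```python
# Each "model" consumes the same event and emits a scored result.
models = {
    "model_a": lambda e: {"prediction": "fraud", "confidence": 0.72},
    "model_b": lambda e: {"prediction": "ok",    "confidence": 0.91},
}

def score_event(event):
    """Fan the event out to all models, then pick one result to serve."""
    results = {name: model(event) for name, model in models.items()}
    chosen = max(results, key=lambda name: results[name]["confidence"])
    return chosen, results[chosen]

name, result = score_event({"txn": 1234, "amount": 9000})
assert name == "model_b"
```

Because every model sees every event, a candidate model can be evaluated against live traffic without being the one whose answers are served.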
Because models can be constantly tested and improved, this architecture allows for faster, iterative development that can quickly be deployed in production. Further, due to the immutability of events, the machine learning decision-making process is auditable.
Summary and next steps
This article has given an overview of some of the key reasons to adopt an event-driven approach to software development. Using event-driven architectures, it is possible to build a resilient microservice-based architecture that is truly decoupled, giving increased agility and flexibility to the development lifecycle. Having loose coupling between microservices is one of the key benefits of using this architecture type (especially for cloud-native applications), so it’s not surprising that the event-driven architecture is widely considered a best practice for microservices implementations.
For advancing analytical capabilities, event-driven architectures offer the opportunity for time-critical decision making through event stream processing and accelerate the pace of machine learning operations into production. We have also touched on how this architecture offers a business a robust source of truth of business events that is immutable and auditable.
How to get started with an event-driven architecture
Transforming an application to an event-driven architecture style clearly brings with it many advantages, solving some of the key problems that organizations are facing at this time, such as auditability, cost, and organizational flexibility. However, knowing where to start can often be the biggest challenge.
To see how you can build an event driven application, check out our reference architecture, resources, code patterns and field guides on the IBM Cloud Architecture Center.
Also, check out the new Accelerator for Event-driven Solutions, which includes a Reference Blueprint that lets you quickly move from design to deployment of an event-driven application using sample code. Read more about this IBM Cloud Pak for Integration Accelerator in our “Design and deliver an event-driven, cloud-native application at lightning speed” tutorial.
Event-driven architecture and IBM
IBM offers products like IBM Event Streams which can be used to architect event-driven applications and systems. IBM Event Streams is an event-streaming platform, built on open-source Apache Kafka, that is designed to simplify the automation of mission-critical workloads. Using IBM Event Streams, organizations can quickly deploy enterprise-grade event-streaming technology.
Try Event Streams for free on IBM Cloud as a managed service, or deploy your own instance of Event Streams in IBM Cloud Pak for Integration on Red Hat OpenShift Container Platform. The latter adds valuable capabilities to Apache Kafka, including powerful ops tooling, a schema registry, an award-winning user experience, and an extensive connector catalog to enable connection to a wide range of core enterprise systems.
If you’re ready to dive in, try out this two-part tutorial to apply an event-driven architecture by building an event-driven Kafka-based Java application that uses the Reactive Messaging APIs.
Reactive systems and IBM
The added resiliency and scalability that event-driven architectures introduce also helps to reinforce qualities of reactive systems, and so this pattern is often used when implementing reactive applications. If you’re interested in learning more about what reactive systems are, you can download the free e-book “Reactive Systems Explained” or you can check out the “Getting started with Reactive Systems” article.
You can also explore how to transform your own applications to be more reactive by trying out our guide “Creating reactive Java microservices” on openliberty.io (our lightweight open source web application server). This guide introduces you to the MicroProfile Reactive Messaging specification. Alternatively, you can check out the “Reactive in Practice” tutorial series which documents the transformation of the Stock Trader application into a reactive system step by step.
If you’re keen to start building a reactive system but are also keen to use supported runtimes for your enterprise applications, we offer several options, including reactive APIs from Open Liberty and MicroProfile as well as Vert.x and Quarkus through our Red Hat runtimes. These are all offered as part of IBM Cloud Pak for Applications.