Respond to messages and handle streams
Create autoscaling actions that process message streams
Serverless platforms like IBM Cloud Functions, powered by Apache OpenWhisk, provide a runtime that scales up and down automatically in response to demand. The result is lower overall cloud resource consumption and a closer match between the business value gained and the cost of the resources used. One of the key use cases for Cloud Functions is executing logic efficiently in response to events, such as messages or streams of data.
This project shows how serverless, event-driven architectures execute code in response to messages or to handle streams of data records.
The application demonstrates two IBM Cloud Functions actions (based on Apache OpenWhisk) that read and write messages with IBM Message Hub (based on Apache Kafka). The use case shows how actions work with data services and execute logic in response to message events.
One function, or action, is triggered by messages that contain one or more data records. Its output is piped to a second action in a sequence (a way to link actions declaratively in a chain). The second action aggregates the records and posts a transformed summary message to another topic.
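The two actions in the sequence can be sketched in Python as follows. The function names, the shape of the trigger payload, and the `amount` field in the records are illustrative assumptions, not the project's actual code; OpenWhisk pipes the dictionary returned by one action into the next.

```python
import json

def receive_consume_messages(params):
    """First action: parse the batch of messages delivered by the trigger.

    Assumes (as the Kafka feed commonly does) that the trigger payload
    carries a 'messages' list whose 'value' fields are JSON-encoded
    arrays of data records.
    """
    records = []
    for message in params.get("messages", []):
        records.extend(json.loads(message["value"]))
    return {"records": records}

def transform_produce_message(params):
    """Second action: aggregate (reduce) the records to one summary message."""
    records = params.get("records", [])
    total = sum(r.get("amount", 0) for r in records)
    # In the real pattern this summary would be posted to another topic;
    # here it is simply returned as the action result.
    return {"summary": {"count": len(records), "total": total}}

def sequence(params):
    """Local stand-in for an OpenWhisk sequence: pipe one result into the next."""
    return transform_produce_message(receive_consume_messages(params))
```

Running `sequence` on a simulated trigger payload shows the reduction: two records with amounts 2 and 3 collapse into a single summary message with `count` 2 and `total` 5.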
- The developer simulates a client publishing application and puts a new array of JSON objects onto an Apache Kafka topic.
- The new message fires a trigger, which listens for messages sent to that topic.
- A rule maps the trigger to the first action, which downloads and parses the message array.
- The message array is then piped to the second action in the sequence, which aggregates, or reduces, the data to a single message.
- The second action posts the new message to another Message Hub topic for another application to process.
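The steps above could be wired together with the OpenWhisk `wsk` CLI roughly as shown below. This is a deployment sketch, not the project's actual setup: the action, trigger, rule, and topic names are placeholders, and `myMessageHub` stands for a Message Hub package binding (created separately) that holds the Kafka credentials and exposes the `messageHubFeed` feed.

```shell
# Create the two actions and link them declaratively in a sequence
wsk action create receive-consume receive-consume.py
wsk action create transform-produce transform-produce.py
wsk action create message-processing-sequence \
  --sequence receive-consume,transform-produce

# Create a trigger fed by messages arriving on the input topic
wsk trigger create message-received-trigger \
  --feed myMessageHub/messageHubFeed \
  --param topic in-topic

# Map the trigger to the sequence with a rule
wsk rule create message-rule message-received-trigger message-processing-sequence
```

With the rule enabled, every batch of messages on `in-topic` fires the trigger, which runs the sequence end to end.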
Find the detailed steps for this pattern in the README.