Deploy serverless functions that respond to messages and handle streams

Summary

In many of today’s cloud-native applications, data is generated in huge volumes and used to link highly distributed services. Apache Kafka provides a way to stream messages at scale, but the systems that receive those messages must also be able to process and act on individual records.

With an event-driven architecture built on OpenWhisk, you can write functions that respond to messages from queues and execute logic to process or send data to other systems in a distributed architecture. And you pay only for the resources consumed by your analytics functions for the fractions of a second that they run. This approach gives you a tight match between transactions processed and cloud resources used.

This is the promise of an event-driven, serverless architecture for new cloud-native applications, such as those that support high-volume, stream-based message processing. You can start processing messages at scale without worrying about whether you’ll have enough servers to handle the volume.

Description

This code pattern is an example of how to deploy a reference architecture that uses Cloud Functions to execute code in response to messages or to handle streams of data records. No code runs until messages arrive through the Event Streams service (powered by Apache Kafka). When they do, function instances are started and automatically scaled to match the load needed to handle the stream of messages.

You deploy this reference architecture through the Cloud Functions user interface or by using command line tools on your own system.

If you haven’t already, sign up for an IBM Cloud account and go to the Cloud Functions dashboard to explore other reference architecture templates and download command line tools, if needed.
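
If you take the command line route, the setup amounts to installing the Cloud Functions plugin for the IBM Cloud CLI. A minimal sketch, assuming the IBM Cloud CLI itself is already installed:

    ibmcloud plugin install cloud-functions   # adds the "ibmcloud fn" commands
    ibmcloud login                            # authenticate, then target a region and namespace
    ibmcloud fn list                          # sanity check: lists your Cloud Functions entities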

Here’s how the code pattern works:

The application deploys two IBM Cloud Functions (based on Apache OpenWhisk) that read messages from and write messages to IBM Event Streams (based on Apache Kafka). This demonstrates how to work with data services and execute logic in response to message events.

One function, or action, is triggered by streams of one or more data records. Those records are piped to a second action through a sequence (a way to link actions declaratively in a chain). The second action aggregates the messages and posts a transformed summary message to another topic.
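
A minimal sketch of what these two actions could look like, written here as Python actions. All names are illustrative, the exact shape of the trigger payload depends on how the feed is configured, and the repository contains the pattern’s actual implementation:

    import json

    # Each OpenWhisk action lives in its own file with a main(params)
    # entry point; the two are shown together here for readability.

    # Action 1: parse. Fired (via the trigger and rule) when records
    # arrive on the inbound topic. The feed is assumed to pass the batch
    # of records in params["messages"], each with its payload in "value".
    def main(params):
        events = []
        for record in params.get("messages", []):
            value = record.get("value")
            if isinstance(value, str):   # payload may arrive as a JSON string
                value = json.loads(value)
            events.extend(value)         # assumes each payload is an array of JSON objects
        # The returned dict becomes the input of the next action in the sequence.
        return {"events": events}

    # Action 2: aggregate. Receives the parsed events from action 1 and
    # reduces them to a single summary message. In the full pattern the
    # summary is then posted to the outbound topic; that step is elided here.
    def main(params):
        events = params.get("events", [])
        return {"summary": {"count": len(events), "events": events}}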

When you use this code pattern, you learn the following skills:

  • Serverless computing: Deploy a reference architecture with Cloud Functions to run code in response to messages.
  • Kafka: Work with data services and execute logic in response to message events.

Flow

  1. The developer simulates a client publishing application and puts a new array of JSON objects onto an Apache Kafka topic.
  2. The arrival of the message fires a trigger, which is configured to listen for messages sent to that topic.
  3. A rule maps the trigger to the first action, which downloads and parses the message array.
  4. The message array is then passed to a second action in a sequence, which aggregates, or reduces, the data to a single message.
  5. The second action posts the new message to another Event Streams topic for another application to process. (A sketch of how this wiring might look on the command line follows.)
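
In OpenWhisk terms, the flow maps onto triggers, rules, sequences, and actions. Purely as an illustration, the wiring could be created with the IBM Cloud Functions CLI along these lines; every name is a placeholder, and an Event Streams package binding (here event-streams-binding) is assumed to already exist:

    # Placeholder names throughout; assumes an Event Streams package
    # binding named "event-streams-binding" already exists.

    # Steps 1-2: a trigger fired whenever records land on the inbound topic.
    ibmcloud fn trigger create message-received \
      --feed event-streams-binding/messageHubFeed \
      --param topic in-topic \
      --param isJSONData true

    # Step 4: a sequence chaining the parse action into the aggregate action.
    ibmcloud fn action create parse-and-aggregate \
      --sequence parse-messages,aggregate-messages

    # Step 3: a rule that maps the trigger to the sequence.
    ibmcloud fn rule create handle-message message-received parse-and-aggregate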

Instructions

Find detailed technical steps for this code pattern in the README.md file in the GitHub repository.

  1. Deploy through the Cloud Functions console user interface.
  2. Deploy using the wskdeploy command line tool. (A sketch of a deployment manifest follows this list.)
  3. Use alternative deployment methods.
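
For option 2, wskdeploy reads the deployment from a manifest file; the repository ships the real one. Purely to illustrate the shape such a manifest takes, under the assumption of Python actions and the placeholder names used above, it might look roughly like this:

    # manifest.yaml (sketch; all names, paths, and runtime kinds are placeholders)
    packages:
      message-processing:
        actions:
          parse-messages:
            function: actions/parse.py
            runtime: python:3.11
          aggregate-messages:
            function: actions/aggregate.py
            runtime: python:3.11
        sequences:
          parse-and-aggregate:
            actions: parse-messages, aggregate-messages
        triggers:
          message-received:
            feed: event-streams-binding/messageHubFeed
            inputs:
              topic: in-topic
              isJSONData: true
        rules:
          handle-message:
            trigger: message-received
            action: parse-and-aggregate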