Since its inception, the fast-evolving serverless architecture paradigm has been successfully adopted across many domains, in part because of its low deployment and management effort. This article applies serverless architecture to the event-driven integration pattern between cloud-native applications and legacy systems of record, which presents an interesting opportunity for cost optimization, efficiency, and faster time to market.

From the perspectives of enterprise architecture and cloud architecture, this article introduces an architecture pattern for an event-driven integration approach that is applied to an example use case in the healthcare industry. It also discusses anti-patterns and points to sample code to jump-start using this approach.

Prerequisites

This article assumes that you have a basic understanding of serverless architecture.

Estimated time

It takes about 15 minutes to read this article and understand the architecture pattern and the companion use case it describes.

What to expect from serverless computing providers

As you consider applying a serverless architecture to an integration between cloud-native applications and legacy systems of record, review the following capabilities of serverless providers:

  • You can write and manage function code without worrying about underlying infrastructure and platform provisioning.
  • Serverless platforms manage operational aspects such as deployment, autoscaling configuration, and availability.
  • You have different runtime choices.
  • Serverless providers include identity and access management for access controls for serverless functions.
  • You can choose from multiple modes of invocation, for example, event triggers, APIs, and messaging.
  • You can use environment variables, logging, and monitoring.
  • Pricing is based on function usage, and there are no separate costs for infrastructure as a service (IaaS) or platform as a service (PaaS).
  • Extended environment support includes databases, storage, and integration with other applications and services.

Characteristics of services that qualify for serverless computing

It is extremely important to evaluate application characteristics and ensure that they align with the serverless architecture pattern. The following is the current view of the characteristics that make applications ideal candidates for a serverless implementation. We also list some anti-patterns that are not suitable for a serverless architecture.

Applications that qualify for a serverless architecture include the following characteristics:

  • Short-running stateless functions (seconds or minutes)
  • Seasonal workloads with distinct peak and off-peak periods
  • Production volumetric data that shows significant idle time
  • Event-based processing or asynchronous request processing for implementing use cases
  • A need to simplify operations so that server maintenance is no longer the responsibility of the function provider organization (for example, automatic scaling without the need to configure the underlying PaaS)
  • Microservices that can be built as stateless functions
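To make the "short-running stateless function" characteristic concrete, here is a minimal sketch of such a handler in Node.js, the runtime used later in this article. The `main` entry point follows the common functions-as-a-service convention; the order payload fields are assumptions for illustration only.

```javascript
// A short-running, stateless handler: everything it needs arrives in
// `params`, and nothing is retained between invocations.
function main(params) {
  var order = params.order || {};
  // Derive the result purely from the input; no session or local state.
  var total = (order.quantity || 0) * (order.unitPrice || 0);
  return { orderId: order.id, total: total };
}
```

Because the function depends only on its input, the platform can scale it out freely and recycle instances at any time.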

Anti-patterns for serverless computing

The following examples are reasons not to use a serverless architecture with your applications:

  • An over-engineered design uses serverless functions by breaking down components to very fine-grained tasks that are not meaningful business functions.
  • A function requires a stateful session to implement the use case or requirements.
  • The serverless application architecture is defined as vendor dependent (and there is a potential for vendor lock-in, particularly involving platform capabilities like authentication, scaling, monitoring, or configuration management).
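To illustrate the stateful-session anti-pattern, the following hypothetical sketch shows why in-memory state is unreliable in a serverless function: module-scope variables live only as long as a particular container instance.

```javascript
// Anti-pattern sketch: state stored at module scope is NOT durable.
// A cold start gives a fresh instance, so this counter can silently reset.
var requestCount = 0; // lost whenever the platform recycles the instance

function main(params) {
  requestCount++;
  // Any use case that depends on this value across invocations needs an
  // external store (database, cache) instead of in-memory state.
  return { countSeenByThisInstance: requestCount };
}
```

If a use case genuinely requires session state, keep the state in an external service and pass only identifiers through the function.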

Emerging use cases for functions as a service (FaaS)

The intent of this section is not to provide an exhaustive list of use cases. However, each of the following use cases is a fast-emerging pattern for using serverless architecture in application design:

  1. The DevOps pipeline is emerging as one of the top candidates for adoption of serverless computing (for example, functions that address operational issues by taking corrective actions in response to an operational event). Development and test environments using serverless functions have made significant innovations toward optimal consumption of resources. Examples include functions that bring down development and test resources when no users are logged in and that bring relevant resources up when a user logs in (based on user profile and preferences).
  2. Applications act on events triggered by internal and external services or sources. Tasks are scheduled according to a specific time or event, such as triggering fraud analytics in response to suspicious activity.
  3. An anomaly detected from sensor input triggers performing analytics.
  4. A database change triggers running application logic (for example, to enable change data capture on select datasets).
  5. Business events cause analytics to be performed and images to be processed. For example, as soon as a medical image is uploaded, imaging analytics generate thumbnails for it.
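As a sketch of the database-change use case (item 4), the following hypothetical handler reacts to a change-data-capture event. The shape of the change record is an assumption for illustration, not any specific product's format.

```javascript
// Hypothetical CDC-triggered handler: run application logic only for
// changes to the datasets selected for change data capture.
function main(change) {
  var watchedTables = ["prescriptions", "refills"]; // assumed dataset names
  if (watchedTables.indexOf(change.table) === -1) {
    // Not a watched dataset; do nothing.
    return { action: "ignore", table: change.table };
  }
  // For watched tables, forward the change downstream.
  return { action: "propagate", table: change.table, key: change.key };
}
```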

Use case: Event integration between cloud-native application and systems of record using serverless architecture

An application that enables physicians and care practitioners to create and update prescriptions is built on a legacy platform. However, prescription orders and refills need to be self-service functions for the end users of the application (the consumers, or members). Note that the following architecture decisions for this use case might vary from customer to customer, based on the existing landscape and requirements.

Key architecture decisions

The application architect makes the following architecture decisions for this use case:

  • Retain legacy applications that provide capabilities for prescription management, used by physicians, care practitioners, and pharmacists. Avoid changes to these complex applications that are not used to serve consumers (members) directly.
  • Move the consumer-centric function of placing a prescription order into a modernized and highly scalable cloud-native application, which enables a higher degree of self service and agility.
  • Make “Prescription Order” an omni-channel application. All channels must use the modernized cloud-native service. Eventually it will be available to all types of users (consumers, care practitioners, and pharmacists).
  • Decoupling the prescription order service from a legacy application is a complex exercise. Use robust and proven techniques to analyze the legacy application. Employ assets that are powered by data science to analyze the current application and extract business rules that need to be migrated to the modernized service for prescription order management.

Reference architecture

The following reference architecture can be reused for a wider range of use cases that require integration between modernized digital services and legacy systems of record.

  • Event sources: legacy backend systems where updates to critical business data need to be monitored
  • Event producers: listen to event sources and generate corresponding events
  • Event processing engine: processes events in real time in a highly scalable manner, with no idle-capacity overhead because it is available in a pay-per-use model
  • Event consumers: all systems (including microservices) that are affected by events
  • Downstream integration: additional integrations, such as special events that need to trigger workflows
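To make the flow through these components concrete, the following is a hypothetical example of an event in the standardized format that the event processing engine might emit to consumers. All field names are assumptions for illustration, not an actual schema.

```javascript
// Hypothetical standardized event as emitted on the output topic.
var prescriptionUpdateEvent = {
  eventType: "PRESCRIPTION_UPDATED",    // what happened
  source: "legacy-prescription-system", // which system of record changed
  timestamp: "2020-01-15T10:30:00Z",    // when the change was captured
  payload: {                            // consumer-facing details
    prescriptionId: "RX-1001",
    status: "REFILL_REQUESTED"
  }
};
```

A stable, source-agnostic event shape like this is what lets new consumers subscribe without knowing anything about the legacy system's proprietary format.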

Architecture diagram of event integration between cloud-native application and systems of record

Example of use applied to the prescription use case

The following steps are an example of the prescription use case:

  1. The Change Data Capture (CDC) agent monitors changes to prescription data sets in the prescription source systems and transmits the captured changes to the Change Data Capture server.
  2. The Change Data Capture server receives the change data from the agent and generates a Prescription Update Event, which is published on its topic (for example, Topic 1).
  3. The Event Processing Engine listens to Topic 1, and the Event Ingestion Pipeline is initiated.
  4. The Prescription Update Event is transformed from a proprietary format to a standardized format and is pushed to the Output Topic.
  5. The Output Topic is listened to by a Consumer, which in this case is the Prescription Event processor.
  6. The Prescription Event Processor validates the update, converts the Prescription update into a JSON Object, and pushes it to the target database.
  7. Data access services on the target database make prescriptions data available to the Prescription Order Management cloud-native application.
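As a sketch of step 6, the following hypothetical function validates a prescription update and converts it into the JSON object that is pushed to the target database. The field names are assumptions for illustration, not the actual schema.

```javascript
// Hypothetical validation and conversion for the Prescription Event Processor.
function toTargetRecord(update) {
  // Validate the update before converting it.
  if (!update || !update.rxId || !update.patient) {
    throw new Error("Invalid prescription update");
  }
  // Convert to the JSON shape stored in the target database.
  return {
    prescriptionId: update.rxId,
    patient: update.patient,
    drug: update.drug,
    status: update.status || "UPDATED"
  };
}
```

Keeping validation and conversion in one small stateless function makes this step easy to scale and to test in isolation.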

Additional business value realized with this use case

As you can see, this use case realizes the following additional business value:

  • The legacy application for prescription management cannot handle thousands of requests per second. Data as a service for prescriptions can serve millions of requests per second.
  • Performance of accessing prescription information improves drastically, from the range of 7 to 15 seconds down to milliseconds, with an end-to-end event latency of less than 5 seconds (latency should be aligned to an acceptable range).
  • There are scheduled and unscheduled downtimes for the prescription backend applications. With this architecture, consumer-facing digital services continue to function unaffected, significantly increasing availability for end users.

Reference code for this use case

This reference code uses the Event Streams service in IBM Cloud to implement the event topic that receives the prescription change event. IBM Cloud Functions is used to transform the prescription message in response to the prescription feed from the queue.

To walk through this example you need the following components:

  • Event Streams service
  • IBM Cloud Functions
  • The IBM Cloud Command Line Interface (CLI) and the Cloud Functions CLI plug-in downloaded to your local machine and configured.

  • Configure IBM Event Streams.

    First you provision an IBM Event Streams service instance. Log into IBM Cloud, provision an Event Streams instance, and name it healthcare-stream. On the Manage tab of the service dashboard, create a topic named prescription-topic. Set the corresponding names as environment variables in a terminal window:

     export KAFKA_INSTANCE="healthcare-stream"
     export KAFKA_TOPIC="prescription-topic"
    

    Then, create a package binding for the Event Streams service instance. Use the built-in Cloud Functions Kafka package, which contains a set of actions and feeds that integrate with both Apache Kafka and Event Streams (based on Kafka). With Cloud Functions, this package can be automatically configured with the credentials and connection information from the Event Streams instance you provisioned earlier. Make it available by refreshing your list of packages:

     # Ensures the IBM Event Streams credentials are available to Cloud Functions.
     ibmcloud fn package refresh
    
  • Attach a trigger to the Event Streams topic.

    Triggers can be explicitly fired by a user or fired on behalf of a user by an external event source, such as a feed. Use the following code to create a trigger to fire events when messages are received using the messageHubFeed provided in the Event Streams package:

     # Create trigger to fire events when messages (records) are received
     # Create trigger to fire events when messages (records) are received
     ibmcloud fn trigger create message-received-trigger \
         --feed "IBM Cloud_${KAFKA_INSTANCE}_Credentials-1/messageHubFeed" \
         --param topic "$KAFKA_TOPIC"
    
  • Implement and map the action handler.

    First, write the action handler. The handler can be coded in any of the main programming languages, such as Java, JavaScript, Python, or Swift. This example uses Node.js. Every handler has a main function, which is the function that runs when the action is invoked.

    Create a file named process-message.js. This file defines an action written as a JavaScript function. It transforms the messages that are received from the prescription-topic topic. The following example expects a stream of messages that contain a prescription object.

     function main(params) {

       console.log(params);

       return new Promise(function(resolve, reject) {

         var msgs = params.messages;

         var prescriptions = [];    // an array to store the transformed prescriptions
         for (var i = 0; i < msgs.length; i++) {
           var msg = msgs[i];
           var prescription = msg.value.prescription;

           // Transform the prescription
           var transformedPrescription = transform(prescription);

           // Store the transformed prescription in the array
           prescriptions.push(transformedPrescription);
         }

         // Return all the transformed prescriptions
         resolve({"prescriptions": prescriptions});
       });
     }

     // Placeholder transformation; replace with your format-mapping logic
     function transform(prescription) {
       return prescription;
     }
    

    Now, deploy a Cloud Function from the JavaScript file:

     ibmcloud fn action create process-message process-message.js
    

    Map the action to the trigger with a rule. Configure this action to be invoked in response to events fired by the message-received-trigger when messages are received on the prescription-topic topic.

    To do this, you create a rule, which maps triggers to actions. After the rule is created, the process-message action is run whenever the message-received-trigger is fired in response to new messages that are written to the event stream.

     ibmcloud fn rule create process-message-rule message-received-trigger process-message
    

    Enter data to fire a change. Begin streaming the Cloud Functions activation log in a second terminal window:

     ibmcloud fn activation poll
    

    Now send a message to Event Streams using the message producer action back in the original window:

     echo '{"prescription": {"patient": "John Smith", "drug": "Microzide"}}' > records.json
     DATA=$( base64 records.json )

     ibmcloud fn action invoke "IBM Cloud_${KAFKA_INSTANCE}_Credentials-1/messageHubProduce" \
       --param topic "$KAFKA_TOPIC" \
       --param value "$DATA" \
       --param base64DecodeValue true
    

    View the log to look for the change notification. You should see activation records for the producing action, the rule, the trigger, and the consuming action.

Summary

This article discussed using a serverless architecture for event-driven integration between cloud-native applications and legacy systems of record. The next parts of this article series address IoT streaming, blockchain functions, and analytics use cases. Future parts also compare the serverless pattern with other implementation technologies, such as microservices and containers.