
Design and deliver an event-driven, cloud-native application at lightning speed

Introduction

For a software delivery project to be successful, alignment is required across multiple enterprise disciplines, such as development, operations, security, and compliance. The Accelerators for Teams feature in IBM Cloud Pak for Applications is designed to speed up the development of cloud-native applications by enabling multi-disciplinary teams to codify and centrally manage decisions, improving the end-to-end journey from a business problem to a production application. The article Introduction to accelerators for cloud-native solutions explains the full value proposition behind Accelerators for Teams and how you can leverage this innovative technology to expedite development.

This tutorial covers the new Accelerator for Event-driven Solutions, demonstrating how you can use one of the Reference Blueprints to move quickly from design to deployment of an event-driven application that contains only sample code. In addition, you will look at a fully functioning implementation of the Reference Blueprint that you can clone and deploy to see how this event-driven application scales as workload is applied. Before starting the tutorial, let's cover a few basics about event-driven architecture.

The advantages of an event-driven architecture

The term event-driven describes a methodology that can be applied to software architecture design. It puts events at the heart of an ecosystem, with decoupled microservices producing and consuming events. Events represent changes in state, and together they form an immutable narrative of the business. An event-driven approach can be advantageous over traditional synchronous messaging systems for a number of reasons, including:

  • Resiliency: Services can fail, recover, and replay events.
  • Push versus pull: Clients can receive updates via push rather than having to poll.
  • Decoupling: Changes can be made in one service without impacting another.
  • Scalability: Services can be scaled to meet the needs of the application.
  • Elasticity: Services can scale up and down autonomously based on demand metrics like CPU and number of requests.
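To make the elasticity point concrete, a standard Kubernetes HorizontalPodAutoscaler can scale a consumer service based on CPU demand. The following is a minimal, illustrative sketch only; the deployment name barista-kafka and namespace coffeeshop-dev match this tutorial's scenario, and the replica bounds and threshold are arbitrary assumptions:

```yaml
# Illustrative only: autoscale the Kafka barista between 1 and 5 replicas
# when average CPU utilization exceeds 80% (names and numbers are assumptions)
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: barista-kafka
  namespace: coffeeshop-dev
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: barista-kafka
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80
```

You would apply a file like this with oc apply -f; the platform then adds or removes barista pods as load changes.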

The Accelerator for Event-driven Solutions provides a Reference Blueprint for event-driven applications. This tutorial uses the Coffee Shop blueprint to show how you can use Accelerator technology to automatically generate all the backend repositories for the application and then deploy the sample code to your Red Hat OpenShift cluster. The blueprint requires a number of Kubernetes operators that enhance the journey to event-driven, which are installed by default with the IBM Cloud Pak for Applications technology preview:

  • Appsody Operator: An operator to run Appsody-developed applications on Kubernetes.
  • Strimzi Operator: An operator to run Apache Kafka on your Kubernetes cluster.
  • Service Binding Operator: An operator to allow easier binding of applications to backend services, such as databases, without having to manually set up secrets and configuration properties.

About the Coffee Shop Reference Blueprint

The Coffee Shop blueprint is an example scenario that simulates the workflow of a coffee shop domain. In this coffee shop, customers can order from a range of different hot beverages. Their orders are placed into a queue and fulfilled by a barista. As the shop gets busier with orders, more baristas can join in to fulfill the orders.

A customer creates an order via the Coffee Shop user interface (UI) or a REST endpoint and the order events are handled in two different ways, depending on the barista implementations:

  • HTTP Barista: This microservice is a simple REST endpoint. Customers make an HTTP POST request to submit orders. The orders UI awaits a response that the beverage is ready. This style of messaging is synchronous request-response.
  • Kafka Barista: This microservice uses asynchronous messaging. Orders from the UI are published to a Kafka event bus on the orders topic, which the Kafka barista subscribes to. When the order is fulfilled, the Kafka barista publishes an event to a queue topic to indicate that the coffee is ready. The UI subscribes to this queue topic.
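For orientation, a Quarkus service like the Kafka barista typically wires its channels with MicroProfile Reactive Messaging configuration of the following shape. This is an illustrative sketch, not the generated code's actual configuration; the property names follow the SmallRye Kafka connector, and the serializer classes are assumptions:

```properties
# Illustrative application.properties for the Kafka barista (see assumptions above)
# Subscribe to the "orders" topic to receive new orders
mp.messaging.incoming.orders.connector=smallrye-kafka
mp.messaging.incoming.orders.topic=orders
mp.messaging.incoming.orders.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer

# Publish "coffee ready" events to the "queue" topic for the UI to consume
mp.messaging.outgoing.queue.connector=smallrye-kafka
mp.messaging.outgoing.queue.topic=queue
mp.messaging.outgoing.queue.value.serializer=org.apache.kafka.common.serialization.StringSerializer
```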

The following diagram shows an overview of the scenario:

Detailed reference architecture diagram illustrating the Coffee Shop scenario architecture; `coffeeshop-ui microservice`, and the two different barista microservices. More information is provided in the text that follows.

The microservices in this scenario are coffeeshop-ui microservice (Open Liberty), HTTP-barista microservice (Open Liberty), and Kafka-barista microservice (Quarkus). The microservices are developed on Application Stacks and are containerized, ready to be deployed to a Kubernetes cluster.

Prerequisites

To work through this tutorial, you must have the following prerequisites in place:

  • A Red Hat OpenShift cluster with the IBM Cloud Pak for Applications technology preview installed, which includes the Appsody, Strimzi, and Service Binding operators.
  • A GitHub account with permission to create organizations and repositories.
  • The OpenShift command-line interface (oc), logged in to your cluster.

Estimated time

Completing this tutorial should take about 60 minutes.

Steps

To demonstrate how the Accelerator for Event-driven Solutions can help speed up application development and deployment, the following tutorial steps walk you through an end-to-end workflow that uses the Coffee Shop Reference Blueprint.

Step 1: Create a GitHub organization for the Coffee Shop artifacts

  • In GitHub, click your GitHub profile picture, then click Settings.
  • Under Personal settings, click Organizations.
  • Click New Organization.
  • In the Organization name field, enter my-coffeeshop.
  • Click Create organization, then click Finish.

Step 2: Load the Accelerator Reference Blueprint for the Coffee Shop application

  • From your IBM Cloud Pak for Applications landing page, select Build Solution to launch the Solution Builder tool.
  • Load the Coffee Shop Reference Blueprint onto the canvas by clicking New Blueprint and choosing (Ref) Coffee Shop from the list of available blueprints.
  • The application topology consists of three microservices, coffeeshop-ui, barista-kafka, and barista-http, as shown in the following screen capture:

    Coffee Shop Reference Blueprint loaded onto Solution Builder canvas.

For this tutorial, you do not need to change any of the values, but you might if you adapted this blueprint for your own event-driven application. For example, you might choose a different application stack for your microservice.

Step 3: Add configuration details for GitHub

  • Click on Blueprint properties.
  • You can choose to set up separate GitOps repositories for development, staging, and production environments. For this tutorial, uncheck the boxes for staging and production.

    Diagram of the Coffee Shop properties panel

  • Under Kafka Config, you can set your Kafka configuration properties such as number of Kafka replicas (the number of brokers you require in your cluster) and number of Zookeeper replicas. For this tutorial, leave them both with a value of 1.

  • Save your changes in the Blueprint Properties pane.

Step 4: Generate the repositories on GitHub

  • Generate a GitHub personal access token on GitHub that gives you full control of any repositories in your organization and copy it to your clipboard. For help, see Creating a personal access token for the command line.
  • In Solution Builder, click Generate.
  • Enter the URL for the GitHub organization in which you would like to generate the related repositories.
  • When asked for your GitHub credentials, enter your GitHub user ID and paste the token into the Git Token field.
  • Click Generate again.

As Solution Builder generates the repositories, an Execution window indicates progress. Running processes show as In Progress. When complete, processes show as Complete with a green check mark.

Diagram of the Execution window, which is explained in the surrounding text

Step 5: Check that all the GitHub repositories were generated successfully

Go to your GitHub organization to view the repositories. There should be four repositories: baristahttp, baristakafka, coffeeshopui, and gitops-dev.

Diagram of the GitHub organization with all the repositories that make up the application

These repositories contain the scaffolding for the application. The microservices repositories contain sample code that runs, and the GitOps repository contains the configuration to deploy all of the microservices and establish the appropriate connections to the Kafka event bus. At this stage, the business logic to drive the Coffee Shop application is missing. However, to demonstrate the end-to-end workflow, you will continue to deploy the application in its present state.

Step 6: Prepare your deployment environment

The application must run in its own namespace on your OpenShift cluster. Follow these steps:

  • Create the deployment namespace for the application. To create the coffeeshop-dev namespace on your OpenShift cluster, run the following command:

    oc create namespace coffeeshop-dev
    
  • Add the namespace to your Kabanero Custom Resource Definition (CRD).

    • Edit your CRD by running the following command:

      oc edit kabanero kabanero -n kabanero
      
    • Add the namespace to the targetNamespaces: array as shown in the following example:

      targetNamespaces:
      - kabanero
      - coffeeshop-dev
      
    • Save the file.

Step 7: Configure webhooks

Webhooks connect pull requests and merge events that occur at a GitHub repository to your pipelines. In this tutorial, you configure webhooks using the Tekton dashboard.

  • Generate a GitHub personal access token. You must generate a GitHub personal access token so that the pipelines can access your Git repositories:

    • Go to https://github.com/settings/tokens and click Generate new token.
    • In the Note field, add a short description. For example, webhook_token.
    • Under Select scopes, check the boxes for repo and admin:repo_hook, and click Generate token.
    • Copy the token to your clipboard.
  • Create secrets in your Tekton dashboard. Follow these steps to store your GitHub personal access token in a Kubernetes secret:

    • From your Tekton dashboard, select Secrets from the sidebar menu.
    • For Secret type, select Password and click Create.
    • In the Name field, enter gitops-token.
    • For Namespace, select kabanero from the drop-down list.
    • For Access To:, select Git Server from the drop-down list. Update the default value (https://github.com) if necessary.
    • In the Username field, enter the GitHub user name.
    • In the Password field, add the personal access token that you generated in the previous step.
    • Click Create.
    • Select kabanero-pipeline from the list of service accounts to patch and click Patch.
  • Create a webhook for each microservice repository. For each microservice repository, complete the following steps:

    • From your Tekton dashboard, select Webhooks from the sidebar menu and click Add Webhook. The Create Webhook pane opens.
    • Under Webhook Settings enter the following information:

      • Name: Choose a unique name for your webhook. For example, incorporate the name of the microservice repository so that you can distinguish between webhooks.
      • Repository URL: The URL of the GitHub repository.
      • Access token: Click the add (+) button and enter a name for this secret and the GitHub Access Token that you created earlier into the fields provided.
    • Under Target Pipeline Settings enter the following information:

      • Namespace: Select kabanero.
      • Pipeline: Select the build-push-promote-pl pipeline.
      • Service Account: Select the kabanero-pipeline service account.
      • Docker Registry: Add image-registry.openshift-image-registry.svc:5000/coffeeshop-dev. Alternatively, you can add your own Docker Hub registry (http://index.docker.io/<dockerhub-username>).
    • Click Create. The dashboard remembers the values that you add, which makes adding subsequent webhooks a simpler task.

  • Create a webhook for each GitOps repository in the organization. For each GitOps repository in your organization, complete the following steps:

    • From your Tekton dashboard, select Webhooks from the sidebar menu and click Add Webhook. The Create Webhook pane opens.
    • Under Webhook Settings enter the following information:

      • Name: Choose a unique name for your webhook. For example, incorporate the name of the microservice repository so that you can distinguish between webhooks.
      • Repository URL: The URL of the GitHub repository.
      • Access token: Click the add (+) button, then enter a name for this secret and the GitHub Access Token that you created earlier into the appropriate fields.
    • Under Target Pipeline Settings enter the following information:

      • Namespace: Select kabanero.
      • Pipeline: Select the deploy-gitops-pl pipeline.
      • Service Account: Select the kabanero-pipeline service account.
      • Docker Registry: Enter anything, as this field is not used.
    • Click Create.

  • Validate your webhooks for each GitHub repository. To validate that your webhooks are correctly set up in GitHub, complete the following checks:

    • From the GitHub Settings tab of each repository, select Hooks to find the webhook you created.
    • A green tick next to your webhook indicates that it is working.

Step 8: Connect the GitOps pipelines

The GitOps pipelines run tasks that drive a workflow between code repositories, GitOps repositories, and the target deployment environment.

build-push-promote-pl pipeline

When a pull request is merged at a GitHub code repository, the build-push-promote-pl pipeline runs tasks that process the following workflow:

  • Enforce the governance policy.
  • Build the container image.
  • Sign the image (optional).
  • Push the image to the image registry.
  • Scan the image.
  • Promote configuration changes to the configured GitOps repository.

To complete the setup for this pipeline, follow these steps:

  • Configure a ConfigMap in the Kabanero namespace.

    • Create a file called gitops-map.yaml with the following content:

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: gitops-map
        namespace: kabanero
      data:
        gitops-repository-url: <gitops-repo-url>
        gitops-repository-type: ghe
        gitops-commit-user-name: <user_name>
        gitops-commit-user-email: <user_email>
      

      Where:

      • <gitops-repo-url> is the URL of the GitOps repository. For example, https://github.ibm.com/my-coffeeshop/gitops-dev.
      • <user_name> is the GitHub username to apply to the pull request.
      • <user_email> is the email address for the GitHub user identified by <user_name>.
    • Apply the file with the following command: oc apply -f gitops-map.yaml.

deploy-gitops-pl pipeline

When a pull request is merged at a GitOps repository, the deploy-gitops-pl pipeline triggers a deployment to the target environment, which updates the application on the cluster.

Step 9: Deploy the Coffee Shop application

To deploy the Coffee Shop application for the first time, you must build each microservice individually. Complete the following steps for each repository:

  • Create a pull request. Complete the following tasks from each repository:

    • On a new Git branch, edit the README.md and make a change.
    • Save the file.
    • Create a pull request to merge the branch to master.
    • Wait for the build-push-promote-pl pipeline to complete before you start the next step.
  • Merge the pull request. Check your pipelines dashboard to observe the build-push-promote-pl pipeline run. The pipeline completes the run by creating a pull request at the GitOps repository.

  • Merge the pull request at the GitOps repository. Check your pipelines dashboard to observe the deploy-gitops-pl pipeline run. When the pipeline run completes, check that the microservice is deployed to OpenShift by running the following command:

    oc get deployments -n coffeeshop-dev
    

    The output is similar to the following example, which shows that the barista-kafka, barista-http, and coffeeshop-ui microservices are available:

    NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
    kafka-entity-operator   1/1     1            1           28m
    kafka-kafka-exporter    1/1     1            1           28m
    barista-kafka           1/1     1            1           32m
    barista-http            1/1     1            1           34m
    coffeeshop-ui           1/1     1            1           36m
    

    Your end-to-end workflow is now fully enabled. When a code change from a developer is merged, the webhook on the code repository triggers the build-push-promote-pl pipeline. This pipeline runs a series of tasks that promote configuration changes in a pull request at the GitOps repository. When this pull request is merged, the webhook on the GitOps repository triggers the deploy-gitops-pl pipeline, which deploys the updates to the target deployment environment.

  • Check that CoffeeShop is running on your OpenShift cluster. In the OpenShift UI, under Developer > Topology, select the coffeeshop-dev project. When the deployment process is complete, the microservices are visible on the dashboard, as shown in the following screen capture:

    Diagram of the Coffee Shop application running on OpenShift

Congratulations! You successfully used the Accelerator for Event-driven Solutions to generate the skeleton application and deployed it to OpenShift. Each microservice is running on the cluster with the appropriate connection configuration in place. Health checking, liveness checking, and metrics are already built in, which allows OpenShift to manage and monitor the application. Any updates that are made in GitHub to the microservices or to the configuration of the overall application drive a continuous integration and continuous delivery (CI/CD) workflow to update the application in your deployment environment.

Steps to deploy the full Coffee Shop application

To see a working Coffee Shop application that allows you to order coffee and observe this event-driven solution in action, you can clone and deploy the demo code for this Solution Blueprint from the icpa-coffeeshop organization.

Step 1: Clone the Coffee Shop application

  • Create a unique GitHub organization for the application, for example, my-coffeeshop.
  • Clone all of the repositories into the organization.

Step 2: Create a namespace for the application

  • Log in to your OpenShift cluster by opening a terminal window and logging in with your token.
  • Create a namespace called coffeeshop-dev for the application on your OpenShift cluster by running the following command:

    oc create namespace coffeeshop-dev
    

Step 3: Create a Kafka cluster

Create a Kafka cluster by running the following command:

oc apply -f environments/coffeeshop-dev/apps/coffeeshop/base/kafka/kafka.yaml
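The kafka.yaml file ships in the GitOps repository, so you do not need to write it yourself. For orientation, a minimal Strimzi Kafka resource of the kind it contains looks roughly like the following sketch; the cluster name, listener, and storage settings here are illustrative assumptions, not the repository's actual values:

```yaml
# Illustrative Strimzi Kafka custom resource (see assumptions above)
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: coffeeshop
  namespace: coffeeshop-dev
spec:
  kafka:
    replicas: 1            # number of Kafka brokers
    listeners:
      plain: {}            # plaintext listener on port 9092
    storage:
      type: ephemeral      # non-persistent storage, fine for a demo
  zookeeper:
    replicas: 1
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}      # lets KafkaTopic resources manage topics
```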

Step 4: Deploy the Coffee Shop application

  • Deploy the three microservices coffeeshop-ui, barista-kafka, and barista-http:

    oc apply -k environments
    
  • In the OpenShift UI, under Developer > Topology, select the coffeeshop-dev project. When the provisioning process completes, you should see the deployments running.

    Deployed Openshift deployment
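The -k flag tells oc to build the manifests with kustomize before applying them. As an illustration of the mechanism only (the demo repository's actual kustomization files and paths may differ), a kustomization.yaml simply lists the resources or directories to include:

```yaml
# Illustrative kustomization.yaml; actual paths in the demo repository may differ
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- coffeeshop-dev/apps/coffeeshop/base
```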

Step 5: Order coffee

Now that your deployment is complete, you can start to order some hot beverages.

  • Click the coffeeshop-ui microservice and, under Routes in the right-hand panel, click the Location URL to open the UI so that you can start ordering coffee.
  • In the UI, enter a customer name and select the order method.

    UI for ordering coffee

  • Now select the beverage product and click Place Order.

    Choosing beverage in UI

The order goes onto the queue and you should see it appear in the queue section, as shown in the following diagram. At first, the state indicates that the order is IN_PROGRESS.

Coffee order in progress

As the barista fulfills the order, their name appears in the Prepared by column and the State gets updated to show the status of the order.

Coffee order done

Step 6: Order more coffee

As your coffee shop gets busier, the queue takes longer for the barista to work through. However, if you want to scale out your service so that more baristas can be operational and more coffee orders can be fulfilled at a faster pace, you can start additional Kafka barista services. The extra load is then shared between the Kafka barista services.

To achieve this, ensure that the number of partitions in the orders topic is greater than or equal to the number of baristas; each barista in the same consumer group is assigned a share of the partitions, so extra baristas beyond the partition count would sit idle.
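With the Strimzi topic operator, the partition count is managed declaratively through a KafkaTopic resource. The following is a hedged sketch; the strimzi.io/cluster label must match your actual Kafka cluster name, which is an assumption here, as are the partition and replica counts:

```yaml
# Illustrative KafkaTopic giving the orders topic 3 partitions,
# enough for up to 3 barista consumers in the same consumer group
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: orders
  namespace: coffeeshop-dev
  labels:
    strimzi.io/cluster: coffeeshop   # assumed cluster name; must match your Kafka resource
spec:
  partitions: 3
  replicas: 1
```

You can then scale out the baristas, for example with oc scale deployment barista-kafka --replicas=3 -n coffeeshop-dev, and Kafka rebalances the partitions across the consumers.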

Step 7: Delete deployment

To delete your deployment, delete the coffeeshop-dev namespace. This removes the deployment components. You can accomplish this directly from the OpenShift dashboard or by using the command line:

oc delete project coffeeshop-dev

Summary

The Accelerator for Event-driven Solutions helps you design and implement cloud-native applications that are based on Reactive microservices. The Reference Blueprints provide a springboard to creating your own Solution Blueprint, which represents the Bill of Materials for your application. Then, the Accelerator uses the blueprint to generate the entire structure for your application in Git. This structure enables a Git workflow for developers and a GitOps workflow for container platform operations teams that gets you up and running in a fraction of the time it might take you to do so manually.

Although you still have to code the business logic for your application, the containerized microservices can be built and deployed immediately. Each microservice runs in a container with all the necessary runtime dependencies in place. Each microservice is pre-configured to connect to the correct microservices and event channel on the cluster. Health checking, liveness checking, and metrics are already built in, which allows OpenShift to manage and monitor the application.

From this point on, any updates that are made in GitHub to the microservices, or to the configuration of the overall application, drive a CI/CD workflow to update the application in your deployment environment.

If you cloned and deployed the demo Coffee Shop application in the second part of the tutorial (and ordered a few coffees), you will hopefully understand some of the benefits of event-driven application architectures. Compared to the REST-based barista, the Kafka barista is more loosely coupled and can consume and produce messages asynchronously. The solution is scalable, elastic, and resilient.

To learn more about how to use the Accelerators and Reference Blueprints that are available in Cloud Pak for Applications, see the IBM Knowledge Center technology preview documentation.

To learn more about Reactive architectures, visit the IBM Garage Event-Driven Reference Architecture repository.