
Archived | Design and deliver a REST-based, cloud-native application at lightning speed

Archived content

Archive date: 2021-02-25

This content is no longer being updated or maintained. The content is provided “as is.” Given the rapid evolution of technology, some content, steps, or illustrations may have changed.


For a software delivery project to be successful, alignment is required across multiple enterprise disciplines, such as development, operations, security, and compliance. The Accelerators for Teams feature in IBM Cloud Pak for Applications is designed to speed up the development of cloud-native applications by enabling multi-disciplinary teams to codify and centrally manage decisions, improving the end-to-end journey from a business problem to a production application. The article Introduction to accelerators for cloud-native solutions explains the full value proposition behind Accelerators for Teams and how you can leverage this innovative technology to expedite development.

This tutorial covers the new Accelerator for Cloud-native Apps, demonstrating how you can use one of the Reference Blueprints to quickly move from design to deployment of an app that contains only sample REST-based microservices. Before starting the tutorial, let’s cover a few basics.

The advantages of a cloud-native architecture

Cloud-native is an industry term for applications that are architected, built, and optimized to run on the cloud. Cloud-native applications are typically implemented as loosely coupled microservices that run in containers and are managed by an orchestration system such as Kubernetes. By nature, these applications are elastic: they can scale autonomously based on the demand placed on them. They are also portable between different clouds. Cloud-native applications take advantage of modern software delivery practices such as continuous integration (CI), continuous delivery (CD), DevOps, and GitOps.

Here are some of the advantages of moving to a cloud-native architecture:

  • Loose coupling between microservices
  • Lightweight services with a small code base
  • Elasticity
  • Portability
  • Language agnostic
  • Integration with continuous integration and continuous delivery (CI/CD) systems

The IBM Cloud Architecture Center hosts a full reference architecture for cloud-native solutions, which contains some valuable resources for developing cloud-native applications.

The Accelerator for Cloud-native Apps, a technology preview in IBM Cloud Pak for Applications v4.2, provides an end-to-end workflow for developing REST-based microservice applications. REST-based microservices communicate with each other over HTTP, using APIs typically defined in an OpenAPI specification document.
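As an illustration of what such an API contract looks like, the following is a minimal OpenAPI 3.0 sketch for a hypothetical customer-lookup endpoint. The path, parameters, and schema here are invented for this example and are not taken from the StoreFront blueprint:

```yaml
# Hypothetical OpenAPI 3.0 fragment for a REST-based customer microservice.
openapi: 3.0.3
info:
  title: Customer Service API
  version: 1.0.0
paths:
  /customers/{id}:
    get:
      summary: Retrieve a customer record by ID
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The customer record
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: string
                  name:
                    type: string
```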

By using the Accelerator for Cloud-native Apps, a solution architect can design a microservice application architecture composed of REST-based microservices based on Open Liberty, Spring, Quarkus, and Node.js technologies. The architect can also specify how the microservices interact with each other within a complete topology. Then, skeleton Git repositories can be generated for each microservice, alongside a GitOps repository, so that the complete application can be automatically generated, ready for deployment via continuous delivery pipelines.

The Accelerator for Cloud-native Apps provides a Reference Blueprint for the StoreFront application that implements a REST-based microservice scenario.

About the StoreFront Reference Blueprint

The StoreFront blueprint is based on the IBM Cloud reference architecture for BlueCompute, which is illustrated in the following diagram:

Detailed reference architecture diagram illustrating a Web Backend for Frontend (`webbff`) microservice connected to backend microservices, which are in turn connected to backend databases. More information is provided in the text that follows.

The blueprint represents an application for online shopping. Customers can browse through a catalog that contains a selection of antique computing devices and make a purchase.

You can learn more about BlueCompute in the IBM Cloud Architecture Center. See “Deploy a retail application on Red Hat OpenShift”.


To work through this tutorial, you must have the following prerequisites in place:

Estimated time

Completing this tutorial should take about 30 minutes.


To demonstrate how the Accelerator for Cloud-native Apps can help speed up application development and deployment, the following tutorial steps walk you through an end-to-end workflow that uses the StoreFront Reference Blueprint.

Step 1: Create a GitHub organization for the StoreFront artifacts

  • In GitHub, click your GitHub profile picture, then click Settings.
  • Under Personal settings, click Organizations.
  • Click New Organization.
  • In the Organization name field, enter my-storefront.
  • Click Create organization, then click Finish.

Step 2: Load the Accelerator Reference Blueprint for the StoreFront application

  • From your IBM Cloud Pak for Applications landing page, click Build Solution to start the Solution Builder tool.
  • Load the StoreFront Reference Blueprint onto the canvas by clicking New Blueprint and choosing (Ref) StoreFront from the list of available blueprints.

    The Solution Builder user interface, showing the **More Options** menu and the StoreFront application topology

    The screen capture also shows the application topology. The Web Backend for Frontend (webbff) microservice provides the user interface to the online store. This REST-based microservice binds to four other REST-based microservices that are responsible for querying customer records, items in the shopping catalog, order records, and inventory records. Each microservice has its own backend database.

  • Click each of the components in turn to discover the configuration settings:

    • The webbff component is configured to build a microservice from the Node.js Express application stack.
    • The Customer, Catalog, Order, and Inventory components are configured to build microservices from the Open Liberty application stack. Each microservice connects to a backend database.
    • All the backend databases run on PostgreSQL 11.

Step 3: Add configuration details for GitHub

  • From the menu bar, click Blueprint properties:

    Diagram of the StoreFront properties panel

  • Uncheck the boxes for adding GitOps staging and production environments.

  • Click Save to save your changes in the Blueprint Properties pane.

Step 4: Generate the repositories on GitHub

  • Generate a GitHub personal access token on GitHub that gives you full control of any repositories in your organization and copy it to your clipboard. For help, see “Creating a personal access token for the command line“.
  • In Solution Builder, click Generate.
  • Add the URL for your GitHub organization in the Git Properties field, as shown in the following diagram.
  • When asked for your GitHub credentials, enter your GitHub user ID and paste your Personal Access Token into the Git Token field.
  • Click Generate again.

As Solution Builder generates the repositories, an Execution window indicates progress. Running processes show as In Progress. When generation is complete, the processes show as Complete with a green check mark.

Diagram of the Execution Window, which is explained in the surrounding text

Step 5: Check that all the GitHub repositories were generated successfully

Go to your GitHub organization to view the repositories.

Diagram of the GitHub organization with all the repositories that make up the application

A GitHub code repository is created for each microservice in the StoreFront application.

Each microservice repository contains an app-deploy.yaml file, which is the configuration file used by the Appsody Operator to deploy a project. You can see information about the application stack and the endpoints exposed for the microservice. The following section from the app-deploy.yaml file in the webbff repository identifies the webbff microservice as part of the StoreFront application, running on the nodejs-express stack, version 0.4.8.

labels: storefront webbff nodejs-express 0.4.8
name: webbff
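For context, a complete app-deploy.yaml for an Appsody-based microservice is typically shaped like the following sketch. The field values below are illustrative assumptions, not the exact content that the accelerator generates:

```yaml
# Illustrative shape of an Appsody Operator deployment manifest;
# image path, port, and labels are assumptions for this sketch.
apiVersion: appsody.dev/v1beta1
kind: AppsodyApplication
metadata:
  name: webbff
  labels:
    solution: storefront
spec:
  stack: nodejs-express
  applicationImage: image-registry.openshift-image-registry.svc:5000/storefront-dev/webbff
  service:
    port: 3000
  expose: true
```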

A single GitOps repository, gitops-dev, is also created; it contains configuration information for the development deployment environment.

In the gitops-dev repository, the environments/storefront-dev/env/base/namespace.yaml file shows that the target namespace for the application is storefront-dev.

apiVersion: v1
kind: Namespace
metadata:
  name: storefront-dev

The repositories contain the scaffolding for the application. The microservice repositories contain sample code that runs, and the GitOps repository contains the configuration to deploy all of the microservices and establish the bindings to the PostgreSQL database. At this stage, the business logic to drive the StoreFront application is missing. However, to learn how to implement the end-to-end workflow, you can continue to deploy the application in its present state.

Step 6: Prepare your deployment environment

The application must run in its own namespace on your OpenShift cluster. Follow these steps:

  • Create the deployment namespace for the application. To create the storefront-dev namespace on your OpenShift cluster, run the following command:

    oc create namespace storefront-dev
  • Add the namespace to your Kabanero Custom Resource Definition (CRD).

    • Edit your CRD by running the following command:

      oc edit kabanero kabanero -n kabanero
    • Add the namespace to the targetNamespaces: array as shown in the following example:

      - kabanero
      - storefront-dev
    • Save the file.
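After the edit, the relevant part of the Kabanero custom resource should look like the following excerpt. The apiVersion shown is an assumption based on typical Kabanero releases and may differ in your cluster:

```yaml
# Illustrative excerpt of the Kabanero custom resource after adding the
# storefront-dev namespace; only the targetNamespaces field is shown.
apiVersion: kabanero.io/v1alpha2
kind: Kabanero
metadata:
  name: kabanero
  namespace: kabanero
spec:
  targetNamespaces:
    - kabanero
    - storefront-dev
```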

Step 7: Configure webhooks

Webhooks connect pull requests and merge events that occur at a GitHub repository to your pipelines. In this tutorial, you configure webhooks using the Tekton dashboard.

  • Generate a GitHub personal access token. You must generate a GitHub personal access token so that the pipelines can access your Git repositories:

    • Go to your GitHub personal access tokens settings page and click Generate new token.
    • In the Note field, add a short description. For example, webhook_token.
    • Under Select scopes, check the boxes for repo and admin:repo_hook, and click Generate token.
    • Copy the token to your clipboard.
  • Create secrets in your Tekton dashboard. Follow these steps to store your GitHub personal access token in a Kubernetes secret:

    • From your Tekton dashboard, select Secrets from the sidebar menu.
    • For Secret type, select Password and click Create.
    • In the Name field, enter gitops-token.
    • For Namespace, select kabanero from the drop-down list.
    • For Access To:, select Git Server from the drop-down list. Update the default value if necessary.
    • In the Username field, enter the GitHub user name.
    • In the Password field, add the personal access token that you generated in the previous step.
    • Click Create.
    • Select kabanero-pipeline from the list of service accounts to patch and click Patch.
  • Create a webhook for each microservice repository. For each microservice repository, complete the following steps:

    • From your Tekton dashboard, select Webhooks from the sidebar menu and click Add Webhook. The Create Webhook pane opens.
    • Under Webhook Settings enter the following information:

      • Name: Choose a unique name for your webhook. For example, incorporate the name of the microservice repository so that you can distinguish between webhooks.
      • Repository URL: The URL of the GitHub repository.
      • Access token: Click the add (+) button and enter a name for this secret and the GitHub Access Token that you created earlier into the fields provided.
    • Under Target Pipeline Settings enter the following information:

      • Namespace: Select kabanero.
      • Pipeline: Select the build-push-promote-pl pipeline.
      • Service Account: Select the kabanero-pipeline service account.
      • Docker Registry: Add image-registry.openshift-image-registry.svc:5000/storefront-dev. Alternatively, you can add your own Docker Hub registry (<dockerhub-username>).
    • Click Create. The dashboard remembers the values that you add, which makes adding subsequent webhooks a simpler task.

  • Create a webhook for each GitOps repository in the organization. For each GitOps repository in your organization, complete the following steps:

    • From your Tekton dashboard, select Webhooks from the sidebar menu and click Add Webhook. The Create Webhook pane opens.
    • Under Webhook Settings enter the following information:

      • Name: Choose a unique name for your webhook. For example, incorporate the name of the GitOps repository so that you can distinguish between webhooks.
      • Repository URL: The URL of the GitHub repository.
      • Access token: Click the add (+) button, then enter a name for this secret and the GitHub Access Token that you created earlier into the fields provided.
    • Under Target Pipeline Settings enter the following information:

      • Namespace: Select kabanero.
      • Pipeline: Select the deploy-gitops-pl pipeline.
      • Service Account: Select the kabanero-pipeline service account.
      • Docker Registry: Enter anything, as this field is not used.
    • Click Create.

  • Validate your webhooks for each GitHub repository. To validate that your webhooks are correctly set up in GitHub, complete the following checks:

    • From the GitHub Settings tab of each repository, select Hooks to find the webhook you created.
    • If you find a green check mark against your webhook, it is working.

Step 8: Connect the GitOps pipelines

The GitOps pipelines run tasks that drive a workflow between code repositories, GitOps repositories, and the target deployment environment.

build-push-promote-pl pipeline

When a pull request is merged at a GitHub code repository, the build-push-promote-pl pipeline runs tasks that process the following workflow:

  • Enforce the governance policy.
  • Build the container image.
  • Sign the image (optional).
  • Push the image to the image registry.
  • Scan the image.
  • Promote configuration changes to the configured GitOps repository.

To complete the setup for this pipeline, follow these steps:

  • Configure a ConfigMap in the Kabanero namespace.

    • Create a file called gitops-map.yaml with the following content:

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: gitops-map
        namespace: kabanero
      data:
        gitops-repository-url: <gitops-repo-url>
        gitops-repository-type: ghe
        gitops-commit-user-name: <user_name>
        gitops-commit-user-email: <user_email>


      • <gitops-repo-url> is the URL of the GitOps repository.
      • <user_name> is the GitHub username to apply to the pull request.
      • <user_email> is the email address for the GitHub user identified by <user_name>.
    • Apply the file with the following command: oc apply -f gitops-map.yaml.

deploy-gitops-pl pipeline

When a pull request is merged at a GitOps repository, the deploy-gitops-pl pipeline triggers a deployment to the target environment, which updates the application on the cluster.

Step 9: Deploy the StoreFront application

To deploy the StoreFront application for the first time, you must build each microservice individually. Complete the following steps for each repository:

  • Create a pull request. Complete the following tasks from each repository:

    • On a new Git branch, edit a file in the repository and make a change.
    • Save the file.
    • Create a pull request to merge the branch to master.
    • Wait for the build-push-promote-pl pipeline to complete before you start the next step.
  • Merge the pull request. Check your pipelines dashboard to observe the build-push-promote-pl pipeline run. The pipeline completes the run by creating a pull request at the GitOps repository.

  • Merge the pull request at the GitOps repository. Check your pipelines dashboard to observe the deploy-gitops-pl pipeline run. When the pipeline run completes, check that the microservice is deployed to OpenShift by running the following command:

    oc get deployments -n storefront-dev

    The output is similar to the following example, which shows that the customer, inventory, and order microservices and their respective PostgreSQL databases are available:

    NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
    customer                 1/1     1            1           14d
    customerdb-postgresql    1/1     1            1           14d
    inventory                1/1     1            1           14d
    inventorydb-postgresql   1/1     1            1           14d
    order                    1/1     1            1           14d
    orderdb-postgresql       1/1     1            1           14d

    Your end-to-end workflow is now fully enabled. When a code change from a developer is merged, the webhook on the code repository triggers the build-push-promote-pl pipeline. This pipeline runs a series of tasks that promote configuration changes in a pull request at the GitOps repository. When this pull request is merged, the webhook on the GitOps repository triggers the deploy-gitops-pl pipeline, which deploys the updates to the target deployment environment.

  • Check that StoreFront is running on your OpenShift cluster.

    In the OpenShift UI, under Developer > Topology, select the StoreFront project. When the deployment process is complete, the microservices are visible on the dashboard, as shown in the following screen capture:

    Diagram of the StoreFront application running on OpenShift
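The per-repository change that kicks off the first build can be sketched with ordinary Git commands. The following sketch uses a throwaway local repository so that it is self-contained; in the tutorial you would make the same kind of change in a clone of each microservice repository (file and branch names here are illustrative):

```shell
# Sketch of the trivial change that triggers the build-push-promote-pl
# webhook; uses a temporary local repository for illustration.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git config user.email "dev@example.com"
git config user.name "Storefront Dev"
echo "sample microservice" > README.md
git add README.md
git commit -qm "initial commit"

# Make a trivial change on a new branch.
git checkout -qb trigger-first-build
echo "Trigger initial pipeline build" >> README.md
git add README.md
git commit -qm "Trigger initial build"

# In the real repository, you would now push the branch and open a
# pull request against master; the webhook then starts the pipeline.
git log --oneline -1
```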

Congratulations! You have successfully used the Accelerator for Cloud-native Apps to generate the skeleton application and deploy it to OpenShift.


The Accelerator for Cloud-native Apps helps you design and deliver REST-based application architectures at speed. The Reference Blueprints provide a springboard to creating your own Solution Blueprint, which represents the Bill of Materials for your application. Then, the accelerator uses the blueprint to generate the entire structure for your application in Git. This structure enables a Git workflow for developers and a GitOps workflow for container platform operations teams that gets you up and running in a fraction of the time it might take you to do so manually.

Although you still have to code the business logic for your application, the containerized microservices can be built and deployed immediately. Each microservice runs in a container with all the necessary runtime dependencies in place. Each microservice is pre-configured to connect to the correct microservices or services on the cluster. Health checking, liveness checking, and metrics are already built in, which allows OpenShift to manage and monitor the application.

From this point on, any updates made in GitHub to the microservices or to the configuration of the overall application drive a CI/CD workflow that updates the application in your deployment environment.

To learn more about how to use the Accelerators and Reference Blueprints that are available in Cloud Pak for Applications, see the IBM Knowledge Center technology preview documentation.

Next steps

Consider whether a reactive application architecture can also help you build your cloud-native apps to be more reactive, responsive, and resilient.