Enterprises are increasingly aware of the benefits of building cloud-native solutions, and look to leverage the technology and associated methodologies to rapidly create and deliver new digital experiences. However, to realize the benefits of cloud-native development and to deliver faster, enterprises must overcome the challenges of retaining enterprise governance, as well as the cost of building new skills and transforming end-to-end development practices.
This article provides an overview of the technology preview enhancements to the Accelerators for Teams feature in IBM Cloud Pak for Applications, which delivers capabilities for building both RESTful and event-driven cloud-native solutions. Hereafter these new capabilities are referred to simply as Accelerators.
Accelerators enable you to rapidly deliver cloud-native solutions, from idea to deployed in production. They remove complexity and friction for multi-disciplinary teams by enabling your developers, architects, and operations to codify and centrally manage decisions. Through this approach, your development teams are able to rapidly innovate with the confidence that they comply with your company’s unique operational, security, and technology standards.
Note: This article refers to actions related to disciplines such as operations. Depending on your organization, individuals and teams may have responsibilities across multiple disciplines, such as through the adoption of DevOps or DevSecOps.
Accelerators for Teams provides enterprise governance and increased productivity for multi-disciplinary teams by bringing together Application Stacks, integrated DevOps built on Red Hat OpenShift Pipelines, the Red Hat OpenShift Container Platform, and a choice of developer tools. The technology preview enhancements extend these capabilities with an approach for designing and building applications that are composed of multiple microservices and dependent services, and for deploying them through a GitOps workflow to OpenShift.
Through the technology preview for RESTful and event-driven solutions, architects can design a Solution Blueprint composed of connected microservices and services (a database, for example). From your design, Accelerators automatically generate all the required source code repositories in Git with scaffolded microservices that are continuously built in containers. The microservices are deployed to OpenShift through a GitOps workflow by using OpenShift Pipelines (built on the Tekton open source project). These deployed microservices are pre-configured to provide the following benefits:
- Service discovery: Connections are established to other microservices and to required services.
- Health checks: Microservices can be managed and automatically restarted as necessary.
- Observability: Labels are applied so that microservices show as part of an application in the OpenShift topology viewer. Prometheus metrics are exposed so that performance can be monitored and visualized through Grafana dashboards.
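As a rough sketch of what these pre-configured benefits look like at the Kubernetes level, the following deployment fragment shows the kind of metadata involved: health probes, topology labels, Prometheus annotations, and injected service configuration. The resource names, ports, and paths here are illustrative assumptions, not the actual generated output (the MicroProfile Health endpoints shown are typical of Open Liberty-based stacks).

```yaml
# Illustrative sketch only: the kind of deployment metadata that enables
# health checks, observability, and service discovery out of the box.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service                       # hypothetical microservice name
  labels:
    app.kubernetes.io/part-of: storefront    # groups services in the OpenShift topology viewer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
      annotations:
        prometheus.io/scrape: "true"         # expose metrics for Prometheus scraping
        prometheus.io/port: "9080"
    spec:
      containers:
      - name: orders-service
        image: image-registry.example.com/storefront/orders-service:latest
        ports:
        - containerPort: 9080
        readinessProbe:                      # health checks let OpenShift manage and
          httpGet:                           # restart the microservice as necessary
            path: /health/ready
            port: 9080
        livenessProbe:
          httpGet:
            path: /health/live
            port: 9080
        env:
        - name: INVENTORY_URL                # service discovery via injected configuration
          value: http://inventory-service:9080
```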
All of these benefits are realized before a developer writes a single line of code. What could previously take weeks of frustration (access requests, infrastructure setup, and application configuration) is now automated. To see this workflow in action, watch the following video.
The following sections describe the workflow in more detail. For a walkthrough of creating an application based on Reference Blueprints, read the tutorials for a REST-based application and an event-driven application.
Accelerate from idea to design
The IBM Garage helps companies transform their culture and practices, and co-creates solutions through Design Thinking workshops. By learning and adapting with each engagement, it creates reference architectures that are published through the IBM Cloud Architecture Center. These reference architectures showcase the unique advantages of design patterns, allowing architects to quickly understand the value that is offered and to learn best practices for implementing within their own organizations.
In Accelerators, these architectures are made available in a graphical tool for collaborative solution development. Solution Builder allows architects to work with business analysts to design a Solution Blueprint for a cloud-native application, by using the reference architectures as a starting point or by dragging and connecting components on a canvas.
This technology preview release includes several Reference Blueprints, components, and shared services.
- An e-commerce scenario where users can search for and buy products. It showcases RESTful microservices and includes a web interface that relies on Backend for Frontend (BFF) services to interact with the back-end data.
- A simple demonstration based around ordering a coffee, to help explore Reactive microservices. Users choose whether to order through a REST HTTP service or an Apache Kafka-backed Reactive microservice.
- A scenario built around shipping refrigerated containers to demonstrate Reactive microservices and Kafka as an event backbone.
- A microservice that can be bound to other microservices or a database.
- A microservice that can read and write messages to a Kafka topic to communicate between Reactive microservices, and that can also be bound in the same way as a REST microservice.
- A database that can be bound to from a REST or Reactive microservice.
- A Kafka topic to which microservices can publish messages and from which they can consume them.
The REST and Reactive microservices use Application Stacks with Starter Templates. Application Stacks are a feature of Accelerators for Teams that include runtimes and frameworks that are optimized for containerized microservices. The stacks allow architects to standardize on technology choices for their teams, ensuring that teams use supported and compliant technology while simplifying upgrades and maintenance. Learn more about developing with Application Stacks in the IBM Knowledge Center. With Accelerators for cloud-native solutions, you can use a default set of curated stacks that includes Open Liberty, Quarkus, Spring Boot, and Node.js with Express.
Kafka can be deployed through the Strimzi operator and configured as a shared service through the Blueprint Properties panel.
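To illustrate what a Strimzi-managed shared service might look like, here is a minimal sketch of a Kafka cluster and topic as custom resources. The names, sizing, and API version are assumptions for illustration; the exact schema depends on the Strimzi release in use.

```yaml
# Illustrative sketch: a minimal Kafka cluster and topic managed by the
# Strimzi operator (names, sizing, and API version are examples only).
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: shared-kafka               # hypothetical shared-service name
spec:
  kafka:
    replicas: 3
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}              # lets topics be managed as KafkaTopic resources
---
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: orders
  labels:
    strimzi.io/cluster: shared-kafka   # associates the topic with the cluster above
spec:
  partitions: 3
  replicas: 3
```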
Accelerate from designed to provisioned
The Solution Blueprint becomes your Bill of Materials on which to accelerate provisioning in Git and deployment to OpenShift. When you click Generate in the Solution Builder, source code repositories are created and populated with scaffolded microservices and appropriate configuration to connect as designed in the Solution Blueprint.
The following image shows the Coffee Shop Reference Blueprint in Solution Builder.
The following image shows the corresponding repos that are created in GitHub as a result of clicking Generate.
A GitOps repo is created for each environment that is defined in your blueprint, which is currently a choice of development, staging, and production. Promotion through these environments is described later in this article. The structure contains all of the required configuration to deploy the various microservices and their dependent services, such as Kafka, to OpenShift. By using Kustomize overlays, operations teams are able to tailor the deployments to each environment, such as the number of replicas required for development or production.
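As a sketch of how such a Kustomize overlay works, a GitOps repo might hold a shared base plus per-environment overlays that patch only what differs. The layout, file names, and replica counts below are illustrative assumptions, not the generated repo structure.

```yaml
# Illustrative layout (paths and names are examples):
#   base/deployment.yaml              shared deployment definition
#   overlays/dev/kustomization.yaml   development-specific tailoring
#   overlays/prod/kustomization.yaml  production-specific tailoring

# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patchesStrategicMerge:
- replicas-patch.yaml

# overlays/prod/replicas-patch.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service     # must match the name in the base deployment
spec:
  replicas: 3              # production runs more replicas than development
```

Running `kustomize build overlays/prod` (or `oc apply -k overlays/prod`) then renders the base with the production patch applied.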
By adopting GitOps, you gain the following benefits:
- A single source of truth for deployments.
- Separation of concerns between development and operations.
- The ability to configure at a solution level and a per-environment level.
- Change control for deployments.
- The ability to easily roll back or to quickly and reliably reproduce your cluster.
- Controlled promotion of changes through development, staging, and production environments.
Read more about the concept in the useful Guide to GitOps provided by Weaveworks.
Accelerate from coded to built and deployed with observability
Your developers can now clone the newly created repository for their microservice, where they find the Application Stack with a Starter Template. The following image is an example of a Reactive microservice based on Open Liberty.
The scaffolded application allows your developers to focus on code rather than configuration. They can add the business logic to the microservice by using their editor of choice and take advantage of the inner-loop development model that is provided with Cloud Pak for Applications to see live updates in a running container.
Webhooks can be configured on the microservice and GitOps repos so that when a developer commits changes, the webhook triggers a Tekton pipeline that runs on OpenShift Pipelines. The pipeline performs the following tasks:
- Builds the microservice in a container.
- Pushes the resulting image to a registry.
- Raises a pull request (PR) against the GitOps repo to update the deployment artifacts with the new microservice image.
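The shape of such a triggered pipeline can be sketched as a sequence of Tekton tasks. The task and parameter names below are illustrative assumptions (for example, `open-pull-request` is a hypothetical task), not the actual pipeline shipped with the technology preview.

```yaml
# Illustrative sketch of the webhook-triggered pipeline's shape.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-promote
spec:
  workspaces:
  - name: source
  params:
  - name: image
    type: string
  tasks:
  - name: clone                    # fetch the microservice source
    taskRef:
      name: git-clone
    workspaces:
    - name: output
      workspace: source
  - name: build-and-push           # build the container image and push it to a registry
    runAfter: [clone]
    taskRef:
      name: buildah
    params:
    - name: IMAGE
      value: $(params.image)
    workspaces:
    - name: source
      workspace: source
  - name: raise-gitops-pr          # update deployment artifacts and open a PR
    runAfter: [build-and-push]     # against the GitOps repo
    taskRef:
      name: open-pull-request      # hypothetical task name
```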
Operations can then choose whether to merge the PR; if they do, the webhook on the GitOps repo triggers another pipeline to deploy the updated microservice. This process of raising PRs, with control over who can approve them, is part of how GitOps provides governance.
Microservices are deployed by using a Kubernetes Operator, built from the Runtime Component Operator, which is enabled to use the Service Binding Operator from Red Hat. This deployment approach enables binding of microservices to operator-backed services, such as databases and Kafka. Service bindings provide service discovery and dynamic configuration between microservices and services that are deployed on Kubernetes. This provides portability of microservices across OpenShift environments and removes hardcoded configuration from microservices. It solves the significant challenge of automating deployment and binding microservices to their dependent services across Kubernetes environments. Note that each operator-backed service must be enabled for the Service Binding Operator.
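To make the binding mechanism concrete, the following is a rough sketch of a service binding request that connects a microservice to an operator-backed database. The API version, kind, and all resource names here are assumptions for illustration (the exact CRD schema depends on the Service Binding Operator release in use, and `postgresql.example.org` is a hypothetical operator group).

```yaml
# Illustrative sketch only: binding a microservice to an operator-backed
# database so that connection details are injected rather than hardcoded.
apiVersion: apps.openshift.io/v1alpha1
kind: ServiceBindingRequest
metadata:
  name: orders-to-db
spec:
  applicationSelector:             # the microservice to inject configuration into
    group: app.stacks
    version: v1beta1
    resource: runtimecomponents    # deployed via the Runtime Component Operator
    resourceRef: orders-service
  backingServiceSelector:          # the operator-backed service being bound
    group: postgresql.example.org  # hypothetical database operator group
    version: v1alpha1
    kind: Database
    resourceRef: orders-db
```

The operator watches the backing service for connection information (host, credentials, and so on) and exposes it to the bound microservice, which is what provides portability across environments.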
Your Solution Blueprint contains configuration information for persistent services and integration software, which is stored in the GitOps repositories and deployed by using their Kubernetes Operators. For this technology preview, PostgreSQL is provided as a persistent service and Strimzi is provided for Kafka.
Promotion between development, staging, and production environments is controlled through GitOps. Operations can use the `services` CLI, where the `promote` command raises a PR with the necessary changes against the GitOps repository that represents the target environment. After the PRs are approved and merged, deploy pipelines are triggered in the same way as previously described.
As mentioned at the beginning of this article, these microservices can be viewed in the OpenShift topology viewer as logical applications. The built-in health checks allow them to be managed and restarted as necessary by OpenShift, and their performance can be monitored and visualized through Prometheus metrics and Grafana dashboards.
With these capabilities, your developers can deliver faster and focus on their code because their starting point is a microservice that already deploys through continuous integration and continuous delivery (CI/CD) to OpenShift. With GitOps, your operations team benefits from a single source of truth and can configure and control deployments to OpenShift.
Containers and Kubernetes are becoming increasingly pervasive technologies across multiple industries, with the container platform becoming the next cloud platform for new solutions. As part of this transition, Git workflows for both developers and operations teams will become essential, requiring customers to combine Git for source code, DevOps for automation, and GitOps for Kubernetes configuration. The Accelerators technology preview brings these approaches together into an accelerated workflow that enables you to build innovative cloud-native solutions faster, from idea to production.
A successful software delivery project requires alignment across multiple disciplines such as development, operations, security, and compliance. Accelerators allow these multi-disciplinary teams to codify and centrally manage decisions, enable a shift-left, and empower developers to deliver faster with more confidence because they can focus on coding instead of configuration.
Visit the IBM Middleware User Community to provide feedback on the technology preview content.