
Make your solution run as a service and add it to service catalogs

Application services that only run on one platform lock the application into that platform, such as a single vendor’s public cloud. To avoid vendor lock-in, application architects and developers prefer hybrid cloud services that can run on a range of platforms, including on-premises.

Traditionally, virtual servers have been used as a platform-independent environment. Now, Kubernetes clusters are the top choice for creating a platform-independent environment that provides greater support for running a reliable, scalable, secure service. A Red Hat OpenShift cluster is better still, with its ability to run a service with greater security and with vendor support. Thus, a service that runs well in both Kubernetes and OpenShift offers the greatest flexibility.

This article is for independent software vendors (ISVs) who want to make their services easy to distribute to clients on a variety of platforms. Learn how to package your service to run well on either a Kubernetes or OpenShift cluster, including how to make your service self-managing. You will also see how to add your service to catalogs such as the Red Hat Marketplace to ensure that clients developing cloud-native applications can easily find your service, install it in their environment, and use it as part of their applications.

First, why OpenShift?

OpenShift is built on Kubernetes and runs on Red Hat Enterprise Linux (RHEL) technologies with additional features that make it valuable for hosting enterprise applications. OpenShift and RHEL include vendor support from Red Hat, with more explicitly defined SLAs and security features than the community support that comes with open source projects like Linux and Kubernetes. And containers designed for OpenShift include RHEL libraries, so they run a stack that is more fully integrated with the RHEL kernel than generic Kubernetes containers are.

And what’s the Red Hat Marketplace?

Red Hat Marketplace is essentially the app store for OpenShift. It offers a one-stop shop for trying and buying services that run on OpenShift.


To accomplish these tasks, you’ll want to use a current version of OpenShift 4.x.


The following image shows how you should progress through the series of learning paths in order to accomplish your goal of turning your solution into a service on Kubernetes and adding it to service catalogs:

Image depicting the flow of content, from creating a well-designed container image, to certifying that image, adding security contexts to access protected Linux functions, to developing and certifying an operator, and, finally, to publishing your service in a service catalog

To develop an enterprise-quality service that your clients can easily and reliably use, you should complete a number of tasks:

  1. Create a well-designed container image for deploying the service.
  2. Optionally, certify the container image.
  3. If the application needs access to protected Linux functionality, design a security context that provides the application the access it needs within the cluster.
  4. Develop an operator that installs new instances of the service and manages the running instances.
  5. Optionally, certify the operator.
  6. Optionally, publish your service in one or more service catalogs. The premier catalog for OpenShift is Red Hat Marketplace, which requires that the service has an operator, and that both the service’s container image and its operator are certified by Red Hat.

Let’s explore this process in greater detail.

1. Create a well-designed container image

First, to deploy any application or service in a container orchestrator, you need a container image that works well in both Kubernetes and OpenShift so that you have the flexibility to deploy it to either container orchestrator. The image should be compact, run securely, and be easy to manage.

The learning path Design, build, and deploy universal application images shows you how to build such an image. It walks you through how to:

  • Design a high-quality image.
  • Build it to follow industry standards and avoid vendor lock-in.
  • Add features that make it easier for the cluster and the operations staff to manage.
  • Use an open source asset, the Cloud-Native Toolkit, and its build pipeline to automate building an image with all of the desired qualities.

If you want to build an image using the Cloud-Native Toolkit, check out the learning path: Build images with the Cloud-Native Toolkit.
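As a minimal sketch of these practices, a Containerfile for such an image might look like the following. The base image is a real Red Hat Universal Base Image, but the service name, labels, binary path, and user ID are illustrative assumptions:

```dockerfile
# Illustrative Containerfile; the service name, path, and UID are assumptions.
FROM registry.access.redhat.com/ubi8/ubi-minimal:latest

# Labels help catalogs and cluster tooling identify the image.
LABEL name="my-service" \
      vendor="Example, Inc." \
      version="1.0.0" \
      summary="Example service image" \
      description="A sketch of an image built to run on Kubernetes and OpenShift."

# Copy only what the service needs, keeping the image compact.
COPY ./my-service /usr/local/bin/my-service

# Run as a non-root user so the image works under restricted
# security policies in both Kubernetes and OpenShift.
USER 1001

ENTRYPOINT ["/usr/local/bin/my-service"]
```

Note that OpenShift may run the container with an arbitrary user ID, so an image should not assume file ownership by a fixed user; the learning path covers such details.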

2. Certify your container

Once your image is built, you can have Red Hat certify it. Certification assures clients who deploy the image that it is built to high-quality standards and will work well in OpenShift and Kubernetes.

Read the guide: Certify your container image with Red Hat container certification.
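Before submitting an image, you can run Red Hat's certification checks locally with its preflight tool. This is a sketch; the image reference is a placeholder:

```shell
# Run Red Hat's preflight certification checks against a candidate image
# (the image reference below is a placeholder).
preflight check container registry.example.com/myorg/my-service:1.0.0
```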

3. Add security contexts to your service’s deployment

A service often needs additional access to the Linux kernel, beyond what a cloud-native application usually requires. It may need access to files as specific users or groups, independent of the user and group the cluster uses to run the container. It may need to run restricted commands, or even run as the root user or a privileged user. The service must be able to declare these requirements in its deployment. The cluster administrator needs to be able to control which services are allowed which exceptions.

The learning path Get started with security context constraints on Red Hat OpenShift shows you how to manage an application’s access to protected parts of Linux. It shows how a container can use security contexts to define the access that it needs configured for its application, and how the cluster can use security context constraints to control the access that the containers are allowed to have. Used together, these let a service declare exactly the access it requires and let the cluster control exactly the access it grants.
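As an illustration, a deployment can declare the access its container needs in a securityContext. The service name, image, user ID, and capability below are assumptions for a hypothetical service:

```yaml
# Hypothetical Deployment snippet: the container declares the Linux
# access it needs; the cluster decides whether to allow it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: registry.example.com/myorg/my-service:1.0.0
        securityContext:
          runAsUser: 1001              # run as this specific user
          capabilities:
            add: ["NET_BIND_SERVICE"]  # e.g., bind to a port below 1024
```

On OpenShift, the cluster administrator then permits these settings by granting an appropriate security context constraint (SCC) to the deployment’s service account, for example with `oc adm policy add-scc-to-user`.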

4. Develop an operator that automates managing the service

The less the cluster administrator has to manage the service, the easier it is to use. Ideally, the service should be self-managing — that is, able to install itself, upgrade when a new version is available, autoscale, repair itself, and more. A service can manage itself if it comes packaged with an operator that performs the management tasks.

The learning path Get started using Kubernetes Operators shows you how to develop an operator for your service. It explains how operators work in general, explores a specific simple example, and takes you through a running example that shows how to implement an increasingly sophisticated operator. With this, you’ll learn how to implement your own operator that makes your service self-managing.
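For example, once an operator is installed, a client could create an instance of the service simply by applying a custom resource. The API group, kind, and spec fields here are illustrative, not part of any real operator:

```yaml
# Hypothetical custom resource: the operator watches for resources of this
# kind and creates and manages a service instance to match the declared spec.
apiVersion: example.com/v1alpha1
kind: MyService
metadata:
  name: my-service-instance
spec:
  size: 3          # the operator maintains three replicas
  version: "1.0.0" # the operator upgrades instances when this changes
```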

5. Certify your operator

Once you build your operator, you can opt to have Red Hat certify it. Certification assures clients that the operator is built to high-quality standards and works well in both OpenShift and Kubernetes.

Read the guide: Certify your operator with Red Hat OpenShift Operator Certification.

6. Publish your service to a catalog

Finally, you can publish your service packaged with its operator to one or more service catalogs. These catalogs make it easier for application architects to find your service, buy it, and add it to their application environment. The premier service catalog for OpenShift is the Red Hat Marketplace, which is essentially an app store for OpenShift. To publish a service there, it must have an operator, and the service’s image and operator must be certified by Red Hat.
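At a high level, an operator is packaged for a catalog as a bundle of manifests and metadata. A rough sketch using the operator-sdk CLI follows; the version number and paths are placeholders, and a real project would generate the manifests from its own configuration:

```shell
# Generate bundle manifests and metadata for the operator (placeholder version).
operator-sdk generate bundle --version 1.0.0

# Validate the bundle before submitting it for certification and publication.
operator-sdk bundle validate ./bundle
```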


With this process, your service will run well and securely in both Kubernetes and OpenShift, and will manage itself: installing itself, upgrading, and keeping itself running well. As a next step, you can make your solutions into services and publish them in Red Hat Marketplace and other service catalogs.