Are you deploying applications on Kubernetes clusters and looking to simplify lifecycle management for your applications or their components? Or perhaps you are looking for simpler ways to deploy third-party applications on your cluster. Operators provide a Kubernetes-native way to manage applications and components, and they can make your life a lot easier. When you use an operator to manage an application or component on Kubernetes as a resource, you get more complete control over its lifecycle. You can apply the same declarative model that you use for Kubernetes resources to entire applications or services.
This tutorial assumes a fundamental understanding of Kubernetes and access to an OpenShift or upstream Kubernetes cluster. To use the IBM Cloud Operator to create service instances from the catalog, you need an IBM Cloud account and an installation of the IBM Cloud CLI.
Allow about 30 minutes to read this tutorial and try out examples.
An introduction to operators
One of the reasons Kubernetes has been so successful as the leading container orchestration project is its extensibility. With custom resources, developers can extend the Kubernetes API to manage resources beyond native objects such as pods and services. Furthermore, the Kubernetes Go client provides a powerful library for writing controllers for custom resources. Controllers implement closed-loop control logic that runs continuously to reconcile the desired state of a resource with the observed state.
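The control loop at the heart of every controller can be sketched roughly as follows (pseudocode; the function names are illustrative, not part of any real API):

```
for {
    desired  := desiredState(customResource)  // what the resource spec declares
    observed := observedState(cluster)        // what actually exists right now
    if observed != desired {
        reconcile(desired, observed)          // create, update, or delete objects
    }
    waitForNextEventOrResync()                // triggered by changes or a periodic resync
}
```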
Operators combine application-specific controllers and related custom resources that codify domain-specific knowledge to manage the lifecycle of a resource. The first set of operators initially focused on stateful services running in Kubernetes, but in recent years the scope of operators has become broader, and there is now a growing community building operators for a wide variety of use cases. For example, OperatorHub.io provides a catalog of community operators handling many different kinds of software and services.
There are many reasons why operators can be appealing. If you are already using Kubernetes to deploy and manage applications or larger solutions, operators provide a consistent resource model to define and manage all the different components of an application. For example, if an application needs an etcd database, you just need to install the etcd operator and create an EtcdCluster custom resource. The etcd operator then takes care of deploying and managing the etcd cluster for the application, including day-2 operations such as backup and restore. Because operators rely on custom resources, which are Kubernetes API extensions, all the existing tools for Kubernetes work by default. There is no need to learn new tools or practices. You can use the same Kubernetes CLI (kubectl) to create, update, or delete pods and custom resources. Role-based access control (RBAC) and admission control operate the same way for custom resources.
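For illustration, an EtcdCluster resource looks roughly like the following (the exact apiVersion and fields depend on the etcd operator release; treat this as a sketch):

```yaml
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3            # number of etcd members to run
  version: "3.2.13"  # etcd version to deploy
```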
But what about components of an application that live outside your cluster? Operators can help here too. For example, suppose you are writing an application that requires language translation. You could use a cloud-based service such as the Watson Language Translator. To use that service, you would need to use the IBM Cloud catalog or the command-line interface and provision the service from there, then create service credentials, and copy the credentials into a Kubernetes secret that can be easily accessed by your pod. There are typically several manual steps involved in this process, but an operator can automate these steps.
With an operator, you can create an instance of the translator service the same way you create any other resource in Kubernetes. There is no need for out-of-band steps or scripts to create the external resources. You can just describe your application, including external dependencies, with a set of Kubernetes templates, and deploy the whole application and its dependencies with kubectl apply.
Furthermore, because operators work by continuously comparing the desired state with the current state and reconciling the two, an operator can provide self-healing features and ensure that a service is restarted, or created again, if it becomes unhealthy or is accidentally deleted.
The IBM Cloud catalog
This example of the Watson Language Translator is not a unique use case. It’s very common for applications that are infused with AI capabilities, whether adding capabilities around language processing, image classification and tagging, or conversational dialog, to use Watson services from the IBM Cloud catalog. Services from the catalog span a broad range of capabilities, from AI and machine learning, data storage and analytics, and integration and messaging, to weather and the Internet of Things.
To use one of these services, you need to create a service instance by provisioning it from the catalog. The provisioning step is fundamentally an API call to IBM Cloud that requests an instance of a particular service type combined with the desired plan. You can select between free and paid versions of the service.
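For reference, the manual equivalent of this provisioning call with the IBM Cloud CLI looks like the following (the instance name, service, plan, and region shown are examples; check the catalog for valid values for your account):

```shell
# Provision a catalog service instance: <name> <service> <plan> <region>
ibmcloud resource service-instance-create my-translator language-translator lite us-south
```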
For an application running in Kubernetes, what is the benefit of using a component from the IBM Cloud catalog? For services like Language Translator or the Weather Company APIs, these capabilities are available today only as managed services.
For an open-source database like PostgreSQL, however, you could technically add a pod within the cluster for the database. But to do that, some sort of persistent storage needs to be bound to the pod, and the storage needs high-availability support and a plan for backups. And then there is the issue of security and compliance. By choosing to self-manage the database, you have to complete the necessary steps to fully document the appropriate security compliance for the data stored in PostgreSQL. On the other hand, by using IBM Cloud Databases for PostgreSQL, persistent storage, high availability, automated backups, and security compliance for HIPAA readiness and SOC 1 Type 1 and SOC 2 Type 1 are all handled by the managed service.
Although deploying open-source databases as pods can reduce costs in development clusters, IBM Cloud catalog managed services are a choice that pays off for production deployments.
Configuring the Operator Lifecycle Manager
As mentioned earlier, OperatorHub.io provides a catalog of community operators. But how do operators get installed from the operator catalog to your cluster? And how is the operator’s lifecycle managed? The answer is simple: with another operator. The Operator Lifecycle Manager (OLM) introduces custom resources that define all the elements required for describing and managing the lifecycle of operators.
ClusterServiceVersion is a core resource that encapsulates all the metadata, including the description, version, authors, capabilities, dependencies, and all the information required to install and update the operator. OLM uses a local catalog of available operators, which is synchronized with OperatorHub.io by yet another operator, the Marketplace Operator. The combination of OLM and the Marketplace Operator provides the capability to install operators and to automatically receive and apply updates for each installed operator through over-the-air (OTA) updates. You can install OLM and the Marketplace Operator in any upstream Kubernetes distribution, thus providing easy access to the growing collection of community operators in OperatorHub.io. You can skip the steps in this section if you are using OpenShift V4 because OLM and the Marketplace Operator are running and configured by default.
Step 1. Install the latest release of OLM in your cluster
Run the following commands:
kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.10.0/crds.yaml
kubectl apply -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/0.10.0/olm.yaml
Step 2. Install the Marketplace Operator
Clone the project:
git clone https://github.com/operator-framework/operator-marketplace.git
Then run the following command:
kubectl apply -f operator-marketplace/deploy/upstream/
Step 3. Configure the Marketplace Operator namespace
Apply the following OperatorGroup resource to the marketplace namespace. This step enables installing operators from the marketplace catalog:

kubectl apply -f - <<END
apiVersion: operators.coreos.com/v1alpha2
kind: OperatorGroup
metadata:
  name: marketplace-operators
  namespace: marketplace
END
Step 4. (optional) Install an operator
Armed with both the OLM and Marketplace operators, you can now easily install any operator from the OperatorHub.io catalog. For example, you can install the etcd operator with the following command:
kubectl create -f https://operatorhub.io/install/etcd.yaml
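After a short wait, you can verify that the operator installed successfully by checking its ClusterServiceVersion (the target namespace may vary depending on your OLM configuration):

```shell
# The CSV PHASE column should eventually show "Succeeded"
kubectl get csv -n operators
```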
Configuring the IBM Cloud Operator
The IBM Cloud Operator provides a Kubernetes-native approach to provision and configure IBM Cloud services as part of a Kubernetes application. The operator provides two custom resources: Service and Binding. Service creates an instance of any service from the IBM Cloud catalog. Binding automates creating credentials for services and the corresponding secrets in Kubernetes. Before installing the operator from the OperatorHub.io catalog, you need to complete a few steps. Because you are provisioning IBM Cloud services, you need an account on IBM Cloud and the IBM Cloud CLI.
Step 1. Log in to IBM Cloud
Using the IBM Cloud CLI, log in to your IBM Cloud account:

ibmcloud login
Select a target environment for Cloud Foundry resources with the following command:
ibmcloud target --cf
Check whether your default resource group is set with the following command:

ibmcloud target
If the default resource group is not set, or if you need to use a different resource group, you can set it with the following command:
ibmcloud target -g <resource group name or ID>
Step 2. Configure an API key for the operator
Use the following script to generate a default configuration and secret with your IBM Cloud API key for the operator:
curl -sL https://raw.githubusercontent.com/IBM/cloud-operators/master/hack/config-operator.sh | bash
Step 3. Install the operator from the catalog with OLM
The catalog provides a URL for the resources to install for each operator. Install the IBM Cloud Operator with the following command:
kubectl create -f https://operatorhub.io/install/ibmcloud-operator.yaml
Step 4. Create an instance of a public cloud service
After the operator is installed, you can create an instance of a public cloud service (on IBM Cloud) using the following custom resource:
apiVersion: ibmcloud.ibm.com/v1alpha1
kind: Service
metadata:
  name: myservice
spec:
  plan: <PLAN>
  serviceClass: <SERVICE_CLASS>
To find the value for <SERVICE_CLASS>, you can list the names of all IBM Cloud services with the following command:
ibmcloud catalog service-marketplace
After you find the <SERVICE_CLASS> name, you can list the available plans to select a <PLAN> with the following command:
ibmcloud catalog service <SERVICE_CLASS> | grep plan
For example, to create an instance of the Watson Language Translator service, you can use the following custom resource (save this resource definition to a file named mytranslator-service.yaml):
apiVersion: ibmcloud.ibm.com/v1alpha1
kind: Service
metadata:
  name: mytranslator
spec:
  plan: lite
  serviceClass: language-translator
Also, you can create credentials and a Kubernetes secret for the service with the following binding resource (save this resource definition to a file named mytranslator-binding.yaml):
apiVersion: ibmcloud.ibm.com/v1alpha1
kind: Binding
metadata:
  name: binding-translator
spec:
  serviceName: mytranslator
  secretName: translator-secret
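Once the binding creates the translator-secret secret, a pod can consume the credentials directly, for example by injecting them as environment variables (the container image name below is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: translator-client
spec:
  containers:
  - name: app
    image: my-registry/translator-app:latest  # hypothetical application image
    envFrom:
    - secretRef:
        name: translator-secret  # secret created by the Binding resource
```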
Step 5. Create the service instance and binding
If you followed the suggested names for the service and binding resource files, use the following commands:
kubectl create -f mytranslator-service.yaml
kubectl create -f mytranslator-binding.yaml
That’s it! When these resources are added, the service instance is created and the credentials for the service are bound as a Kubernetes secret in the current namespace. Even better, because the IBM Cloud Operator automatically uses the defaults for the specific context of your cloud account (such as the resource group and region), your set of templates is portable and easily deployed in different contexts. You can feed the templates to a DevOps pipeline or share them with other developers in your organization, who can bring up the entire application and its dependencies with kubectl apply.
In this tutorial, you learned about operators, just scratching the surface of the capabilities of the IBM Cloud Operator. To learn about more advanced features such as self-healing and the ability to link to existing services, see the IBM Cloud Operator project documentation.