Create Kubernetes custom resource definitions

This tutorial explains Kubernetes CustomResourceDefinitions (also known as custom resource definitions or CRDs), why they are important, and how to create one with a basic example. You can also see examples of how open source projects like Knative construct their resources by leveraging CustomResourceDefinitions.

Prerequisites

Before you walk through this tutorial, install the following tools:

  • Golang, the language Knative is built in (1.13 or later).
  • kubectl, for managing development environments.
  • ko, for building, publishing, and running your images in development.
  • dep, for managing external Go dependencies.
  • (optional) IntelliJ IDEA, an integrated development environment that you can use to view and develop your source code.

In addition, you need to set up a Kubernetes cluster as your environment to run and test the custom resource definition. You can use a Kubernetes service from any major cloud provider. For example, if you use the IBM Cloud Kubernetes Service, set up your connection based on the guidance on the website. If you choose to use a local Kubernetes cluster on your own machine, depending on your operating system, you can select Minikube or Docker Desktop.

Estimated time

Based on your familiarity with Kubernetes, it might take 5 to 10 minutes for you to successfully create and test your Kubernetes custom resource definition.

Steps

You have probably heard of cloud-native or Kubernetes-native applications. With them, we can say goodbye to the headache of the integration and portability work that we used to do to bring our applications to a new cloud platform. The Kubernetes CustomResourceDefinition (CRD) is an extension point that you can use to implement your own application the Kubernetes-native way, and you can trust it.

The following steps show a basic example, walking you through a procedure to create the CRD. The example in this tutorial is an app that shows how the Solar System was created. The Knative common package is the base library to build this application, with some Knative built-in scripts to generate the source code if necessary.

All the source code can be found in the houshengbo/solar-system GitHub repo.

Initialize your project.

Download the following project. Make sure you have configured your $GOPATH, and create a directory locally to save the project. Run the following commands:

mkdir -p $GOPATH/src/my.dev
cd $GOPATH/src/my.dev

Download the source code with Git:

git clone git@github.com:houshengbo/solar-system.git
cd solar-system

A directory called solar-system is saved at $GOPATH/src/my.dev/solar-system. This project is built with the same structure as every Knative project. It consists of the following directories and files:

  • config/: This directory saves all the YAML files, including the definitions of custom resource definitions, namespaces, cluster roles, service accounts, cluster role bindings, deployments, and so on.
  • hack/: This directory hosts the scripts used to generate the dependencies and the source code of deep copy, client, informer, and lister, based on your resource type.
  • vendor/: This directory saves all the dependencies this project relies on.
  • Gopkg.toml: This file is initially generated by dep init and defines the packages this project depends on.
  • cmd/: This directory saves the main functions that launch the application, and the service this application uses.
  • pkg/: This directory saves all the source code.

This tutorial does not explain each file or directory. Instead, it focuses on the custom resource definition and the custom controller.

Define the CRD.

The CustomResourceDefinition is defined in the config/300-star-crd.yaml file. As you know, the sun, a G-type star and the source of heat and light, is located at the center of our solar system, which in turn belongs to the galaxy as its parent system. If you want to create the sun, you first need to define what a star is. The name star works well as both the CRD name and the kind name. Create it with the following content:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: stars.example.crd.com
spec:
  group: example.crd.com
  scope: Namespaced
  names:
    kind: Star
    listKind: StarList
    plural: stars
    singular: star
  subresources:
    status: {}
  validation:
    openAPIV3Schema:
      required: ["spec"]
      properties:
        spec:
          required: ["type","location"]
          properties:
            type:
              type: "string"
              minLength: 1
            location:
              type: "string"
              minLength: 1
  versions:
    - name: v1alpha1
      served: true
      storage: true

Within this file, you create a custom resource definition for the kind Star. Defining all the properties that describe a real star would be more complicated, but this beginner tutorial keeps the schema simple. Now it is time to go through all the available properties for your CRD:

The apiVersion key specifies which version of the Kubernetes API you’re using to create this object. To create a new CRD, we use “apiextensions.k8s.io/v1beta1” as the value.

The kind key specifies what kind of object you want to create. As we are about to create a CRD, we put “CustomResourceDefinition” as the value.

The metadata key defines the data that uniquely identifies the object. In the example in this tutorial, you define a name to identify the object, which is the combination of spec.names.plural and spec.group. These two keys are explained below.

The spec key is used to define the desired state of this object. The following keys are available under spec:

  • group: This key specifies the name of the API group for this object.
  • scope: This key determines the scope in which this object functions. There are two scopes you can define: Namespaced and Cluster. If you want to manage all your resources under a certain namespace, so that all of them are removed when you delete the namespace, choose Namespaced. If you want your resource to live at the cluster scope, which means its instances are not tied to any namespace, choose Cluster.
  • names: This section defines all the forms of the names for this object. The singular key sets the singular name in lowercase. The plural key sets the plural form in lowercase. The kind key sets the new kind name, in CamelCase, for this object in the cluster. The listKind key sets the kind name used for lists of this object (here, StarList).
  • subresources: This key describes the subresources for custom resources. Custom resources support two subresources: status and scale. We enable the status subresource in our example, so that the status of this custom resource can be accessed separately without changing the rest of the custom resource. The status is assigned an empty object, which means it declares no properties of its own; to comply with the YAML format, we have to give it this empty value.
  • validation: This key defines the mandatory properties of the custom resource and the rules each property must comply with. In our example, we require every custom resource to have a spec section, with type and location both as string fields under that spec section.
  • versions: This key defines the available versions of this object. The section consists of a list of name, served, and storage entries, and multiple versions can be served at the same time. The name key specifies the name of the version. The served key specifies whether this version is enabled in the cluster. The storage key specifies whether this version is the one persisted in the cluster, since the cluster can persist only one version.

The API group you specify for this CRD is example.crd.com, which means you can issue the get, list, create, update, and delete commands to access this custom resource under the API group example.crd.com.
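
For example, after you register the CRD (a later step in this tutorial), you can reach the new resource through the usual kubectl verbs, or query its REST path under that group directly. The namespace used below is the one this tutorial creates later:

kubectl get stars.example.crd.com --all-namespaces
kubectl get --raw /apis/example.crd.com/v1alpha1/namespaces/solar-examples/stars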

To register this CRD with the Kubernetes cluster, run the following command:

kubectl apply -f config/300-star-crd.yaml

It is possible to define your CRD within your source code (for example, in Golang), but it is better to define it in an external file, because then you do not need to grant your code the permission to create or delete CRDs.

To check the CRD you just created, run the following command:

kubectl describe crd stars.example.crd.com

After the CRD is created, you see output like the following example:

Example CRD output

Create the custom resource with a YAML file

After the CRD is created, it is time to create our custom resource. Since we define the scope of our CRD as Namespaced, we need to create a namespace by running the following command:

kubectl apply -f config/100-namespace.yaml

A namespace called solar-examples is created.

Then run the following command to create the custom resource:

kubectl apply -f config/crs/sun-cr.yaml

Review the following content of the YAML file for the custom resource:

apiVersion: "example.crd.com/v1alpha1"
kind: "Star"
metadata:
  name: "sun"
  namespace: solar-examples
spec:
  type: "G"
  location: "Galaxy"

As you can see, the fields type and location are mandatory, and Kubernetes validates them as part of the creation process.
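
To see the validation in action, you can apply a Star that omits one of the required fields and watch the API server reject it. The resource below is a hypothetical example, and the exact error text varies by Kubernetes version:

apiVersion: "example.crd.com/v1alpha1"
kind: "Star"
metadata:
  name: "proxima"
  namespace: solar-examples
spec:
  type: "M"

Applying this file fails with a validation error pointing at the missing spec.location field.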

To check the created custom resource, run the following command:

kubectl get star sun -n solar-examples -o yaml

The output is similar to the following example:

Example created custom resource output

Understand the type defined in the application

The CRD and the custom resource can be brought into the Kubernetes cluster, but to bring them into our application, we need to define Golang types for the custom resource that are based on the CRD. For example, you can access the file at pkg/apis/solar/v1alpha1/star_types.go. The structure of the Golang type matches the one we defined in the CRD. Besides that, we add some tags, which are used to generate other source code necessary for this project.
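
The file itself is not reproduced in full here. A minimal sketch of what the type looks like, derived from the CRD schema above, follows; the shape of StarStatus is an assumption, modeled as a Knative duck-typed status because the reconciler later calls condition helpers such as InitializeConditions:

package v1alpha1

import (
   metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
   duckv1 "knative.dev/pkg/apis/duck/v1"
)

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// Star mirrors the stars.example.crd.com CRD.
type Star struct {
   metav1.TypeMeta   `json:",inline"`
   metav1.ObjectMeta `json:"metadata,omitempty"`

   // Spec carries the two required fields from the CRD validation.
   Spec   StarSpec   `json:"spec"`
   Status StarStatus `json:"status,omitempty"`
}

// StarSpec matches the openAPIV3Schema in 300-star-crd.yaml.
type StarSpec struct {
   Type     string `json:"type"`
   Location string `json:"location"`
}

// StarStatus embeds a Knative duck Status (an assumption), which
// provides condition helpers such as InitializeConditions.
type StarStatus struct {
   duckv1.Status `json:",inline"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// StarList is the kind used when listing Star objects.
type StarList struct {
   metav1.TypeMeta `json:",inline"`
   metav1.ListMeta `json:"metadata,omitempty"`
   Items           []Star `json:"items"`
}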

All the tags are added as comments:

// +genclient

This tag tells the library code-generator to create a client for this type. We use this tag to generate the client code for our custom resource. The code-generator library is declared as a required dependency in Gopkg.toml and is available under the vendor directory. The update-codegen.sh script under the hack directory uses this library to generate the source code for you:

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

The custom resource you defined is a top-level type that needs to be usable as a runtime.Object, so it has to implement the function DeepCopyObject(). By adding this tag, deepcopy-gen generates this function for you. You can find this tag above both the Star and StarList types, which means both are treated as runtime.Objects with an implementation of DeepCopyObject().
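
For reference, the generated implementation in zz_generated.deepcopy.go follows the standard pattern that deepcopy-gen produces (runtime here is k8s.io/apimachinery/pkg/runtime):

// DeepCopyObject is an autogenerated deepcopy function, copying the
// receiver and creating a new runtime.Object.
func (in *Star) DeepCopyObject() runtime.Object {
   if c := in.DeepCopy(); c != nil {
      return c
   }
   return nil
}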

Don’t overlook the pkg/apis/solar/v1alpha1/doc.go file, where you add the tag specifying the group name, or the pkg/apis/solar/v1alpha1/register.go file, where the type you defined is added into the scheme of the application.
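
The registration in register.go follows the standard Kubernetes pattern; a minimal sketch, using the conventional variable names, looks like this:

package v1alpha1

import (
   metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
   "k8s.io/apimachinery/pkg/runtime"
   "k8s.io/apimachinery/pkg/runtime/schema"
)

// SchemeGroupVersion matches the group and version defined in the CRD.
var SchemeGroupVersion = schema.GroupVersion{Group: "example.crd.com", Version: "v1alpha1"}

var (
   // SchemeBuilder collects the functions that register the types.
   SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
   // AddToScheme adds Star and StarList to a scheme.
   AddToScheme = SchemeBuilder.AddToScheme
)

// addKnownTypes registers the types with the scheme.
func addKnownTypes(scheme *runtime.Scheme) error {
   scheme.AddKnownTypes(SchemeGroupVersion,
      &Star{},
      &StarList{},
   )
   metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
   return nil
}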

To generate the source code that you need for this project, run the following script under the hack directory:

./hack/update-codegen.sh

All the source code for the client is saved under pkg/client. The generated deepcopy code is saved in pkg/apis/solar/v1alpha1/zz_generated.deepcopy.go.

However, if you followed the previous steps to download this project, you do not need to generate it; all the source code is already included.

Understand the custom controller

A custom controller needs to be defined to manage each instance of the custom resource, so that our application can run its logic whenever the custom resource changes.

The business logic is implemented in the reconciliation loop. Here is what this example implements: when the sun is created as a custom resource, you create a deployment for the energy source and a NodePort service, so that it can be accessed from outside the Kubernetes cluster where you deploy the application.

First, see how to create the custom controller. Review the following content of the pkg/reconciler/solar/controller.go file:

Example controller.go file
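
The sketch below reconstructs the essential shape of this controller constructor, following the Knative sample-controller pattern. The informer and reconciler package aliases (starinformer, deploymentinformer, starreconciler) and their import paths are assumptions, and the exact helper names vary between versions of knative.dev/pkg:

package solar

import (
   "context"

   "k8s.io/client-go/tools/cache"
   "knative.dev/pkg/configmap"
   "knative.dev/pkg/controller"

   v1alpha1 "my.dev/solar-system/pkg/apis/solar/v1alpha1" // assumed path
)

// NewController wires the informers to the Reconciler that runs the
// reconciliation loop.
func NewController(ctx context.Context, cmw configmap.Watcher) *controller.Impl {
   // Injected informers for the Star resource and for deployments
   // (the package aliases are assumptions).
   starInformer := starinformer.Get(ctx)
   deploymentInformer := deploymentinformer.Get(ctx)

   r := &Reconciler{ /* clients and listers are injected here */ }
   impl := starreconciler.NewImpl(ctx, r)

   // Handle every creation, deletion, and update event of the Star resource.
   starInformer.Informer().AddEventHandler(controller.HandleAll(impl.Enqueue))

   // Handle events of the deployments owned by a Star: the FilterFunc keeps
   // only objects whose controller reference matches the Group, Version,
   // and Kind of the Star resource.
   deploymentInformer.Informer().AddEventHandler(cache.FilteringResourceEventHandler{
      FilterFunc: controller.FilterGroupVersionKind(v1alpha1.SchemeGroupVersion.WithKind("Star")),
      Handler:    controller.HandleAll(impl.EnqueueControllerOf),
   })

   return impl
}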

As you can see, this file registers the controller to monitor changes to the custom resource (Star) and to any deployments it owns, since the star causes deployments to be created. In our example, we encapsulate the energy source as a deployment to be monitored along with the star. The first event handler makes sure that the controller handles all the events generated by the Star resource, including creation, deletion, and update. The second event handler makes sure that the controller handles all the events generated by the deployments owned by the Star resource. The FilterFunc key specifies the condition used to filter the deployments: here, a deployment must carry a reference to the combination of Group, Version, and Kind of the Star resource. The crucial element of the custom controller is the reconciliation loop, which is wired into the Reconciler; the Reconciler, with its reconciliation loop, is defined in the file pkg/reconciler/solar/star.go. When we create the custom resource for the sun in the solar system, the source of energy is created accordingly.

The reconciliation loop calls the function ReconcileKind, shown below:

// ReconcileKind implements Interface.ReconcileKind.
func (r *Reconciler) ReconcileKind(ctx context.Context, o *samplesv1alpha1.Star) reconciler.Event {
   if o.GetDeletionTimestamp() != nil {
      logger := logging.FromContext(ctx)
      logger.Info("The sun is removed with the source of energy.")
      // Check for a DeletionTimestamp.  If present, elide the normal reconcile logic.
      // When a controller needs finalizer handling, it would go here.
      return nil
   }
   o.Status.InitializeConditions()
   if err := r.reconcileDeployment(ctx, o); err != nil {
      return err
   }
   return nil
}

The function reconcileDeployment is called to create and check the deployment of the Star. The deployment creation is implemented as follows:

func (r *Reconciler) reconcileDeployment(ctx context.Context, star *samplesv1alpha1.Star) error {
   ns := star.Namespace
   deploymentName := "energy-source"
   logger := logging.FromContext(ctx).With(zap.String(logkey.Deployment, deploymentName))
   deployment, err := r.deploymentLister.Deployments(ns).Get(deploymentName)
   if apierrs.IsNotFound(err) {
      // Deployment does not exist. Create it.
      star.Status.MarkDeploymentUnavailable(deploymentName)
      dep := r.newDeployment(star, deploymentName)
      deployment, err = r.createDeployment(ctx, dep)
      if err != nil {
         return fmt.Errorf("failed to create deployment %q: %w", deploymentName, err)
      }
      logger.Infof("Created deployment %q", deploymentName)
   } else if err != nil {
      return fmt.Errorf("failed to get deployment %q: %w", deploymentName, err)
   } else if !metav1.IsControlledBy(deployment, star) {
      // Surface an error in the star's status, and return an error.
      star.Status.MarkDeploymentUnavailable(deploymentName)
      return fmt.Errorf("revision: %q does not own Deployment: %q", star.Name, deploymentName)
   } else {
      // The deployment exists, but make sure that it has the shape that we expect.
      deployment, err = r.checkDeployment(ctx, star, deployment)
      if err != nil {
         return fmt.Errorf("failed to update deployment %q: %w", deploymentName, err)
      }
      if _, err = r.createService(ctx, star, r.newService(star, deploymentName)); err != nil {
         return fmt.Errorf("failed to launch the service for the deployment %q: %w", deploymentName, err)
      }
   }
   logger.Info("The sun is ready with the source of energy.")
   return nil
}
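
The helper newDeployment is not shown in this article. A minimal sketch of what it plausibly builds follows, using the standard k8s.io/api apps/v1 and core/v1 types plus kmeta from knative.dev/pkg; the labels and the container image are placeholders, and the essential detail is the OwnerReference, which is what metav1.IsControlledBy checks above (this assumes Star implements kmeta.OwnerRefable):

func (r *Reconciler) newDeployment(star *samplesv1alpha1.Star, name string) *appsv1.Deployment {
   labels := map[string]string{"app": name} // placeholder labels
   return &appsv1.Deployment{
      ObjectMeta: metav1.ObjectMeta{
         Name:      name,
         Namespace: star.Namespace,
         // Make the Star the controlling owner, so that deleting the
         // Star garbage-collects the deployment.
         OwnerReferences: []metav1.OwnerReference{*kmeta.NewControllerRef(star)},
      },
      Spec: appsv1.DeploymentSpec{
         Selector: &metav1.LabelSelector{MatchLabels: labels},
         Template: corev1.PodTemplateSpec{
            ObjectMeta: metav1.ObjectMeta{Labels: labels},
            Spec: corev1.PodSpec{
               Containers: []corev1.Container{{
                  Name:  name,
                  Image: "docker.io/example/energy-source", // placeholder image
               }},
            },
         },
      },
   }
}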

At the end of this function, we create a service for this deployment, so that we can access this service from outside of the cluster. Here is how we implement the service creation:

func (r *Reconciler) createService(ctx context.Context, star *samplesv1alpha1.Star,
   service *corev1.Service) (*corev1.Service, error) {
   ser, err := r.KubeClientSet.CoreV1().Services(service.GetNamespace()).Get(service.GetName(), metav1.GetOptions{})
   if err != nil {
      if apierrs.IsNotFound(err) {
         return r.KubeClientSet.CoreV1().Services(service.GetNamespace()).Create(service)
      }
      star.Status.MarkDeploymentUnavailable(ser.Name)
      return ser, err
   }
   star.Status.MarkStarReady()
   return ser, nil
}
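
The companion helper newService is not shown either. Assuming it produces the NodePort service described earlier, it might look like the following sketch; the port numbers are placeholders, and intstr comes from k8s.io/apimachinery/pkg/util/intstr:

func (r *Reconciler) newService(star *samplesv1alpha1.Star, name string) *corev1.Service {
   return &corev1.Service{
      ObjectMeta: metav1.ObjectMeta{
         Name:      name,
         Namespace: star.Namespace,
         // The Star owns the service as well.
         OwnerReferences: []metav1.OwnerReference{*kmeta.NewControllerRef(star)},
      },
      Spec: corev1.ServiceSpec{
         // NodePort exposes the service on a port of every cluster node.
         Type:     corev1.ServiceTypeNodePort,
         Selector: map[string]string{"app": name},
         Ports: []corev1.ServicePort{{
            Port:       80,                   // placeholder service port
            TargetPort: intstr.FromInt(8080), // placeholder container port
         }},
      },
   }
}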

At each step, the reconciler verifies whether the resource is correctly created with the correct status. If not, it returns an error and marks the custom resource with the deployment unavailable. If all the resources reach the expected status, the custom resource is marked with the READY status.

Run the application

Remove the custom resource for a clean start with the command:

kubectl delete Star sun -n solar-examples

Install the application with the following command:

ko apply -f config/

To check the status of your application, run the following command:

kubectl get pod -n solar-examples

You see a pod named controller-xxx-xxx in running status.

To check the log of this pod, run the following command:

kubectl -n solar-examples logs -f $(kubectl -n solar-examples get pods -l app=controller -o name)

You see a log message like the following example:

Example log message

The -f flag makes this command follow the log, so open another terminal to create the custom resource while you watch the log here.

In another terminal, go to the home directory of the project, and run the following command to install the custom resource:

kubectl apply -f config/crs/sun-cr.yaml

If everything goes well, the log window shows the following content or similar:

Example log message

Check the pod again with the following command:

kubectl get pod -n solar-examples

You find that one more pod, named energy-source-xxx-xxx, is also running. That means you created the deployment as expected.

Check whether you created the service so you can access it.

Run the following command:

kubectl get service -n solar-examples

You should see a service running like the following example:

Example service running

If you run your application locally, 127.0.0.1 is the IP used for this service, and the EXTERNAL-IP column is left empty as shown in the example. However, if you run this application on a public Kubernetes service, the EXTERNAL-IP column shows the IP you can use. This tutorial uses <cluster-ip> to indicate this IP.

The PORT(S) column tells you the port number you can use. Since you want to access the service named energy-source from outside the Kubernetes cluster, select the port number after the colon: the NodePort. In the example here, it is 32520, but this number might vary when you run this application. This tutorial uses <port> to indicate it.

Run the following command to access the service:

curl http://<cluster-ip>:<port>

You get a message like the following example:

Example message

Summary

In the previous example, you can see that the reconciliation loop creates a deployment for the energy source and exposes it as a service. The CustomResourceDefinition defines what the custom resource looks like. When you create the custom resource, the custom controller calls the reconciliation loop, where you create any other resources you need and update the status of the custom resource based on the current status of all the resources.

If you want a more challenging read of a Kubernetes application, take a tour of the Knative project. Knative is an open source, Kubernetes-based platform to deploy and manage modern serverless workloads; it extends the existing Kubernetes APIs with Kubernetes CRDs. There are four major custom resources in Knative Serving: Service, Configuration, Route, and Revision. You can access the config directory to see all the defined CRDs for Knative Serving. All the custom resources are registered with the Kubernetes API. You can find the schemas of all the custom resources in the serving/pkg/apis/ directory, and all the client libraries in the knative/serving/pkg/client directory. Knative uses the same structure to manage its CRDs. If you go through all the steps in this tutorial, you can read the source code of the Knative project.