IBM Developer Blog


Explore Tekton Pipelines and learn why a Kubernetes-native CI/CD server is best

Tekton is a powerful, yet flexible, Kubernetes-native open source framework for creating continuous integration and continuous delivery (CI/CD) systems. With Tekton, developers can build, test, and deploy across multiple cloud providers or on-premises systems by abstracting away the underlying implementation details.

A brief history

Originating from the Knative Build project, Tekton was separated into an independent open source project, under the Continuous Delivery Foundation, due to interest and popularity. Since then, IBM, Red Hat, and several other companies and developers have contributed to this project. For an in-depth overview of Tekton and its history, check out my accompanying video Cloud-native CI/CD with Tekton, Part 1: Tekton basics.

Tekton repositories

Tekton’s GitHub organization is split into several repositories, each covering a sub-section of the project. The main repos are:

  • Pipeline
  • CLI
  • Dashboard
  • Triggers
  • Operator

This blog focuses on Tekton Pipelines, the backbone of the Tekton project, and dives into the differences between a traditional CI/CD server and a Kubernetes-native server.

Tekton Pipelines

The Tekton Pipelines project provides Kubernetes-style resources for declaring CI/CD-style pipelines. Tekton Pipelines run on Kubernetes (any Kubernetes), have Kubernetes clusters as a first-class type, and use containers as their building blocks.

Tekton Pipelines are decoupled, meaning one pipeline can be used to deploy to any Kubernetes cluster. The tasks that make up a pipeline can easily be run in isolation, and resources such as Git repos can effortlessly be swapped between runs.

The concept of resource types in Tekton Pipelines means that implementations can easily be swapped out; for an image resource, for example, you have the option to build with kaniko or with BuildKit.
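To make the idea of swappable resources concrete, here is a sketch of two PipelineResource definitions, one for a Git repo and one for a container image. The names and URLs are hypothetical placeholders; swapping a resource between runs amounts to pointing a run at a different PipelineResource.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-app-source            # hypothetical name
spec:
  type: git
  params:
    - name: url
      value: https://github.com/example/my-app   # placeholder repo; swap per run
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: my-app-image             # hypothetical name
spec:
  type: image
  params:
    - name: url
      value: registry.example.com/my-app:latest  # placeholder registry
```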

Building blocks

Tekton Pipelines uses CustomResourceDefinitions (CRDs) to create custom resources. Out of the box, Kubernetes comes with resources like pods, deployments, and services. CRDs extend Kubernetes by letting you define your own resources, and the Tekton controller acts on these custom resources. Below are the main building blocks that Tekton defines:

  • Step: A step is not a new CRD; it reuses the existing Kubernetes container spec. In a step, you can specify information such as environment variables or volume mounts.
  • Task: The first new CRD is a task. A task is a sequence of steps that run in the order declared, in a single pod, and therefore on the same node.
  • Pipeline: A pipeline is made up of tasks that can run sequentially, concurrently, or as a graph. Tasks within a pipeline can run on different nodes, and you can link the output of one task to the input of another.
  • TaskRun, PipelineRun, and PipelineResource: A TaskRun and a PipelineRun are runtime instances of a task and a pipeline, respectively; they fetch their runtime configuration from PipelineResources.
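
The building blocks above can be sketched as a minimal set of resources. This is a simplified example under assumed names (`echo-hello`, `hello-pipeline`, and so on are hypothetical), showing a task with one step, a pipeline referencing that task, and a PipelineRun that instantiates the pipeline:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-hello               # hypothetical task name
spec:
  steps:
    - name: say-hello            # each step is a Kubernetes container spec
      image: alpine
      script: |
        echo "Hello from Tekton"
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline           # hypothetical pipeline name
spec:
  tasks:
    - name: first
      taskRef:
        name: echo-hello         # reference the task defined above
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: hello-pipeline-run-1     # a run is one instance of the pipeline
spec:
  pipelineRef:
    name: hello-pipeline
```

Applying these with `kubectl apply -f` creates the resources; the Tekton controller then schedules the task's steps as containers in a pod.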

The image below illustrates how the above pieces fit together.

Tekton building blocks

A pipeline runs with the privileges of the specified service account. The ServiceAccount is patched with secrets containing the necessary credentials, and each task's pod executes a built-in Tekton credential-initialization step as its first step to make those credentials available. Tekton currently supports two Kubernetes secret types: basic-auth and ssh-auth.
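As a sketch of how this wiring looks, here is a basic-auth secret annotated for Tekton and a ServiceAccount that references it. The names and credential values are hypothetical placeholders; the `tekton.dev/git-0` annotation tells Tekton which host the credential applies to.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: git-credentials              # hypothetical name
  annotations:
    tekton.dev/git-0: https://github.com   # host these credentials apply to
type: kubernetes.io/basic-auth
stringData:
  username: my-user                  # placeholder values
  password: my-token
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot                    # hypothetical service account
secrets:
  - name: git-credentials            # attach the secret to the service account
```

A TaskRun or PipelineRun would then set `serviceAccountName: build-bot` in its spec to run with these credentials.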

Great! Now that you have a better understanding of Tekton Pipelines, you’re ready to learn about the benefits of a Kubernetes-native CI/CD server.

Kubernetes-native CI/CD servers versus traditional servers

As a developer, you may already be aware of just how many CI/CD tools are available. Of course, it’s great to have options. However, too many choices can lead to confusion and fragmentation. Just like developers, enterprise customers are having challenges making their own tooling decisions. To make decision-making a little easier, Tekton was born to provide a set of commonly agreed upon Kubernetes-native building blocks for CI/CD systems.

Traditional CI/CD servers work fine, but Kubernetes-native CI/CD servers come with a few more advantages. The table below summarizes the challenges around a traditional CI/CD server compared to a Kubernetes-native CI/CD server:

| Traditional CI/CD server | Kubernetes-native CI/CD server |
| --- | --- |
| Logging and monitoring rely on an external agent | Centralized logging and monitoring |
| High availability is not possible or is too complex to achieve | The platform guarantees high availability |
| Self-healing is not possible (retry logic for some tools) | Self-healing comes by default (resources are pods) |
| Most common CI/CD servers are from a pre-container era | Developed for containers and Kubernetes, and runs as containers on Kubernetes |

Now, let’s discuss the above points in depth. For ease of reference, I’ll use Jenkins as a traditional CI/CD server example and OpenShift Pipelines as the Kubernetes-native example (which uses Tekton under the hood).

  1. Logging/monitoring: To view your Jenkins logs, you need to access a Jenkins server. Although an integration can be built to fetch these logs, this adds overhead. It also requires a “Jenkins expert” on your team who can understand these logs and make the appropriate decisions.

    For OpenShift Pipelines, the resources are pods running in containers. All of these logs are centralized and are shown on the OpenShift Console:

    OpenShift Pipelines

  2. High availability: You can achieve high availability on Jenkins by using multiple Jenkins masters with HAProxy, open source software that provides a high availability load balancer and proxy server. However, you end up using more resources and the process is complex.

    For OpenShift Pipelines, the cluster itself can offer high availability if the minimum number of master nodes is chosen correctly.

  3. Self-healing: While Jenkins can implement retry logic for its failed stages, that logic is limited in scope and number.

    Kubernetes (OpenShift in this case) has self-healing baked in. All the resources run as pods, and if one goes down, the platform automatically brings up another to continue the task.

  4. Pre-container: Although Jenkins uses a number of plugins to provide cloud-native capabilities, the fact that you need a plugin just to build for a Docker environment is problematic. Traditional CI/CD servers, like Jenkins, were developed in a pre-container era; their monolithic design does not play well with a Kubernetes platform. On the contrary, OpenShift Pipelines is developed for, and runs on, Kubernetes.


Now that you know more about Tekton Pipelines and understand the benefits of a Kubernetes-native CI/CD server, you might want to gain a little hands-on experience. Be on the lookout for my upcoming tutorial for an opportunity to learn how to build a CI pipeline from GitHub to a Docker image registry.