Tekton is an open source project to configure and run continuous integration (CI) and continuous delivery (CD) pipelines within a Kubernetes cluster. In this tutorial, I walk you through basic concepts used by Tekton Pipelines. Then, you get a chance to create a pipeline to build and deploy to a container registry. You also learn how to run the pipeline, check its status, and troubleshoot issues. But before you get started, you must set up a Kubernetes environment with Tekton installed.
Install Tekton in your cluster. This tutorial was written using Tekton version 0.11.1. Be advised that if you use an older version, you may encounter some functional differences.
Tekton provides a set of extensions to Kubernetes, in the form of Custom Resources, for defining pipelines.
The following diagram shows the resources used in this tutorial. The arrows depict references from one resource to another resource.
The resources are used as follows:
A PipelineRun defines an execution of a pipeline. It references the pipeline to run.
A pipeline defines the set of Tasks that compose a pipeline.
A Task defines a set of build steps, such as compiling code, running tests, and building and deploying images.
Don’t worry, I go into more detail about each resource throughout this tutorial.
Now it’s time to create a simple pipeline that:
Builds a Docker image from source files and pushes it to your private container registry
Deploys the image to your Kubernetes cluster
Clone the repository
You should clone this project to your workstation since you will need to edit some of the YAML files before applying them to your cluster. Make sure to check out the beta-update branch after cloning.
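The project referenced throughout this tutorial is the IBM/tekton-tutorial repository on GitHub (the same URL is later passed to the pipeline as the gitUrl parameter). For example:

git clone https://github.com/IBM/tekton-tutorial.git
cd tekton-tutorial
git checkout beta-update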
Let’s work from the bottom up. First, define the Task resources needed to build and deploy the image. Then, define the pipeline resource that references the Tasks. Finally, create the PipelineRun resource needed to run the pipeline.
Create a Task to clone the Git repository
The first thing that the pipeline needs is a Task to clone the Git repository that the pipeline is building. This is such a common function that you don't need to write this Task yourself. Tekton provides a library of reusable Tasks called the Tekton Catalog. Within the catalog, you can find a description of the git-clone Task. Below is what the Task should look like:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: git-clone
spec:
  workspaces:
    - name: output
      description: The git repo will be cloned onto the volume backing this workspace
  params:
    - name: url
      description: git url to clone
      type: string
    - name: revision
      description: git revision to checkout (branch, tag, sha, ref, etc.)
      type: string
      default: master
    - name: submodules
      description: defines if the resource should initialize and fetch the submodules
      type: string
      default: "true"
    - name: depth
      description: performs a shallow clone where only the most recent commit(s) will be fetched
      type: string
      default: "1"
    - name: sslVerify
      description: defines if http.sslVerify should be set to true or false in the global git config
      type: string
      default: "true"
    - name: subdirectory
      description: subdirectory inside the "output" workspace to clone the git repo into
      type: string
      default: "src"
    - name: deleteExisting
      description: clean out the contents of the repo's destination directory (if it already exists) before trying to clone the repo there
      type: string
      default: "false"
  results:
    - name: commit
      description: The precise commit SHA that was fetched by this Task
  steps:
    - name: clone
      image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:latest
      script: |
        CHECKOUT_DIR="$(workspaces.output.path)/$(params.subdirectory)"

        cleandir() {
          # Delete any existing contents of the repo directory if it exists.
          #
          # We don't just "rm -rf $CHECKOUT_DIR" because $CHECKOUT_DIR might be "/"
          # or the root of a mounted volume.
          if [[ -d "$CHECKOUT_DIR" ]] ; then
            # Delete non-hidden files and directories
            rm -rf "$CHECKOUT_DIR"/*
            # Delete files and directories starting with . but excluding ..
            rm -rf "$CHECKOUT_DIR"/.[!.]*
            # Delete files and directories starting with .. plus any other character
            rm -rf "$CHECKOUT_DIR"/..?*
          fi
        }

        if [[ "$(params.deleteExisting)" == "true" ]] ; then
          cleandir
        fi

        /ko-app/git-init \
          -url "$(params.url)" \
          -revision "$(params.revision)" \
          -path "$CHECKOUT_DIR" \
          -sslVerify="$(params.sslVerify)" \
          -submodules="$(params.submodules)" \
          -depth="$(params.depth)"
        cd "$CHECKOUT_DIR"
        RESULT_SHA="$(git rev-parse HEAD | tr -d '\n')"
        EXIT_CODE="$?"
        if [ "$EXIT_CODE" != 0 ]
        then
          exit $EXIT_CODE
        fi
        # Make sure we don't add a trailing newline to the result!
        echo -n "$RESULT_SHA" > $(results.commit.path)
A Task can have one or more steps. Each step defines an image to run to perform the function of the step. This particular Task has one step that uses a Tekton-provided container to clone a Git repo.
A Task can also have parameters, which help to make it reusable. This Task accepts many parameters, including the URL of the Git repository to clone and the revision to check out.
Parameters can have default values provided by the Task or the values can be provided by the Pipeline and PipelineRun resources that you see later. Steps can reference parameter values by using the syntax $(params.name) where name is the name of the parameter. For example, this step uses $(params.url) to reference the url parameter value.
The Task requires a workspace where the clone is stored. From the point of view of the Task, a workspace provides a file system path where it can read or write data. Steps can reference the path using the syntax $(workspaces.name.path) where name is the name of the workspace.
Later, you learn how the workspace becomes associated with a storage volume.
Now apply the file to your cluster to create the Task:
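(The file path below is an assumption; substitute the actual location of the git-clone Task YAML in your clone of the repository.)

kubectl apply -f tekton/tasks/git-clone.yaml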
Create a Task to build an image and push it to a container registry
The next function that the pipeline needs is a Task that builds a Docker image and pushes it to a container registry. The Tekton Catalog provides a kaniko Task which does this using Google's kaniko tool. The Task is reproduced below:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kaniko
spec:
  params:
    - name: IMAGE
      description: Name (reference) of the image to build.
    - name: DOCKERFILE
      description: Path to the Dockerfile to build.
      default: ./Dockerfile
    - name: CONTEXT
      description: The build context used by Kaniko.
      default: ./
    - name: EXTRA_ARGS
      default: ""
    - name: BUILDER_IMAGE
      description: The image on which builds will run
      default: gcr.io/kaniko-project/executor:latest
  workspaces:
    - name: source
  results:
    - name: IMAGE-DIGEST
      description: Digest of the image just built.
  steps:
    - name: build-and-push
      workingDir: $(workspaces.source.path)
      image: $(params.BUILDER_IMAGE)
      # specifying DOCKER_CONFIG is required to allow kaniko to detect docker credential
      # https://github.com/tektoncd/pipeline/pull/706
      env:
        - name: DOCKER_CONFIG
          value: /tekton/home/.docker
      command:
        - /kaniko/executor
        - $(params.EXTRA_ARGS)
        - --dockerfile=$(params.DOCKERFILE)
        - --context=$(workspaces.source.path)/$(params.CONTEXT) # The user does not need to care the workspace and the source.
        - --destination=$(params.IMAGE)
        - --oci-layout-path=$(workspaces.source.path)/image-digest
      securityContext:
        runAsUser: 0
    - name: write-digest
      workingDir: $(workspaces.source.path)
      image: gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/imagedigestexporter:v0.11.1
      # output of imagedigestexport [{"key":"digest","value":"sha256:eed29..660","resourceRef":{"name":"myrepo/myimage"}}]
      command: ["/ko-app/imagedigestexporter"]
      args:
        - -images=[{"name":"$(params.IMAGE)","type":"image","url":"$(params.IMAGE)","digest":"","OutputImageDir":"$(workspaces.source.path)/image-digest"}]
        - -terminationMessagePath=image-digested
    - name: digest-to-results
      workingDir: $(workspaces.source.path)
      image: stedolan/jq
      script: |
        cat image-digested | jq -j '.[0].value' | tee /tekton/results/IMAGE-DIGEST
You can see that this Task needs a workspace as well. The workspace holds the source to build, and the pipeline provides the same workspace that it used for the git-clone Task.
The kaniko Task also uses a feature called results. A result is a value produced by a Task which can then be used as a parameter value to other Tasks. This Task declares a result named IMAGE-DIGEST which it sets to the digest of the built image. A Task sets a result by writing it to a file named /tekton/results/name where name is the name of the result (in this case IMAGE-DIGEST). Later you learn how the pipeline uses this result.
You may be wondering about how the Task authenticates to the image repository for permission to push the image. This too will be covered later on in this tutorial.
Now apply the file to your cluster to create the Task:
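(Again, the path is an assumption; adjust it to the location of the kaniko Task YAML in your clone.)

kubectl apply -f tekton/tasks/kaniko.yaml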
Create a Task to deploy an image to a Kubernetes cluster
The final function that the pipeline needs is a Task that deploys a Docker image to a Kubernetes cluster. Below is a Tekton Task that does this:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy-using-kubectl
spec:
  workspaces:
    - name: git-source
      description: The git repo
  params:
    - name: pathToYamlFile
      description: The path to the yaml file to deploy within the git source
    - name: imageUrl
      description: Image name including repository
    - name: imageTag
      description: Image tag
      default: "latest"
    - name: imageDigest
      description: Digest of the image to be used.
  steps:
    - name: update-yaml
      image: alpine
      command: ["sed"]
      args:
        - "-i"
        - "-e"
        - "s;__IMAGE__;$(params.imageUrl):$(params.imageTag);g"
        - "-e"
        - "s;__DIGEST__;$(params.imageDigest);g"
        - "$(workspaces.git-source.path)/$(params.pathToYamlFile)"
    - name: run-kubectl
      image: lachlanevenson/k8s-kubectl
      command: ["kubectl"]
      args:
        - "apply"
        - "-f"
        - "$(workspaces.git-source.path)/$(params.pathToYamlFile)"
This Task has two steps.
The first step runs sed in an Alpine Linux container to update the YAML file used for deployment with the image that was built by the kaniko Task. This step requires the YAML file to have two character strings, __IMAGE__ and __DIGEST__, which are substituted with parameter values.
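For example, the container image reference in the deployment YAML might look like the following before substitution (an illustrative snippet; the actual kubernetes/picalc.yaml in the repository may differ slightly):

    spec:
      containers:
        - name: picalc
          image: __IMAGE__@__DIGEST__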
The second step runs kubectl using Lachlan Evenson's popular k8s-kubectl container image to apply the YAML file to the same cluster where the pipeline is running.
As was the case in the git-clone and kaniko Tasks, this Task makes use of parameters in order to make the Task as reusable as possible. It also needs the workspace to get the deployment YAML file.
Later in this tutorial, I address how the Task authenticates to the cluster for permission to apply the resource(s) in the YAML file.
Now apply the file to your cluster to create the Task:
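(Path assumed, as before.)

kubectl apply -f tekton/tasks/deploy-using-kubectl.yaml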
Below is a Tekton Pipeline that runs the Tasks you defined above:
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy-pipeline
spec:
  workspaces:
    - name: git-source
      description: The git repo
  params:
    - name: gitUrl
      description: Git repository url
    - name: gitRevision
      description: Git revision to check out
      default: master
    - name: pathToContext
      description: The path to the build context, used by Kaniko - within the workspace
      default: src
    - name: pathToYamlFile
      description: The path to the yaml file to deploy within the git source
    - name: imageUrl
      description: Image name including repository
    - name: imageTag
      description: Image tag
      default: "latest"
  tasks:
    - name: clone-repo
      taskRef:
        name: git-clone
      workspaces:
        - name: output
          workspace: git-source
      params:
        - name: url
          value: "$(params.gitUrl)"
        - name: revision
          value: "$(params.gitRevision)"
        - name: subdirectory
          value: "."
        - name: deleteExisting
          value: "true"
    - name: source-to-image
      taskRef:
        name: kaniko
      runAfter:
        - clone-repo
      workspaces:
        - name: source
          workspace: git-source
      params:
        - name: CONTEXT
          value: $(params.pathToContext)
        - name: IMAGE
          value: $(params.imageUrl):$(params.imageTag)
    - name: deploy-to-cluster
      taskRef:
        name: deploy-using-kubectl
      workspaces:
        - name: git-source
          workspace: git-source
      params:
        - name: pathToYamlFile
          value: $(params.pathToYamlFile)
        - name: imageUrl
          value: $(params.imageUrl)
        - name: imageTag
          value: $(params.imageTag)
        - name: imageDigest
          value: $(tasks.source-to-image.results.IMAGE-DIGEST)
A pipeline resource contains a list of Tasks to run. Each pipeline Task is assigned a name within the pipeline; here they are clone-repo, source-to-image, and deploy-to-cluster.
The pipeline configures each Task through the Task's parameters. You can choose whether to expose a Task parameter as a pipeline parameter, set the value directly, or let the value
default inside the Task (if it's an optional parameter). For example, this pipeline exposes the CONTEXT parameter from the kaniko Task (under a different name, pathToContext), but does not expose the DOCKERFILE parameter, allowing it to default inside the Task.
This pipeline also shows how to take the result of one Task and pass it to another Task. Earlier, the kaniko Task produced a result named IMAGE-DIGEST that holds the digest of the built image. The pipeline passes that value to the deploy-using-kubectl Task by using the syntax $(tasks.source-to-image.results.IMAGE-DIGEST), where source-to-image is the name used in the pipeline to run the kaniko Task.
By default, Tekton assumes that pipeline Tasks can be executed concurrently. In this pipeline, each pipeline Task depends on the previous one, meaning they must be executed sequentially.
One way that dependencies between pipeline Tasks can be expressed is by using the runAfter key. It specifies that the Task must run after the given list of Tasks has completed. In this example, the pipeline specifies that the source-to-image pipeline Task must run after the clone-repo pipeline Task.
The deploy-using-kubectl pipeline Task must run after the source-to-image pipeline Task but it doesn't need to specify the runAfter key. This is because it references a Task result from the source-to-image pipeline Task and Tekton is smart enough to figure out that this means it must run after that Task.
Now apply the file to your cluster to create the pipeline:
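(Path assumed; use the pipeline YAML from your clone of the repository.)

kubectl apply -f tekton/pipeline/build-and-deploy-pipeline.yaml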
Before running the pipeline, you need to set up a service account so that it can access protected resources. The ServiceAccount ties together a couple of secrets containing credentials for authentication, along with role-based access control (RBAC) related resources for permission to create and modify certain Kubernetes resources.
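First, create a secret holding credentials for your container registry; the ServiceAccount below expects it to be named ibm-registry-secret. For IBM Cloud Container Registry, one common approach is to use an IAM API key (the iamapikey user name and the <API_KEY> placeholder here are assumptions based on standard IBM Cloud registry authentication):

kubectl create secret docker-registry ibm-registry-secret \
  --docker-server=<REGISTRY> \
  --docker-username=iamapikey \
  --docker-password=<API_KEY>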
<REGISTRY> is the domain name of your container registry, such as us.icr.io (you can find out the domain name of your registry using the command ibmcloud cr region).
This secret will be used to both push and pull images from your registry.
Great! Now you can create the ServiceAccount using the following YAML:
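(A condensed sketch of the pipeline-account.yaml file; the exact secret fields and RBAC rules in the repository may differ.)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline-account
secrets:
  - name: ibm-registry-secret
imagePullSecrets:
  - name: ibm-registry-secret
---
apiVersion: v1
kind: Secret
metadata:
  name: kube-api-secret
  annotations:
    kubernetes.io/service-account.name: pipeline-account
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pipeline-role
rules:
  # Illustrative rules only: the pipeline needs to create and update the
  # resources defined in the deployment YAML (a Deployment and a Service).
  - apiGroups: ["", "apps"]
    resources: ["services", "deployments"]
    verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipeline-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pipeline-role
subjects:
  - kind: ServiceAccount
    name: pipeline-account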
This YAML creates the following Kubernetes resources:
A ServiceAccount named pipeline-account. The ServiceAccount references the ibm-registry-secret secret so that the pipeline can authenticate to your private container registry
when it pushes and pulls a container image.
A secret named kube-api-secret which contains an API credential (generated by Kubernetes) for accessing the Kubernetes API. This allows the pipeline to use kubectl to talk to your cluster.
A Role named pipeline-role and a RoleBinding named pipeline-role-binding. These provide the role-based access control permissions needed for this pipeline to create and modify Kubernetes resources.
Now apply the file to your cluster to create the ServiceAccount and related resources:
kubectl apply -f tekton/pipeline-account.yaml
Create a PipelineRun
So far, you’ve defined reusable pipeline and Task resources for building and deploying an image. Now it’s time to look at how to run the pipeline. Below is a Tekton PipelineRun resource that runs the pipeline defined above. It should look something like this:
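(A sketch reconstructed from the parameters discussed below; the actual picalc-pipeline-run.yaml file in the repository may differ in detail.)

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: picalc-pr-
spec:
  pipelineRef:
    name: build-and-deploy-pipeline
  params:
    - name: gitUrl
      value: https://github.com/IBM/tekton-tutorial
    - name: pathToYamlFile
      value: kubernetes/picalc.yaml
    - name: imageUrl
      value: <REGISTRY>/<NAMESPACE>/picalc
    - name: imageTag
      value: "1.0"
  serviceAccountName: pipeline-account
  workspaces:
    - name: git-source
      persistentVolumeClaim:
        claimName: picalc-source-pvc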
Although this file is small, there is a lot going on here. Let's break it down from top to bottom:
The PipelineRun does not have a fixed name. It uses generateName to generate a name each time it is created. This is because a particular PipelineRun resource executes the pipeline only once. If you want to run the pipeline again, you cannot modify an existing PipelineRun resource to request it to rerun. You must create a new PipelineRun resource. While you could use name to assign a unique name to your PipelineRun each time you create one, it is much easier to use generateName.
The pipeline resource is identified under the pipelineRef key.
Parameters exposed by the pipeline are set to specific values, such as the Git repository to clone, the image to build, and the YAML file to deploy. This example builds a Go program that calculates an approximation of Pi. The source includes a Dockerfile which runs tests, compiles the code, and builds an image for execution.
You must edit the picalc-pipeline-run.yaml file to substitute the values of <REGISTRY> and <NAMESPACE> with the information for your private container registry.
To find the value for <REGISTRY>, enter the command ibmcloud cr region.
To find the value for <NAMESPACE>, enter the command ibmcloud cr namespace-list.
The ServiceAccount named pipeline-account, which you created earlier, is specified to provide the credentials needed for the pipeline to run successfully.
The workspace used by the pipeline to clone the Git repository is mapped to a persistent volume claim which is a request for a storage volume.
Before you run the pipeline for the first time, you must create the persistent volume claim for the workspace:
kubectl create -f tekton/picalc-pipeline-pvc.yaml
The persistent volume claim requests Kubernetes to obtain a storage volume. Since each PipelineRun references the same claim and thus the same volume, PipelineRuns can only run one at a time to avoid conflicting use of the volume. However, new functionality is being worked on to allow each PipelineRun to create its own persistent volume claim and thus use its own volume.
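For reference, the claim looks roughly like this (a sketch; the requested size and storage class in the repository's file may differ, and the 20Gi shown here simply matches the bound capacity in the output below):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: picalc-source-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi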
Before continuing, check to see that the persistent volume claim is bound:
$ kubectl get pvc picalc-source-pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
picalc-source-pvc   Bound    pvc-662946bc-57f2-4ba5-982c-b0fa9db1d065   20Gi       RWO            ibmc-file-bronze   2m
Run the pipeline
All the pieces are in place to run the pipeline:
$ kubectl create -f tekton/run/picalc-pipeline-run.yaml
pipelinerun.tekton.dev/picalc-pr-c7hsb created
Note that you’re using kubectl create here instead of kubectl apply. As mentioned previously, a given PipelineRun resource can run a pipeline only once. This means you need to create a new one each time you want to run the pipeline. kubectl will respond with the generated name of the PipelineRun resource.
Let's use the tkn CLI to check the status of the PipelineRun. While you can check the status of the pipeline using the kubectl describe command, the tkn CLI provides much nicer output:
$ tkn pipelinerun describe picalc-pr-c7hsb
Name: picalc-pr-c7hsb
Namespace: default
Pipeline Ref: build-and-deploy-pipeline
Service Account: pipeline-account
Status
STARTED DURATION STATUS
2 minutes ago --- Running
Resources
No resources
Params
NAME             VALUE
gitUrl           https://github.com/IBM/tekton-tutorial
pathToYamlFile   kubernetes/picalc.yaml
imageUrl         us.icr.io/gregd/picalc
imageTag         1.0
Taskruns
NAME TASK NAME STARTED DURATION STATUS
picalc-pr-c7hsb-source-to-image-s8rrg source-to-image 56 seconds ago --- Running
picalc-pr-c7hsb-clone-repo-pvbsk clone-repo 2 minutes ago 1 minute Succeeded
This tells you that the pipeline is running. The clone-repo pipeline Task completed successfully and the source-to-image pipeline Task is currently running.
Continue to rerun the command to check the status. If the pipeline runs successfully, the description eventually should look like this:
$ tkn pipelinerun describe picalc-pr-c7hsb
Name: picalc-pr-c7hsb
Namespace: default
Pipeline Ref: build-and-deploy-pipeline
Service Account: pipeline-account
Status
STARTED DURATION STATUS
12 minutes ago 2 minutes Succeeded
Resources
No resources
Params
NAME             VALUE
gitUrl           https://github.com/IBM/tekton-tutorial
pathToYamlFile   kubernetes/picalc.yaml
imageUrl         us.icr.io/gregd/picalc
imageTag         1.0
Taskruns
NAME                                      TASK NAME           STARTED          DURATION     STATUS
picalc-pr-c7hsb-deploy-to-cluster-mwvfs   deploy-to-cluster   9 minutes ago    10 seconds   Succeeded
picalc-pr-c7hsb-source-to-image-s8rrg     source-to-image     10 minutes ago   1 minute     Succeeded
picalc-pr-c7hsb-clone-repo-pvbsk          clone-repo          12 minutes ago   1 minute     Succeeded
Check the status of the Kubernetes deployment. It should be ready.
$ kubectl get deploy picalc
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
picalc   1/1     1            1           9m
You can curl the application using its NodePort service. First, display the nodes and choose one of the node's external IP addresses. Then, display the service to get its NodePort.
$ kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE     VERSION       INTERNAL-IP    EXTERNAL-IP      OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
10.221.22.11   Ready    <none>   7d23h   v1.16.8+IKS   10.221.22.11   150.238.236.26   Ubuntu 18.04.4 LTS   4.15.0-96-generic   containerd://1.3.3
10.221.22.49   Ready    <none>   7d23h   v1.16.8+IKS   10.221.22.49   150.238.236.21   Ubuntu 18.04.4 LTS   4.15.0-96-generic   containerd://1.3.3

$ kubectl get svc picalc
NAME     TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
picalc   NodePort   172.21.199.71   <none>        8080:30925/TCP   9m

$ curl 150.238.236.26:30925?iterations=20000000
3.1415926036
Debug a failed PipelineRun
Let's take a look at what a PipelineRun failure would look like. To begin, edit the PipelineRun YAML and change the gitUrl parameter to a non-existent Git repository to force a failure.
Then, create a new PipelineRun and describe it after letting it run for a minute or two.
$ kubectl create -f tekton/run/picalc-pipeline-run.yaml
pipelinerun.tekton.dev/picalc-pr-sk7md created
$ tkn pipelinerun describe picalc-pr-sk7md
Name: picalc-pr-sk7md
Namespace: default
Pipeline Ref: build-and-deploy-pipeline
Service Account: pipeline-account
Status
STARTED DURATION STATUS
2 minutes ago 41 seconds Failed
Message
TaskRun picalc-pr-sk7md-clone-repo-8gs25 has failed ("step-clone" exited with code 1 (image: "gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init@sha256:bee98bfe6807e8f4e0a31b4e786fd1f7f459e653ed1a22b1a25999f33fa9134a"); for logs run: kubectl -n default logs picalc-pr-sk7md-clone-repo-8gs25-pod-v7fg8 -c step-clone)
Resources
No resources
Params
NAME VALUE
gitUrl https://github.com/IBM/tekton-tutorial-not-there
pathToYamlFile kubernetes/picalc.yaml
imageUrl us.icr.io/gregd/picalc
imageTag 1.0
Taskruns
NAME TASK NAME STARTED DURATION STATUS
picalc-pr-sk7md-clone-repo-8gs25 clone-repo 2 minutes ago 41 seconds Failed
The output tells you that the clone-repo pipeline Task failed. The Message also tells you how to get the logs from the pod which was used to run the Task:
for logs run: kubectl -n default logs picalc-pr-sk7md-clone-repo-8gs25-pod-v7fg8 -c step-clone
If you run that kubectl logs command, you see that there is a failure trying to fetch the non-existing Git repository. An even easier way to get the logs from a PipelineRun is to use the tkn CLI:
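For example, to fetch only the logs of the failed clone-repo Task from this run (the run name comes from the output above):

tkn pipelinerun logs picalc-pr-sk7md -t clone-repo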
If you omit the -t flag, then the command will get the logs for all pipeline Tasks that executed. You can also get the logs for the last PipelineRun for a particular pipeline using this command:
tkn pipeline logs build-and-deploy-pipeline -L
You should delete a PipelineRun when you no longer have a need to reference its logs. Deleting the PipelineRun deletes the pods that were used to run the pipeline Tasks.
Summary
Tekton provides simple, easy-to-learn features for constructing CI/CD pipelines that run on Kubernetes. This tutorial covered the basics to get you started building your own pipelines. There are more features available and many more planned for upcoming releases.