Introduction
Living on the cloud is a series dedicated to helping developers and operations teams learn how to build and run applications with a cloud-native mindset.
In the previous two tutorials of the series, you set up a Kubernetes cluster, deployed a Spring Boot application to it, and connected that application to a cloud-hosted database. So far, you performed those operations manually, but this represents a real problem when thinking about the scale at which enterprises operate. It’s relatively easy to deploy and configure a single application to a single region manually, but it’s very different when you are responsible for dozens of applications across multiple regions.
To address that challenge, this tutorial walks through the process of setting up an automated deployment pipeline. It focuses on the concepts of continuous integration and continuous delivery (CI/CD), the motivating factors behind the CI/CD practices, goals when adopting them, and how to implement them in a Kubernetes context. In this tutorial, you have three objectives:
- Construct an automated pipeline that builds and deploys your application to Kubernetes.
- Understand the technical details underpinning the Tekton tool that you will use to implement the automated pipeline.
- Understand the principles that motivate the drive towards creating automated deployment pipelines.
Hello, Tekton
There are a number of options for implementing an automated deployment pipeline, including Tekton. The Tekton framework is built on top of the Kubernetes API and runs on a Kubernetes cluster. Currently in beta development, Tekton provides a lot of flexibility and reusability when creating a deployment pipeline, which offers both benefits and drawbacks. The drawbacks are primarily related to its learning curve. To help flatten that curve, once you have set up your initial pipeline, I will walk through what each element does, step by step.
Let’s first step through configuring your IBM Cloud account and Kubernetes cluster to execute a deployment pipeline that reads from a private GitLab instance, builds your project, and deploys it to your Kubernetes cluster. Note that you will take a brief step back in functionality with this tutorial. For now, your application will not connect to the database that you set up in the previous tutorial, Living on the cloud, Unit 2. A future tutorial in this series may cover how to configure a database connection in an automated deployment pipeline.
Working with Tekton on IBM Cloud
While Tekton can be managed entirely through the tkn command-line interface (CLI), it may not always be the easiest way to get a complete view of your deployment infrastructure. To make Tekton easier to work with, IBM Cloud provides integration options with Tekton. Let's configure your IBM Cloud account and Kubernetes cluster to work with Tekton.
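Even so, the tkn CLI is handy for quick spot checks from a terminal. The following is a sketch, assuming that tkn is installed and that kubectl points at the cluster and namespace where your pipeline resources run:

# List the pipelines defined in the current namespace
tkn pipeline list

# List recent pipeline runs and their statuses
tkn pipelinerun list

# Follow the logs of a specific run (replace the placeholder name)
tkn pipelinerun logs name-of-pipeline-run -f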
Prerequisites
- An IBM Cloud account.
- The Kubernetes cluster that you set up in Unit 1 of this series.
- An open terminal and the Kubernetes CLI (kubectl) connected to your Kubernetes cluster. To connect kubectl to your cluster, do the following (a quick verification sketch follows this list):
  - Go to the IBM Cloud Kubernetes Service clusters dashboard.
  - Select your Kubernetes cluster from the list.
  - Select Access from the menu and follow the steps on the page.
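As a sanity check that kubectl is connected, you can list the cluster's worker nodes. This sketch assumes your cluster from Unit 1 is named living-on-the-cloud:

# Download the kubeconfig for your cluster and point kubectl at it
ibmcloud ks cluster config -c living-on-the-cloud

# If the connection works, this prints the cluster's worker nodes
kubectl get nodes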
Estimated time
This tutorial will take you approximately one hour to complete.
Steps
Step 1. Set up a toolchain
The integration of IBM Cloud and Tekton is part of the IBM Cloud DevOps solution. The integration is done through a toolchain, which is a place for your organization or team to integrate code repositories, deployment pipelines, and secret stores; monitor quality; and perform many other common software development and IT operations tasks.
To start building an automated pipeline, you must first create a toolchain, which can be done as follows:
- Sign in to IBM Cloud.
- Go to the IBM Cloud DevOps dashboard.
- From the Location list, select the region in which your Kubernetes cluster is located.
- Click Create Toolchain.
- Within the Other Templates section, select Build your own toolchain.
- In the Toolchain Name field, type living-on-the-cloud-toolchain.
- Confirm that the location displayed in the Select Region field matches the region in which your Kubernetes cluster is located.
- The Select a resource group field should display as Default.
- Click Create.
These steps created an empty toolchain. Next, you will add tools to it.
Step 2. Set up the Key Protect service
When deploying and configuring applications, you often need to interact with sensitive systems. API keys, certificates, passwords, and other similar sensitive information are required to interact with these systems. You need to have access to these keys, but also must ensure that they are stored and managed securely. IBM Cloud provides the Key Protect service to store secrets, which also integrates with the toolchain service. For this tutorial, you need to store keys as you set up the automated pipeline, so set up the Key Protect service as follows:
- In a new browser tab, open the Key Protect service catalog page.
  Note: The first 20 keys that you create within Key Protect are free, which is more than enough to complete this tutorial.
- Confirm that the location displayed in the Select Region field matches the region in which your Kubernetes cluster is located.
- In the Service name field, enter living-on-the-cloud-keys. Leave the remaining fields with their default settings.
- Click Create.
  Note: Keep this browser tab open; you will return to it later in Step 5.
- Return to the browser tab where you created your toolchain in Step 1. (Alternatively, go to the main Toolchains page, select the appropriate region from the Location list, and click on the name of the toolchain you created in Step 1, living-on-the-cloud-toolchain, to open the toolchain page.)
- Click the Add Tool button.
- In the Categories menu, select Secrets.
- Click Key Protect.
- In the Name field, enter living-on-the-cloud-key-protect. The other fields on the page should be prefilled.
- Click the Create Integration button.
Step 3. Set up a private worker
To run a Tekton pipeline, you must configure a worker that can read and execute the Tekton definition files that you will bring in within a few moments. To establish a private worker:
- Click the Add Tool button.
- Select the Delivery Pipeline Private Worker card.
- In the Name field, type living-on-the-cloud-tekton-worker.
- Click the New button located next to the Service ID dialog box. The Create a new Service ID API Key window opens with the Name field prefilled as Service ID for living-on-the-cloud-toolchain.
- Select the Save this key in a secrets store for reuse checkbox to store it in the Key Protect service that you set up in Step 2. The Provider and Secret name fields will be prefilled with the appropriate information.
- Click Ok.
- Click the Copy to clipboard icon in the Service ID API Key dialog box, open a text (.txt) file, and paste the value into it. You will need this value in an upcoming task.
- Click the Create Integration button.
- On your toolchain page, select the newly created Delivery Pipeline Private Worker card.
- From the side menu, click Getting Started to add private worker support.
- In the Service ID API Key dialog box, paste the value that you previously saved in the text file.
- In the Worker Name field, type living-on-the-cloud-tekton-worker.
- Click the Generate button.
- A few code blocks will appear on your screen. Copy and execute the commands within your command terminal to set up a Tekton worker on your Kubernetes cluster. You can sanity-check the result as shown below.
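Before moving on, you can confirm that the generated commands created the worker's resources on your cluster. The exact namespace and pod names depend on the commands that were generated for you, so treat this as a rough check rather than an exact recipe:

# Look for a namespace created for the private worker
kubectl get namespaces

# Scan all namespaces and confirm the worker's pods are Running
kubectl get pods --all-namespaces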
Step 4. Add Git repositories
Next, you must add a few Git repositories. IBM Cloud has a privately hosted GitLab instance, and you will use it to host three Git repos: the repo that contains the application code and two Tekton catalogs. I created one of the catalogs for this tutorial; the other is from the IBM Cloud DevOps product development team. I'll return to the subject of Tekton catalogs a bit later in this tutorial. For now, add the Git repos to your toolchain as follows:
- Return to your toolchain page. (If necessary, go to the main Toolchains page, select the appropriate region from the Location list, and select living-on-the-cloud-toolchain to open the toolchain page.)
- Click the Add Tool button.
- Select the Git Repos and Issue Tracking card.
- In the Repository type list, select Clone.
- In the Source repository URL field, enter https://github.com/IBM/living-on-the-cloud.
- Clear the Enable Issues checkbox.
- Click the Create Integration button.
- Repeat the preceding tasks in this step for the following two Git repos:
  - Living on the Cloud Tekton Catalog repo: https://github.com/IBM/living-on-the-cloud-tekton-catalog
  - IBM Cloud DevOps Open-Toolchain Tekton Catalog repo: https://github.com/open-toolchain/tekton-catalog
Step 5. Create a Git access token and save it to Key Protect
- On your toolchain page, click the Git Repos and Issue Tracking card.
- From the main Git repo page, click the User icon in the header and select Settings from the drop-down menu.
- In the User Settings menu, click Access Tokens.
- In the Name field, type a meaningful name for the token (for example, the name of your application).
- In the Expires at field, select a future date for the token to expire.
- Select the read_api checkbox to grant read access to the API.
- Click Create personal access token.
- Copy the personal access token.
- Open your terminal, substitute the copied token into the following command, and execute it:
  echo -n COPY_PERSONAL_ACCESS_TOKEN_HERE | base64
- Copy the output of the command (the base64-encoded key material).
- Switch to your browser tab for the Key Protect service.
- Click the Add Key button.
- In the Add a new key pane, click the Import your own key radio button.
- Select Standard key from the Key type drop-down list.
- In the Name field, type living-on-the-cloud-git-access-token.
- In the Key material field, paste the base64-encoded key material that you copied earlier.
- Click Import key.
Step 6. Set up a pipeline
- Return to your toolchain page. (If necessary, go to the main Toolchains page, select the appropriate region from the Location list, and select living-on-the-cloud-toolchain to open the toolchain page.)
- Click the Add Tool button.
- Select the Delivery Pipeline card.
- In the Pipeline name field, type storm-tracker-deployment-pipeline.
- Select Tekton from the Pipeline type drop-down list.
- Click Create Integration.
Step 7. Add the Tekton definitions
- On your toolchain page, select the Delivery Pipeline card.
- In the PipelineRuns panel, select Definitions (if it is not already selected).
- Click the Add button.
- In the Definition Repository pane, select living-on-the-cloud from the Repository drop-down list.
- Select 3-automating-deployment from the Branch drop-down list.
- In the Path field, type /start/storm-tracker.
- Click Add.
- From the Definitions page, click the Add button again.
- In the Definition Repository pane, select living-on-the-cloud-tekton-catalog from the Repository drop-down list.
- Select master from the Branch drop-down list.
- Leave the Path field blank.
- Click Add.
- From the Definitions page, click the Add button again.
- In the Definition Repository pane, select tekton-catalog from the Repository drop-down list.
- Select master from the Branch drop-down list.
- In the Path field, type /git.
- Click Add.
- On the Definitions page, confirm that the three repositories are listed in the table.
- Click Save.
Step 8. Set the Tekton worker
- In the PipelineRuns panel of the Definitions page, select Worker.
- From the Worker drop-down list, select living-on-the-cloud-tekton-worker.
- Click Save.
Step 9. Define the trigger
- In the PipelineRuns panel of the Definitions page, select Triggers.
- Click Add Trigger and select Git Repository from the drop-down menu.
- Select living-on-the-cloud from the Repository drop-down list.
- Select the Branch radio button (it should be the default setting).
- Select 3-automating-deployment from the Branch drop-down list.
- Select the When a commit is pushed checkbox.
- Select gitlab-push-event-listener from the EventListener drop-down list.
- Click Save.
Step 10. Define the environment properties
- In the PipelineRuns panel of the Definitions page, select Environment properties.
- Click Add and select Secure from the drop-down menu.
- In the Property Name dialog box, type apikey.
- Click the Select a secret from a secrets store icon, which looks like a key.
- Select Key Protect: living-on-the-cloud-key-protect from the Provider drop-down list (it should be the default selection).
- Select living-on-the-cloud-toolchain from the Secret name drop-down list.
- Click Ok.
- Click Add and select Secure from the drop-down menu again.
- In the Property Name dialog box, type git-access-token.
- Click the Select a secret from a secrets store icon, which looks like a key.
- Select Key Protect: living-on-the-cloud-key-protect from the Provider drop-down list (again, it should be the default selection).
- Select living-on-the-cloud-git-access-token from the Secret name drop-down list.
- Click Ok.
- Click Save.
With all of that done, you can finally execute the pipeline.
Step 11. Execute the pipeline
As you configured in the Triggers section, the pipeline executes when a commit is pushed to the living-on-the-cloud repository. Go to that repository, make a change to the README file, and commit the change.
Note: This change can be done directly through the browser editor that GitLab offers, or made locally and then pushed to the GitLab repo.
A few moments after you push a change to the living-on-the-cloud repository, a pipeline run should start to execute. You can click on the pipeline run to view its progress.
The pipeline run should complete in less than 3 minutes.
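After the run completes, you can confirm from your terminal that the freshly built image was rolled out. This is a small sketch, assuming the default names used throughout this series:

# The storm-tracker pod should have restarted recently
kubectl get pods

# Confirm that the deployment now references the image tag built by the pipeline
kubectl describe deployment storm-tracker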
You now have a successfully executing pipeline! Following is some more information to help you understand how it works.
Tekton explained
During this tutorial, you successfully set up a simple automated deployment pipeline using Tekton. However, you may be wondering what happens within this pipeline and how it is accomplished. Let’s take some time to step through the basic principles of how Tekton works.
A central architectural decision within Tekton is reusability. Common pipeline elements, such as reading from a Git repo, building a Java artifact, and pushing an image to a container repository, vary little between projects, so being able to reuse those elements helps you reduce maintenance. The downside of this reusability is that it adds abstraction layers that can make it a bit more difficult to understand the relationship between concepts. Even the relatively simple pipeline that you defined in this tutorial is made up of 8 separate files, and some of the files contain multiple Tekton resources.
To help you conceptualize the big picture of what is happening, the following flow diagram visually represents the pipeline that you created in this tutorial.
Following are detailed descriptions of each element in the flow diagram:
- gitlab-push-event-listener
- gitlab-push-trigger-binding
- project-trigger-template
- PipelineRun
- service-account
- project-pipeline
- Tasks
gitlab-push-event-listener
gitlab-push-event-listener is where an automated build starts in your Tekton pipeline. gitlab-push-event-listener defines an EventListener. In Tekton, an EventListener acts as a sink by creating a pod on a Kubernetes cluster, which provides an addressable HTTP POST endpoint that accepts JSON messages. I will share more about how to read an incoming JSON message in a moment, but first let's look at how gitlab-push-event-listener is defined:
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
name: gitlab-push-event-listener
spec:
serviceAccountName: service-account
triggers:
- name: git-push-event-trigger
template:
name: project-trigger-template
bindings:
- name: gitlab-push-trigger-binding
The first three fields are standard across all Kubernetes types:

- apiVersion defines the API version of the Kubernetes resource that you are using, which is triggers.tekton.dev/v1alpha1. Tekton Triggers is a sub-project of the Tekton project that focuses on creating Kubernetes resources from events. As I write this tutorial, Tekton Triggers is at version v1alpha1, but the structure of these files might differ by the time you read this.
- kind is another common Kubernetes field that defines the type of resource, which is EventListener here.
- name is part of the metadata and provides a unique identifier for looking up this resource after you deploy it on a Kubernetes cluster.
Within the spec of EventListener are the following fields:

- serviceAccountName is the name of the service-account to use when creating resources from the payload that is sent.
- triggers is a list of triggers to activate when the EventListener receives an event. In this case, the name of the trigger is git-push-event-trigger, which uses project-trigger-template as the TriggerTemplate and gitlab-push-trigger-binding as the TriggerBinding.
In addition, when you set up your trigger in Step 9, Tekton configured a webhook within the living-on-the-cloud Git repo that you added in Step 4.
Learn more about EventListener in the official API documentation.
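Because an EventListener is backed by an ordinary pod and service, you can inspect the sink directly. This sketch assumes Tekton Triggers' usual convention of prefixing the listener name with el- for the service; the namespace on an IBM Cloud private worker may differ:

# List the EventListener resources in the current namespace
kubectl get eventlisteners

# The sink is exposed through a service named el-<listener-name>
kubectl get svc el-gitlab-push-event-listener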
gitlab-push-trigger-binding
An EventListener sets up an addressable HTTP endpoint that can receive an event as a JSON payload. However, simply receiving an event is often not enough. Usually, you want to inspect the payload of the JSON message for information about how to act on that event. This is where a TriggerBinding comes into play. A TriggerBinding can be used to inspect a payload and pass the extracted values to a TriggerTemplate, which I will explain in a moment. First, let's look at the definition of gitlab-push-trigger-binding:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
name: gitlab-push-trigger-binding
spec:
params:
- name: git-repo-url
value: $(event.repository.git_http_url)
- name: commit-id
value: $(event.checkout_sha)
- params is a list of parameters that will be passed on to a TriggerTemplate.
- git-repo-url retrieves the URL of the living-on-the-cloud GitLab repo. The $(event.repository.git_http_url) value is the path to the field that contains the URL of the Git repo.
- commit-id retrieves the specific commit to be cloned from the Git repo. As with git-repo-url, the $(event.checkout_sha) value is the path to the field within the JSON body that is sent from the GitLab repo.
Learn more about TriggerBinding in the official API documentation.
Note: In most examples and documentation of TriggerBinding, the top-level object is body. However, on IBM Cloud, the top-level object is event. Keep this in mind when you reference other examples.
Learn more about reading a webhook message in the GitLab Docs and GitHub Developer Blog.
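To make those paths concrete, here is a heavily trimmed sketch of the JSON payload that GitLab sends for a push event; the values are placeholders, and the full shape is documented in the GitLab Docs. On IBM Cloud, remember that this payload is referenced under the top-level event object:

{
  "object_kind": "push",
  "checkout_sha": "da1560886d4f094c3e6c9ef40349f7d38b5d27d7",
  "repository": {
    "name": "living-on-the-cloud",
    "git_http_url": "https://example.git.cloud.ibm.com/your-user/living-on-the-cloud.git"
  }
}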
project-trigger-template
A TriggerTemplate is a resource for creating other resources that are used by Tekton, typically by a pipeline. Along with project-pipeline, which defines the Tekton pipeline for your storm-tracker application, the definition of project-trigger-template is located in the tekton-build.yaml file. This file is co-located in the living-on-the-cloud application repo, a decision that I cover in more depth under the Assembly instructions included section later in this tutorial.

Let's take a look at what is happening within the definition of project-trigger-template:
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
name: project-trigger-template
spec:
params:
- name: path-to-context
description: The path to the build context, used by Kaniko
default: /start/storm-tracker
- name: path-to-deployment-file
    description: The path to the YAML file that describes how to deploy the application
default: deployment.yaml
- name: path-to-dockerfile
description: The path to the docker image build file
default: Dockerfile
- name: api-url
description: The api url for interacting with ibm cloud
default: cloud.ibm.com
- name: container-repo-url
description: Base url for container repository
default: us.icr.io
- name: container-repo-namespace
description: Namespace where image is located
default: living-on-the-cloud
- name: deployment-image
description: Name of image to be deployed
default: storm-tracker
- name: name-of-cluster
    description: The name of the cluster to deploy the image to
default: living-on-the-cloud
- name: cluster-region
description: The region where the cluster resides
default: us-south
- name: cluster-namespace
description: The namespace being used within the k8s cluster
default: default
- name: deployment-image-placeholder
description: Placeholder value within deployment YAML to be replaced
default: IMAGE
- name: git-repo-url
description: URL to the Git repo to be cloned
- name: commit-id
description: The revision to build and deploy.
  - name: git-access-token
    description: The access token used to authenticate with the Git repo
  - name: apikey
    description: Service account API key for interacting with IBM Cloud. Note that the specific name apikey has special relevance to many IBM Cloud resources, so it should not be changed.
resourcetemplates:
- apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: $(params.deployment-image)-build-
spec:
serviceAccountName: service-account
pipelineRef:
name: project-pipeline
params:
- name: path-to-context
value: $(params.path-to-context)
- name: path-to-deployment-file
value: $(params.path-to-deployment-file)
- name: path-to-dockerfile
value: $(params.path-to-dockerfile)
- name: api-url
value: $(params.api-url)
- name: container-repo-url
value: $(params.container-repo-url)
- name: container-repo-namespace
value: $(params.container-repo-namespace)
- name: cluster-namespace
value: $(params.cluster-namespace)
- name: deployment-image
value: $(params.deployment-image)
- name: name-of-cluster
value: $(params.name-of-cluster)
- name: cluster-region
value: $(params.cluster-region)
- name: git-access-token
value: $(params.git-access-token)
- name: git-repo-url
value: $(params.git-repo-url)
- name: commit-id
value: $(params.commit-id)
- name: deployment-image-placeholder
value: $(params.deployment-image-placeholder)
workspaces:
- name: git-repo
persistentVolumeClaim:
claimName: $(uid)-pvc
- apiVersion: v1
stringData:
username: iamapikey
password: $(params.apikey)
kind: Secret
type: kubernetes.io/basic-auth
metadata:
name: ibm-cr-secret
annotations:
tekton.dev/docker-0: $(params.container-repo-url)
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: $(uid)-pvc
spec:
resources:
requests:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
- spec.params defines the parameters that are either supplied by an EventListener or TriggerBinding, or given useful default values, for use in the resourcetemplates section. More information about defaults is provided within the Meaningful defaults section of this tutorial.
- resourcetemplates is the section that defines the Tekton resources to be created. Check the official Tekton documentation for the list of supported resources that can be defined here. As of the date I wrote this tutorial, this feature was in alpha development.
- PipelineRun defines the PipelineRun that is used in the execution of your pipeline. This is covered in more depth within the PipelineRun section of this tutorial.
- Secret defines a secret that is used in association with the service-account, which is covered in more detail within the service-account section of this tutorial.
- PersistentVolumeClaim defines a persistent volume that is used as the workspace for the pipeline. More about that is discussed within the project-pipeline section of this tutorial.
For more information about TriggerTemplate, visit the official API documentation.
PipelineRun
A PipelineRun is a specific instance of a Pipeline execution; it is roughly equivalent to an instance of a class in Java. A PipelineRun defines the specific values to be used in a Pipeline. Let's look at the PipelineRun resource defined in the TriggerTemplate:
- apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
generateName: $(params.deployment-image)-build-
spec:
serviceAccountName: service-account
pipelineRef:
name: project-pipeline
params:
- name: path-to-context
value: $(params.path-to-context)
- name: path-to-deployment-file
value: $(params.path-to-deployment-file)
- name: path-to-dockerfile
value: $(params.path-to-dockerfile)
- name: api-url
value: $(params.api-url)
- name: container-repo-url
value: $(params.container-repo-url)
- name: container-repo-namespace
value: $(params.container-repo-namespace)
- name: cluster-namespace
value: $(params.cluster-namespace)
- name: deployment-image
value: $(params.deployment-image)
- name: name-of-cluster
value: $(params.name-of-cluster)
- name: cluster-region
value: $(params.cluster-region)
- name: git-access-token
value: $(params.git-access-token)
- name: git-repo-url
value: $(params.git-repo-url)
- name: commit-id
value: $(params.commit-id)
- name: deployment-image-placeholder
value: $(params.deployment-image-placeholder)
workspaces:
- name: git-repo
persistentVolumeClaim:
claimName: $(uid)-pvc
Within the spec of PipelineRun are the following fields:

- pipelineRef specifies the target pipeline, which is project-pipeline here.
- params is the array of parameters to be passed into the pipeline. The name of a param must match the name of the param within the pipeline for it to be mapped.
- workspaces is a location to store resources. In this case, you are storing a cloned Git repository in the workspace.
For more information about PipelineRun, read the official API documentation.
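When you want to see the concrete values a run was created with, the tkn CLI can describe it. In this pipeline, run names come from the generateName field (storm-tracker-build-), so the generated suffix in this sketch is a placeholder:

# List recent runs; names look like storm-tracker-build-<suffix>
tkn pipelinerun list

# Show the parameters, workspaces, and task statuses of one run
tkn pipelinerun describe storm-tracker-build-abc12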
service-account
When a pipeline is executed, Tekton creates resources and needs a service account to perform these actions.
apiVersion: v1
kind: ServiceAccount
metadata:
name: service-account
secrets:
- name: ibm-cr-secret
In this ServiceAccount definition, you associate the service account with the ibm-cr-secret that you created in project-trigger-template. The secret creates an association with a container repository located on IBM Cloud. When the build-image-and-push-image task is executed, the secret is used to authenticate with the container repository.
project-pipeline
A pipeline defines the parameters, workspaces, and tasks that are executed to accomplish the pipeline's goal. As I mentioned in the PipelineRun section, a pipeline is analogous to a class in Java.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
name: project-pipeline
spec:
params:
- name: path-to-context
- name: path-to-deployment-file
- name: path-to-dockerfile
- name: cluster-namespace
- name: api-url
- name: container-repo-url
- name: container-repo-namespace
- name: deployment-image
- name: deployment-image-placeholder
- name: name-of-cluster
- name: cluster-region
- name: git-access-token
- name: git-repo-url
- name: commit-id
workspaces:
- name: git-repo
description: Workspace for holding the cloned source code from the Git repo
tasks:
- name: git-clone
taskRef:
name: git-clone-repo
params:
- name: git-access-token
value: $(params.git-access-token)
- name: repository
value: $(params.git-repo-url)
- name: revision
value: $(params.commit-id)
workspaces:
- name: output
workspace: git-repo
- name: build-artifact-from-source
taskRef:
name: maven-build-java-artifact-from-source
runAfter:
- git-clone
params:
- name: mvn-goals
type: array
value: ["package"]
- name: path-to-context
value: $(params.path-to-context)
workspaces:
- name: source
workspace: git-repo
- name: build-image-send-to-cr
taskRef:
name: build-image-and-push-image
runAfter:
- build-artifact-from-source
params:
- name: container-repo-url
value: $(params.container-repo-url)
- name: container-repo-namespace
value: $(params.container-repo-namespace)
- name: deployment-image
value: $(params.deployment-image)
- name: path-to-context
value: $(params.path-to-context)
workspaces:
- name: source
workspace: git-repo
- name: update-image-ref-in-deployment
taskRef:
name: update-yaml-file
runAfter:
- build-image-send-to-cr
params:
- name: path-to-deployment-file
value: $(params.path-to-deployment-file)
- name: path-to-context
value: $(params.path-to-context)
- name: placeholder-name
value: $(params.deployment-image-placeholder)
- name: replacement-value
value: "$(tasks.build-image-send-to-cr.results.full-image-path)"
workspaces:
- name: source
workspace: git-repo
- name: deploy-image-to-ibm-cloud
taskRef:
name: deploy-image-to-ibm-cloud
runAfter:
- update-image-ref-in-deployment
params:
- name: path-to-deployment-file
value: $(params.path-to-deployment-file)
- name: path-to-context
value: $(params.path-to-context)
- name: name-of-cluster
value: $(params.name-of-cluster)
- name: cluster-region
value: $(params.cluster-region)
- name: api-url
value: $(params.api-url)
- name: cluster-namespace
value: $(params.cluster-namespace)
workspaces:
- name: source
workspace: git-repo
- params is the defined list of parameters that will be used within the pipeline.
- workspaces is the list of locations that can be referenced within the execution of the pipeline for retrieving and storing artifacts and outputs.
  - name is the name of a workspace within the pipeline for reference.
- tasks is the list of tasks to be executed by the pipeline when it is run.
  - name is the name of the task within the pipeline.
  - taskRef is a reference to the name of the defined task resource.
  - runAfter references the task(s) that this task should be executed after. Note: Not setting this will have the task executed at the start of the pipeline execution.
  - params are the values to be passed into the task when it's executed.
  - workspaces are the workspaces that will be used by the task for retrieving or storing artifacts and outputs.
    - name is the name of the workspace defined by the task.
    - workspace is the name of the workspace defined by the pipeline that will be passed to the task.
For more information about Pipeline, read the official API documentation.
Tasks
Tasks are where the actual work towards reaching the pipeline's goal, which is building and deploying an application, happens. To continue the theme of comparing Tekton concepts to a Java application: if a Pipeline is a class and a PipelineRun is an instance of a class, then tasks are the methods of the class.
A task can be composed of a set of steps. Again, to continue the programming metaphor, the steps within a task should be cohesive with the goal of the task. Just as an entire program can be written in a single class, an entire Tekton pipeline can be written in a single task; in both cases, though, you are left with something that is difficult to maintain and reuse.
Let's take a look at the content of the tasks in the pipeline. Note that the git-clone task is not covered in this section because it is part of a public Tekton catalog, which is covered in more depth under the Tekton catalogs section.
For more information about Tasks, check the official API documentation.
maven-build-java-artifact-from-source
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: maven-build-java-artifact-from-source
spec:
workspaces:
- name: source
params:
- name: mvn-goals
type: array
description: Goals to be run during maven build step
default: ["package"]
- name: path-to-context
description: Path to maven POM.xml file
default: .
steps:
- name: list-src
image: alpine
command:
- "ls"
args:
- "$(workspaces.source.path)"
- name: mvn
image: gcr.io/cloud-builders/mvn
workingDir: /workspace/source/$(params.path-to-context)
command: ["/usr/bin/mvn"]
args:
- "$(params.mvn-goals)"
- workspaces is a list of workspaces that will be used by the task for retrieval and storage of artifacts and task outputs.
- params is a list of parameters provided to the task from the pipeline, or the default values that are defined. (Note: In Tekton, there are two types of parameters: strings and arrays. By default, parameters are strings and must be explicitly defined as an array if an array value is used.)
- steps is a list of the steps to be executed by the task. Steps reference a container image to be executed, along with any commands or arguments to be passed into the container. Volumes can also be mounted into the image through the workingDir field. (Note: The path /workspace/source/ in workingDir matches the workspace named source in the spec. A workspace named cache would similarly be referenced as /workspace/cache/.)
Steps are executed sequentially in the order that they are defined within the file. Switching the position of the mvn step with the list-src step would change their execution order.
build-image-and-push-image
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: build-image-and-push-image
spec:
workspaces:
- name: source
params:
- name: path-to-context
description: The path to the build context, used by Kaniko - within the workspace
default: .
- name: path-to-dockerfile
description: The path to the dockerfile to build
default: Dockerfile
- name: container-repo-url
description: Base url to the container repo
- name: container-repo-namespace
description: Namespace image is being stored under
- name: deployment-image
description: Name of image to be deployed
results:
- name: full-image-path
description: The full path to the newly created image
steps:
- name: build-and-push
image: gcr.io/kaniko-project/executor
env:
- name: BUILD_NUMBER
valueFrom:
fieldRef:
fieldPath: metadata.annotations['devops.cloud.ibm.com/build-number']
command:
- /kaniko/executor
args:
- "--dockerfile=$(params.path-to-dockerfile)"
- "--destination=$(params.container-repo-url)/$(params.container-repo-namespace)/$(params.deployment-image):$(BUILD_NUMBER)"
- "--context=dir:///workspace/source/$(params.path-to-context)"
- name: print-full-image-path
image: bash:latest
env:
- name: BUILD_NUMBER
valueFrom:
fieldRef:
fieldPath: metadata.annotations['devops.cloud.ibm.com/build-number']
    script: |
      #!/usr/bin/env bash
      echo -n "$(params.container-repo-url)/$(params.container-repo-namespace)/$(params.deployment-image):${BUILD_NUMBER}" | tee /tekton/results/full-image-path
Generated Environment Variables

When executing a Tekton pipeline, IBM Cloud generates some dynamic values and adds them as annotations in the namespace where the Tekton pipeline is executing. You can view all of the properties in the IBM Cloud Continuous Delivery documentation. This task references the devops.cloud.ibm.com/build-number annotation as a means of generating a unique tag for the image that was just built:

env:
  - name: BUILD_NUMBER
    valueFrom:
      fieldRef:
        fieldPath: metadata.annotations['devops.cloud.ibm.com/build-number']

Project Kaniko

The build-and-push step uses the gcr.io/kaniko-project/executor image. This image is part of the kaniko project, which builds container images from a Dockerfile inside a container or Kubernetes cluster, without requiring a Docker daemon.

Authenticating to the container repo

In the project-trigger-template, the ibm-cr-secret secret was defined with an annotation of tekton.dev/docker-0: $(params.container-repo-url), which should resolve to tekton.dev/docker-0: us.icr.io if no changes were made. Because this task pushes to that same base URL, the ibm-cr-secret attached to the ServiceAccount in the service-account definition is used to authenticate the push.

Results

Tasks can emit results for use by the pipeline or by other tasks in the pipeline. In this case, the full path to the newly published image is emitted for use by a later task. Results are referenced in a pipeline by $(tasks.[name-of-task-in-pipeline].results.[name-of-result]). Therefore, the previous result resolves to $(tasks.build-image-send-to-cr.results.full-image-path), as seen in the project-pipeline definition file.
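To isolate the results pattern from the rest of the task, here is a minimal, illustrative sketch; the task and result names are made up for this example and are not part of the pipeline:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: emit-greeting
spec:
  results:
    - name: greeting
      description: A small value made available to later tasks
  steps:
    - name: write-result
      image: bash:latest
      script: |
        #!/usr/bin/env bash
        # Writing to /tekton/results/<result-name> publishes the result
        echo -n "hello" | tee /tekton/results/greeting

A later task in the same pipeline could then consume the value as $(tasks.<pipeline-task-name>.results.greeting) in one of its parameters.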
update-yaml-file
This is a simple task for updating values in a YAML file. By default, the task updates a deployment.yaml file. In the context of this pipeline, it replaces the image placeholder in the deployment file with the full image path emitted by the build-image-and-push-image task, so that Kubernetes pulls the newly built image.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: update-yaml-file
spec:
params:
- name: path-to-context
description: The path to the build context, used by Kaniko - within the workspace
default: .
- name: path-to-deployment-file
description: The path to the YAML file to deploy within the Git source
default: deployment.yaml
  - name: placeholder-name
    description: Placeholder in the YAML file that is to be replaced
  - name: replacement-value
    description: The value that will replace the placeholder
workspaces:
- name: source
steps:
- name: update-yaml
image: alpine:3.12
command: ["sed"]
args:
- "-i"
- "-e"
- "s;$(params.placeholder-name);$(params.replacement-value);g"
- "$(workspaces.source.path)/$(params.path-to-context)/$(params.path-to-deployment-file)"
No additional concepts are introduced by this task. For context, the sketch below shows where the placeholder that it replaces might live.
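Here is a minimal, hypothetical sketch of a deployment.yaml that uses the default IMAGE placeholder; the actual file in the storm-tracker project may differ:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: storm-tracker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: storm-tracker
  template:
    metadata:
      labels:
        app: storm-tracker
    spec:
      containers:
        - name: storm-tracker
          # The update-yaml-file task replaces IMAGE with the full image path,
          # for example us.icr.io/living-on-the-cloud/storm-tracker:<build-number>
          image: IMAGE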
deploy-image-to-ibm-cloud
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
name: deploy-image-to-ibm-cloud
spec:
params:
- name: path-to-context
description: The path to the build context, used by Kaniko - within the workspace
default: .
- name: path-to-deployment-file
description: The path to the YAML file to deploy within the Git source
default: deployment.yaml
- name: name-of-cluster
description: Name of cluster to deploy image to
- name: cluster-region
description: Region where cluster is located
- name: api-url
description: API URL for interacting with IBM Cloud
default: cloud.ibm.com
- name: cluster-namespace
description: The namespace being used within the k8s cluster
default: default
workspaces:
- name: source
steps:
- name: deploy-app
image: ibmcom/pipeline-base-image:2.7
env:
- name: IBMCLOUD_API_KEY
valueFrom:
secretKeyRef:
name: secure-properties
key: apikey
command: ["/bin/bash", "-c"]
args:
- set -e -o pipefail;
ibmcloud login -a $(params.api-url) -r $(params.cluster-region);
export IKS_BETA_VERSION=1;
ibmcloud ks cluster config -c $(params.name-of-cluster);
kubectl apply -n $(params.cluster-namespace) -f $(workspaces.source.path)/$(params.path-to-context)/$(params.path-to-deployment-file);
This task makes use of ibmcom/pipeline-base-image. This is a Docker image maintained by the IBM Cloud team that is pre-installed with many tools needed for interacting with IBM Cloud, such as the IBM Cloud CLI. To see a full list of pre-installed tools and changes between versions, visit the IBM Cloud Continuous Delivery documentation about working with versioned base images. You can also find specially recognized parameters within the Environment properties and resources section of the documentation.
Secure properties
When a secure property is defined as an environment property, as you did in Step 10, it is automatically written to a Kubernetes Secret named secure-properties. A non-secure property is instead written to a ConfigMap named environment-properties.

In deploy-image-to-ibm-cloud, the secure property apikey, which you defined earlier, is retrieved from secure-properties and passed into the container through the IBMCLOUD_API_KEY environment variable. The IBM Cloud CLI in the ibmcom/pipeline-base-image container image recognizes this variable and uses it to log in as an API user.
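For comparison, a step could read a non-secure property from the environment-properties ConfigMap in the same way. This is a sketch using a hypothetical property named log-level:

env:
  - name: LOG_LEVEL
    valueFrom:
      configMapKeyRef:
        # Non-secure environment properties land in this ConfigMap
        name: environment-properties
        key: log-level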
Tekton catalogs
As I mentioned previously, a central architectural goal within Tekton is reusability. A benefit of this decision is that Tekton resources can be shared not only within an organization, but also publicly across organizations. There are a number of publicly available Tekton catalogs that provide solutions for common problems, such as cloning a Git repo, publishing to a container repository, and deploying an application to a Kubernetes cluster. In your pipeline, the git-clone task comes from a public Tekton catalog.

When building a pipeline in Tekton, consider searching Tekton catalogs for the tasks you want to execute. This will not only save you time, but the tasks will probably work better too. Here are a few published Tekton catalogs for your reference:

- Tekton community catalog: https://github.com/tektoncd/catalog
- IBM Cloud DevOps Open-Toolchain Tekton Catalog: https://github.com/open-toolchain/tekton-catalog
- Living on the Cloud Tekton Catalog: https://github.com/IBM/living-on-the-cloud-tekton-catalog
Why automate?
As you learned from this tutorial, automating deployments is not a trivial task. It requires you to learn new tools, such as Tekton, Jenkins, GitHub Actions, or another framework. It also requires a fair bit of effort to understand all of the tasks involved in deploying an application and to find a reliable way to script them. Let's take a look at some of the benefits of automated deployment beyond simply deploying code faster.
Assembly instructions included
A key theme in this Living on the cloud series is decreasing friction in your development experience. A common source of friction, at least in my experience as a developer, is when key elements and information about an application are spread across many locations. This slows down your ability to understand how an application functions and is used. It also makes it difficult to track changes to that application.
So far, you have the Dockerfile and the deployment.yaml file located in the same repository as your code. The Dockerfile describes the runtime of the application, and the deployment.yaml describes how the application should be configured within the Kubernetes cluster. In this tutorial, you added the tekton-build.yaml file to describe how the application should be built, tested, and deployed.
Forthcoming tutorials will continue along this theme by looking at automated documentation generation. While it will never be possible to contain all of the information about an application in a single location, you can improve your development experience by keeping much of the description and behavior of the application in the same place.
Meaningful defaults
Spring Boot rapidly gained popularity by providing default opinions on how a Spring application should be built, which greatly reduced the time and complexity of building one. Similarly, by providing useful default values in Tekton files, you can reduce the time and complexity of setting up a Tekton pipeline, while also documenting how an application should be built. This was done in the Tekton resources defined within tekton-build.yaml, the TriggerTemplate, and the Pipeline. Even in the tasks, default values were provided where possible.
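The pattern in miniature, using two parameter shapes taken from the TriggerTemplate above (the comments are mine):

params:
  # A meaningful default: most runs never need to override this value
  - name: cluster-region
    description: The region where the cluster resides
    default: us-south
  # No default: this value changes on every run, so the TriggerBinding must supply it
  - name: commit-id
    description: The revision to build and deploy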
Automation is not (just) about saving time and money
When organizations want to automate their deployment processes, typical motivating factors are to reduce the time and effort (labor) involved when deploying new production changes. Computers are able to execute actions much faster than their human counterparts, which allows deployments to complete in minutes instead of the hours that it would take a team of humans to complete.
While migrating to an automated deployment process provides the significant benefits of faster and cheaper deployments, they are not the only, and possibly not even the most significant, benefits. Two of the less talked about benefits are that automated deployments are much more auditable and reproducible than manual deployments.
Auditability and reproducibility
Humans are not particularly good at performing repetitive tasks, especially complicated ones. The lack of stimulation can lead people to execute them on autopilot, so steps get skipped, run in the wrong order, or run with the wrong values. When a mistake happens, it might be difficult to track down. The person who made the mistake might not even be aware of it; even if they are, the mistake might not be easy to find, which makes it harder to investigate why a deployment failed or why an application is not performing as expected in production.
When automating a deployment, tasks are not only performed faster than any human can, they are also more likely to be performed the same way every time. And when problems do occur, you have the logs and the code used to configure the CI/CD pipeline available for inspection to determine when and why a deployment failed. This makes deployment failures easier to investigate and provides greater confidence in an implemented fix.
Perhaps most important of all, auditability and reproducibility are the qualities that move CI/CD from a practice used by startups or for non-mission-critical applications into a competitive advantage for organizations in even the most heavily regulated industries, or for organizations that deploy extremely important applications.
An organization in a tightly regulated industry might first wince at the thought of taking humans out of the loop. But highly auditable and reproducible builds often do a far superior job of fulfilling the goals and requirements of regulations than humans could ever accomplish.
Shift left
Finally, deployment automation is part of a virtuous cycle with an increasingly popular concept called shift-left testing. By moving testing to earlier in the development lifecycle, you can reduce the number and cost of software defects. Automated testing is a personal passion of mine.
Conclusion
Automating deployments is a key requirement for organizations that hope to successfully migrate to the cloud. Cloud platforms offer enormous flexibility and computational power, but your organization can only begin to truly take advantage of these new capabilities when your teams spend their time building applications instead of manually deploying them.