Deploy a serverless API with Tekton and Terraform

Companies often find it hard to deliver software to customers because of excessive manual work or a lack of automation and consistency in their operations. This is where continuous integration (CI) and continuous delivery (CD) practices can make your work a little easier.

The IBM Cloud Continuous Delivery service has two types of delivery pipelines: classic and Tekton. Classic delivery pipelines are created graphically, with the status embedded in the pipeline diagram. In contrast, Tekton delivery pipelines are defined in YAML files that describe pipelines and their dependencies as a set of Kubernetes resources.

This tutorial provides you with a quick start on how to use Tekton pipelines to deploy some serverless IBM Cloud Functions actions. As a bonus, you use IBM Cloud Schematics automation to create some resources that are necessary for the actions to work.

About Tekton

Tekton is an open source framework for creating CI/CD systems that is designed to run on top of Kubernetes and take full advantage of it. Tekton provides you with the following benefits:

  • Flexibility. Tekton entities are fully customizable, allowing you to define a catalog of building blocks for your development team to use in a wide variety of scenarios.
  • Reusability. Since Tekton resources are defined in portable YAML files, anyone within your organization can use a defined pipeline or a set of tasks from different pipelines. This prevents duplication.
  • Expandability. The Tekton Catalog is a community-driven repository; you can create or expand your pipelines based on catalog components.

At the time of writing this tutorial, Tekton consists of the following components:

  • Pipelines
    Basic building blocks of a CI/CD workflow.
  • Triggers
    Event triggers for the workflow.
  • CLI
    Command-line interface to interact with the workflow.
  • Dashboard
    Web-based UI for the pipelines.
  • Catalog
    Repository with Tekton building blocks such as Tasks and Pipelines.
  • Hub
    Web-based graphical interface for accessing the Tekton Catalog.
  • Operator
    Kubernetes Operator that you can use to install, update, and remove Tekton projects on your Kubernetes cluster.

To follow this tutorial, you must be familiar with the following Tekton concepts:

  • A Pipeline is a collection of tasks; each task in a pipeline can use the output of a previously executed task.
  • A Task is a collection of steps; each step invokes a specific build tool by using a set of inputs and produces a set of outputs that the next step can use. A minimal sketch of these concepts follows the diagram below.

Diagram of Tekton pipelines, tasks, and steps
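
To make these concepts concrete, the following is a minimal, hypothetical sketch of a Task with two steps and a Pipeline that runs it. The names say-hello and demo-pipeline are illustrative only and are not part of this tutorial's repository:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: say-hello                 # hypothetical task name
spec:
  steps:
    - name: greet                 # first step
      image: ubuntu
      command: ["/bin/bash", "-c"]
      args: ["echo 'Hello from step one'"]
    - name: report                # second step; steps in a task run in sequence
      image: ubuntu
      command: ["/bin/bash", "-c"]
      args: ["echo 'Done'"]
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: demo-pipeline             # hypothetical pipeline name
spec:
  tasks:
    - name: hello
      taskRef:
        name: say-hello           # the pipeline references the task by name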

In Tekton, you run a pipeline by creating a resource called PipelineRun, which instantiates the pipeline and executes all of its tasks. On IBM Cloud, you run your pipelines by using Triggers. Instead of defining a PipelineRun yourself, you define the following resources, which create the PipelineRun for you when an event occurs:

  • TriggerTemplate
    Custom resource that templates other resources, such as a PipelineRun; parameters can be substituted within the resource template.
  • TriggerBinding
    Custom resource that binds against incoming events. It captures fields from an event, stores them as parameters, and applies them to the TriggerTemplate.
  • EventListener
    Custom resource that allows the pipeline to process incoming HTTP events with JSON payloads.

So, how does this work? Your EventListener receives the incoming event, the TriggerBinding extracts parameters from that event, and the TriggerTemplate uses those parameters to create the PipelineRun.
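
To illustrate, the following is a minimal, hypothetical sketch of how the three resources fit together, written in the event-parameter style that IBM Cloud Tekton pipelines use. The demo-* names and the branch field in the event payload are assumptions for illustration, not definitions from this tutorial's repository:

apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: demo-binding
spec:
  params:
    - name: git-branch
      value: $(event.branch)        # captured from the incoming JSON payload
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: demo-template
spec:
  params:
    - name: git-branch
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun             # the resource that actually runs the pipeline
      metadata:
        name: demo-run-$(uid)       # $(uid) gives each run a unique name
      spec:
        pipelineRef:
          name: demo-pipeline
        params:
          - name: git-branch
            value: $(params.git-branch)
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: demo-listener
spec:
  triggers:
    - binding:
        name: demo-binding          # extracts the parameters
      template:
        name: demo-template         # creates the PipelineRun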

Prerequisites

  • An IBM Cloud account. If you don’t already have an account, you can easily register for one.
  • A GitHub account.

Estimated time

You should be able to complete this tutorial within 30 minutes.

Steps

  1. Fork the GitHub repository
  2. Deploy the resources with IBM Cloud Schematics
  3. Create and configure the IBM Cloud Continuous Delivery service

1. Fork the GitHub repository

Go to the GitHub repository for this tutorial and click Fork to copy the code to your own GitHub account.

The GitHub repository has the following structure:

  • .tekton
    Folder that includes the files required for the pipeline to work.
  • Functions
    Parent folder for the CRUD actions for a sample movie tickets database (create, update, list, and delete).
  • Terraform
    Folder that includes the Terraform template that you deploy on IBM Cloud Schematics.

2. Deploy the resources with IBM Cloud Schematics

First, you must deploy some resources for the actions to work properly. To do this, you deploy a Terraform template with IBM Cloud Schematics. This Terraform template creates:

  • An IBM Cloudant instance and database
  • Two resource keys: a manager key that IBM Cloud Schematics needs to create the database, and a writer key that the functions use to work with the database
  • An IBM Cloud Functions package to group the actions that you deploy
  • A binding to the Cloudant database so that the actions can use its credentials as default parameters

Flow diagram of IBM Cloud Schematics

Follow these steps to deploy the Terraform template with IBM Cloud Schematics:

  • Open the IBM Cloud Schematics workspaces dashboard.

  • Click Create workspace.

  • To import the Terraform template, in the GitHub, GitLab, or Bitbucket repository URL field, enter the URL path to the Terraform folder within the GitHub repository fork that you created in Step 1.

    If your repository is private, enter your personal access token in the corresponding field.

    From the Terraform version list, select terraform_v0.13.

    Click Save template information.

    Screen capture of Import your Terraform template page

  • Enter the following parameter values based on your own IBM Cloud account information and then click Save.

  • ibmcloud_api_key
    Add your IBM Cloud API key or create a new one by following the Creating an API key instructions.
  • basename
    Enter any prefix for your resources.
  • resource_group
    Keep as default or create a new resource group by following the Creating a resource group instructions.
  • namespace
    Add the name of your current IBM Cloud Functions namespace. If you don’t have one, follow the Creating an IAM-based namespace instructions.
  • region
    Enter us-south for this tutorial.
  • cloudant_plan
    Enter lite to avoid costs for this tutorial.
  • package_name
    Add any value.
  • organization
    Enter your Cloud Foundry organization name or create a new one by following the Creating orgs in the console instructions.
  • space
    Enter your Cloud Foundry organization space (subgroup) or create a new one by following the Creating spaces in the console instructions.

Screen capture of sample account parameters for the Terraform template

  • Go to the Settings tab and click Generate plan. This will help you to understand what is going to be created, updated, or deleted.

  • When you see that a new activity is executing on your workspace Activity page, click View log to get more information.

    Screen capture of the workspace Activity page

  • In the logs, you should see that four resources will be created, as the following screen capture demonstrates.

    Screen capture of log output that shows the Terraform plan generation status

  • When you return to the Activity list, click Apply Plan to start creating the resources. When the creation is complete, the logs show that all of the resources were created.

    Screen capture of log output that shows the Terraform apply status

3. Create and configure IBM Cloud Continuous Delivery

Have you ever worked with a single code repository that contains several components, where you want to build, test, or deploy only the components that changed instead of rebuilding everything on every change? Executing the whole build all over again can be time-consuming and may not be necessary.

In this tutorial scenario, you have four Node.js functions ready to be deployed on IBM Cloud Functions. But you don’t want to build and deploy all of them every time. So, this pipeline checks each folder for code changes before building and deploying.

Flow diagram of the continuous delivery pipeline between the GitHub repository and IBM Cloud Functions

This pipeline contains several tasks, including repository clone, build, and deploy, which are described in the following sections.

Repository clone

The repository clone task performs Git cloning to bring the code to the workspace, which makes the code available to the next tasks.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: repository-clone
spec:
  workspaces:
    - name: task-workspace
      mountPath: /working
  params:
    - name: app-dirs
    - name: git-repository
    - name: git-branch
  steps:
    - name: execute-script
      image: ibmcom/pipeline-base-image:2.9
      envFrom:
        - configMapRef:
            name: environment-properties
        - secretRef:
            name: secure-properties
      env:
        - name: appdirs
          value: $(params.app-dirs)
        - name: GIT_BRANCH
          value: $(params.git-branch)
        - name: GIT_REPO
          value: $(params.git-repository)
      command: ["/bin/bash", "-c"]
      args:
        - |
          cd /working
          echo "Cloning git repository"
          # get the right repo and branch
          if [ -z "$GIT_BRANCH" ]; then
            git clone -q $GIT_REPO .
          else
            git clone -q -b $GIT_BRANCH $GIT_REPO .
          fi

Build

The build task only performs the build, by using webpack. Notice that it receives two parameters: APP_BASE, which is the base folder of the functions, and APP_FOLDER, which is the folder of the function itself. The task uses the git log command to check whether the folder changed in the latest commit. If the folder changed, the task performs the build and exports a new environment variable set to true; if not, the variable is set to false. Finally, that variable is written to the task's is_changed result.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-function
spec:
  results:
    - name: is_changed
      description: app has changed
  workspaces:
    - name: source
      mountPath: /working
  params:
    - name: app-base
    - name: app-folder
  steps:
    - name: execute-script
      image: ibmcom/pipeline-base-image:2.9
      envFrom:
        - configMapRef:
            name: environment-properties
        - secretRef:
            name: secure-properties
      env:
      - name: APP_BASE
        value: $(params.app-base)
      - name: APP_FOLDER
        value: $(params.app-folder)
      command: ["/bin/bash", "-c"]
      args:
        - |
          cd /working
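          # Check whether the latest commit touched any file in $APP_FOLDER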
          if git log --format= -n 1 --name-only | grep -qw $APP_FOLDER; then
            echo "Folder changed"
            echo "Installing dependencies"
            cd $APP_BASE/$APP_FOLDER
            npm install
            echo "Building the function using webpack"
            npm run build
            export CHANGED_FOLDER=true
          else
            echo "Folder not changed"
            export CHANGED_FOLDER=false
          fi

          printf $CHANGED_FOLDER | tee $(results.is_changed.path)

Deploy

The deploy task logs in to IBM Cloud through the CLI and executes the deployment script, which deploys the webpack build to IBM Cloud Functions.

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy-app
spec:
  workspaces:
    - name: task-workspace
      mountPath: /working
  params:
    - name: target-region
    - name: package-name
    - name: function-name
    - name: space
    - name: organization
    - name: app-base
    - name: app-folder
  steps:
    - name: execute-script
      image: ibmcom/pipeline-base-image:2.9
      envFrom:
        - configMapRef:
            name: environment-properties
        - secretRef:
            name: secure-properties
      env:
      - name: REGION
        value: $(params.target-region)
      - name: PACKAGE_NAME
        value: $(params.package-name)
      - name: FUNCTION_NAME
        value: $(params.function-name)
      - name: ORG
        value: $(params.organization)
      - name: SPACE
        value: $(params.space)
      - name: APP_BASE
        value: $(params.app-base)
      - name: APP_FOLDER
        value: $(params.app-folder)
      - name: PIPELINE_APIKEY
        valueFrom:
          secretKeyRef:
            name: secure-properties
            key: apikey
      command: ["/bin/bash", "-c"]
      args:
        - |
          cd /working
          echo "Deploying to IBM Functions"
          ibmcloud login -a cloud.ibm.com -r $REGION -o $ORG -s $SPACE --apikey $PIPELINE_APIKEY
          source ./scripts/deploy-function.sh

Pipeline

If you check the complete pipeline file, you see that it contains many tasks, each of which references a task that is defined in the tasks file. For example, in the create-functions set of tasks, the build-create task is based on the build-function task explained earlier. The only difference is that, in this file, you assign values to the parameters and conditions for those tasks to run.

  - name: build-create
    taskRef:
      name: build-function
    runAfter:
      - git-repo-clone
    params:
      - name: app-base
        value: "functions"
      - name: app-folder
        value: "create-tickets"
    workspaces:
      - name: source
        workspace: pipeline-ws

The deploy-create task is based on the deploy-app task, but it only runs when the is_changed result of the build-create task is true. So, if the git log command doesn’t show that the folder changed, this task does not execute.

  - name: deploy-create
    taskRef:
      name: deploy-app
    when:
      - input: $(tasks.build-create.results.is_changed)
        operator: in
        values: ["true"]
    workspaces:
    - name: task-workspace
      workspace: pipeline-ws
    params:
      - name: package-name
        value: $(params.package-name)
      - name: function-name
        value: "create-tickets"
      - name: target-region
        value: $(params.region)
      - name: space
        value: $(params.space)
      - name: organization
        value: $(params.organization)
      - name: app-base
        value: "functions"
      - name: app-folder
        value: "create-tickets"

To configure the pipeline, follow these steps:

  • Open the IBM Cloud DevOps toolchains dashboard.

  • From the Location list, select the region where you want to create your toolchain and then click Create toolchain.

  • On the Create a Toolchain page, select the Build your own toolchain option.

    Screen capture of the Create a Toolchain page with the Build your own toolchain option highlighted

  • Enter the name for your toolchain and then click Create.

    Screen capture of the Create tab of the Build your own toolchain page

  • On your toolchain page, click Add tool.

    Screen capture of the Add tool button

  • On the Add tool integration page, find and select the GitHub option.

    Screen capture of the Add tool integration page with the GitHub option highlighted

  • On the Configure GitHub page, update the fields with the specifications of your GitHub repository and click Create Integration.

    Screen capture of the Configure GitHub page

  • Back on your toolchain page, click Add tool again.

  • On the Add tool integration page, find and select the Delivery Pipeline option.

    Screen capture of the Add tool integration page with the Delivery Pipeline option highlighted

  • On the Configure Delivery Pipeline page, enter a name for the delivery pipeline, select Tekton from the Pipeline type list, and click Create Integration.

    Screen capture of the Configure Delivery Pipeline page

  • Back on your toolchain page, click Delivery Pipeline to configure the Tekton pipeline.

    Screen capture of an example toolchain page with the Delivery Pipeline box highlighted

  • To configure the Tekton pipeline, you must first add its definitions. Click Add, select your configured repository, and enter the path to the definition files (.tekton).

  • Click Add.

    Screen capture of the Tekton Pipeline Configuration page with the Add button highlighted

  • Look at the content of the files. If you are comfortable with them, click Save.

    Screen capture of the Consolidated Pipeline Definition Viewer page

  • Go to the Triggers section and click Add trigger.

  • Select the defined EventListener and Worker.

    Screen capture of the Triggers page

  • Click Save.

  • In the Environment properties section, you must configure the following properties:

  • apikey (Secure)
    API key for the toolchain to deploy on IBM Cloud Functions.
  • git-branch (Text)
    Git branch to clone.
  • git-repository (Text)
    Git repository to clone.
  • organization (Text)
    IBM Cloud organization.
  • package-name (Text)
    Package name to deploy the actions (must be the same as what you defined within IBM Cloud Schematics in Step 2).
  • region (Text)
    IBM Cloud region.
  • space (Text)
    IBM Cloud space.

Screen capture of sample values entered on the Environment properties page

  • Click Save.

  • Click Run Pipeline. You should see all of the properties that you previously defined. Click Run to start the pipeline.

    Screen capture of sample values appearing on the Run Pipeline page

  • After a couple of seconds, you should see a new entry in the PipelineRuns section. Click that entry to view its status. You should see something similar to the following screen capture.

    Screen capture of the PipelineRuns status page with the execute-script task log open

  • Go to the IBM Cloud Functions Actions page to verify that the actions were deployed.

    Screen capture of the IBM Cloud Functions Actions page

Summary

Congratulations, you just deployed a serverless API with Tekton! In this tutorial, you learned how to configure a DevOps pipeline by using Tekton to deploy a serverless API from a monorepo. You also took advantage of IBM Cloud Schematics to automate the creation of cloud resources. Repetitive work such as application deployment and cloud infrastructure setup can become a daily burden if it is not automated. The capabilities provided by Tekton and Terraform can aid you in your transformation to the DevOps culture.

Next steps

Now that you have an application pipeline, you can go further and create an infrastructure pipeline as well. Managing your infrastructure within a pipeline helps you establish a standardized workflow to test and validate your infrastructure changes before promoting them to production.