More enterprises are embracing continuous integration and testing because it enables seamless, parallel testing. With a Jenkins pipeline, companies can automate testing so that developers can focus on other work. If you’re an enterprise developer looking to get the most out of your company’s DevOps practice, this tutorial can help you quickly become knowledgeable on building pipelines as code.

Learning objectives

In this tutorial, we will use Helm to install and configure Jenkins on the IBM Cloud Kubernetes Service (IKS), and then set it up to run CI pipelines on the same Kubernetes cluster.

The pipeline that we create will run a test stage, followed by a build stage if the test stage was successful. After both of those stages successfully complete, the container that’s built in the build stage will be pushed to the IBM Container Registry. By using the Container Registry, you have the added benefit of a built-in Vulnerability Advisor that scans for potential security issues and vulnerabilities (and it’s completely free).

Prerequisites

To complete this tutorial, you’ll need the following:

  • An IBM Cloud account with a cluster provisioned on the IBM Cloud Kubernetes Service (IKS)
  • The ibmcloud CLI and kubectl, configured to target your cluster
  • The Helm client installed locally

Estimated time

Completing this tutorial should take about 15 minutes.

Steps

  1. Install Tiller
  2. Install Jenkins
  3. Log in to Jenkins
  4. Configure Jenkins
  5. Build Docker images

1. Install Tiller

You should already have the Helm client installed, per the prerequisites, so now you need to install the server-side component of Helm, called Tiller. Tiller is what the Helm client talks to; it runs inside your cluster and manages your chart installations. (For more information on Helm, you can check out this Helm 101 repo.)

$ helm init

Running helm ls should now complete without error. If you see an error like Error: could not find a ready tiller pod, wait a little longer and try again.
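If Tiller still isn’t ready after a minute or two, you can check on it directly. helm init deploys Tiller as a deployment named tiller-deploy in the kube-system namespace, so a quick sanity check looks like this:

$ kubectl get deployment tiller-deploy --namespace kube-system
$ kubectl get pods --namespace kube-system | grep tiller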

2. Install Jenkins

Prior to installing Jenkins, you need to first create a persistent volume:

$ kubectl apply -f volume.yaml
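The contents of volume.yaml aren’t reproduced here, but as a rough sketch, a persistent volume for this setup could look something like the following. The storageClassName of jenkins-pv is the value that the Helm installation below refers to; the name, size, access mode, and hostPath backing shown here are placeholders, and your volume.yaml may use different storage entirely.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-home              # placeholder name
spec:
  storageClassName: jenkins-pv    # referenced by Persistence.StorageClass during the Helm install
  capacity:
    storage: 8Gi                  # placeholder size
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/jenkins-home      # placeholder backing storage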

Now you can install Jenkins by using the Jenkins chart from the stable repository, which is the default Helm repository.

This chart has a number of configurable parameters. For this installation, the following parameters need to be configured:

  • rbac.install – Setting this to true creates a service account and ClusterRoleBinding, which is necessary for Jenkins to create pods.
  • Persistence.Enabled – Enables persistence of Jenkins data by using a PVC.
  • Persistence.StorageClass – When the PVC is created, it requests a volume of the specified class. In this case, it is set to jenkins-pv, which is the storageClassName of the volume that was created previously. Using the same value as the class name from volume.yaml ensures that Jenkins uses the persistent volume you already created.

Install the chart with these parameters set:

$ helm install --name jenkins stable/jenkins --set rbac.install=true \
               --set Persistence.Enabled=true \
               --set Persistence.StorageClass=jenkins-pv
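The Jenkins pod can take a few minutes to pull its image and start. You can watch for a pod whose name starts with jenkins- to reach the Running state:

$ kubectl get pods --namespace default -w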

3. Log in to Jenkins

As part of the chart installation, a random password is generated and a Kubernetes secret is created. A Kubernetes secret is an object that contains sensitive data such as a password or a token. Each item in a secret must be base64 encoded. This secret contains a data item named ‘jenkins-admin-password’, which must be decoded.

The following command gets the value of that data item from the secret named ‘jenkins’ and decodes the result.

$ printf $(kubectl get secret --namespace default jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo

The chart also creates a load balancer for Jenkins. When it’s ready, you can log in with the username admin and the password from the previous step. Run the following commands to determine the login URL for Jenkins.

$ export SERVICE_IP=$(kubectl get svc --namespace default jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
$ echo http://$SERVICE_IP:8080/login

4. Configure Jenkins

Configure credentials

In order for Jenkins to be able to launch pods for running jobs, you have to configure the service account credentials.

Navigate to Manage Jenkins > Configure Jenkins > Cloud > Credentials, then select “Add.”

New credentials

Configure containers

By default, the agent pod contains just one container, the jnlp-slave. However, you are not restricted to running everything on that container. You can add any number of containers to the agent pod. For this pipeline example, you’ll need to add a NodeJS container. This is configured in Manage Jenkins > Configure Jenkins > Cloud.

NodeJS container

Create a pipeline

Pipelines are used to model a build process. An agent is created dynamically for each pipeline run. Each Jenkins agent is a single pod, which is a collection of one or more containers, and the containers that make up that pod are what you configured in the previous step.

Steps in the pipeline can be run on any of the containers in the agent pod.

First, create a new pipeline item by selecting “Pipeline” and clicking “OK.” The image below shows the new item screen with “Pipeline” selected; “Test” is the name of our pipeline. The text that follows is the pipeline definition that you enter after you click “OK” on the page in the screenshot.

Pipeline

Then use the following pipeline:

pipeline {
  agent any

  stages {
    stage('Test') {
      steps {
        container('nodejs') {
          sh "node --version"
        }
      }
    }
  }
}

This simple pipeline has a single stage named ‘Test’, which contains one step that runs a single command. The key part of the step definition is the container('nodejs') statement, which tells Jenkins to run the step in the container named ‘nodejs’ that you configured in the previous step. Each item prefixed with sh is executed in a new shell. In a real pipeline, this is where you’d do actual work, such as running unit tests.
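As an aside, if you’d rather keep the agent definition in code than in the Cloud configuration, the Kubernetes plugin also accepts an inline pod definition in the pipeline itself. Here’s a minimal sketch of the same Test pipeline written that way; the node:10-alpine image tag is just an example and not something this tutorial prescribes:

pipeline {
  agent {
    kubernetes {
      // Inline pod definition instead of a container configured under Manage Jenkins > Configure Jenkins > Cloud
      yaml """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: nodejs
    image: node:10-alpine   # example image; pick the Node.js version you need
    command: ['cat']        # keep the container idle so pipeline steps can run in it
    tty: true
"""
    }
  }
  stages {
    stage('Test') {
      steps {
        container('nodejs') {
          sh 'node --version'
        }
      }
    }
  }
}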

Add additional containers

Different steps of the pipeline can run in different containers. This allows you to do things like run tests for different parts of a codebase in language-specific containers. Because the stages execute sequentially, you could also have a ‘deploy’ stage that runs after tests pass to deploy your application, for example in a kubectl container to deploy to Kubernetes or a helm container to upgrade an existing chart, as sketched below.
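For example, a deploy stage running in a hypothetical container named ‘kubectl’ (added to the agent pod the same way as the NodeJS container) might look like this sketch; the manifest path is a placeholder:

stage('Deploy') {
  steps {
    container('kubectl') {
      // Apply the application's Kubernetes manifests; the path is a placeholder
      sh 'kubectl apply -f k8s/deployment.yaml'
    }
  }
}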

5. Build Docker images

Docker images can be built as part of the pipeline. Create another container named build by using the alpine image. The Docker socket from the host also needs to be shared with the agent containers by creating a host path volume.

Volume configuration is under Manage Jenkins > Configure Jenkins > Cloud.

Docker volume
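In pod terms, that volume configuration amounts to mounting the node’s Docker socket into the build container. A rough sketch of the equivalent pod spec, for illustration only (Jenkins generates the actual agent pods from the Cloud configuration):

apiVersion: v1
kind: Pod
metadata:
  name: jenkins-agent-example    # illustrative; not something you create by hand
spec:
  containers:
  - name: build
    image: alpine                # the build container added above
    command: ['cat']
    tty: true
    volumeMounts:
    - name: docker-socket
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock # the Docker socket on the worker node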

A new stage can then be added to the pipeline:

stage('Build') {
  steps {
    container('build') {
      sh 'apk update && apk add docker'
      sh 'docker build -t application .'
    }
  }
}

Push images to the IBM Container Registry

Finally, we can push our images to the IBM Container Registry. Pushing images from the pipeline requires an API key, which you can create with the following command (supply whatever key name you like):

$ ibmcloud iam api-key-create <key-name>

This API key can be passed into the pipeline via environment variables. In the container configuration, add a new environment variable named REGISTRY_TOKEN and set its value to the API key. Then update the build stage to log in to the registry, tag the image, and push it.

stage('Build') {
  steps {
    container('build') {
      sh 'apk update && apk add docker'
      // Log in with the API key; with an IAM API key, the username is iamapikey
      sh 'docker login -u iamapikey -p ${REGISTRY_TOKEN} registry.ng.bluemix.net'
      sh 'docker build -t application .'
      // IMAGE_REPO is expected to hold <registry host>/<namespace>, supplied as another environment variable
      sh 'docker tag application ${IMAGE_REPO}/application'
      sh 'docker push ${IMAGE_REPO}/application'
    }
  }
}

Next steps

Congratulations, you’ve completed the tutorial! So what’s next for you, now that you know how to create a Jenkins pipeline? You can extend your Jenkins knowledge by learning how to create a canary deployment with Jenkins and Istio.