Tutorial

Build and publish Docker images from a GitHub source using Red Hat OpenShift Pipelines

Achieve DevOps-style continuous deployment for your Docker images

Introduction

A pipeline is a sequence of steps that represents a software development workflow (build, test, deploy), commonly known as continuous integration/continuous deployment (CI/CD). DevOps engineers are always looking to automate this workflow to minimize human error, shorten the time to deliver software, and produce consistent software artifacts.

Red Hat OpenShift Pipelines is a CI/CD solution based on the open source Tekton project. The main objective of Tekton is to enable DevOps teams to quickly create pipelines for activities involving simple, repeatable steps. A characteristic that differentiates Tekton from earlier CI/CD solutions is that each Tekton task runs within a container created specifically for that task. This provides a degree of isolation that supports predictable and repeatable task execution, and it ensures that development teams do not have to manage a shared build server instance.
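Although this tutorial builds everything through the console, it helps to know that behind the scenes the console creates and manages Tekton resources for you. A minimal, purely illustrative Tekton task might look like the following sketch (the task name and image are hypothetical):

    apiVersion: tekton.dev/v1beta1
    kind: Task
    metadata:
      name: hello                  # hypothetical task name
    spec:
      steps:
        - name: say-hello
          # Each step runs in its own container started from this image,
          # which is what gives Tekton its per-task isolation.
          image: registry.access.redhat.com/ubi8/ubi-minimal
          script: |
            echo "Hello from an isolated container"

Everything we do in this tutorial generates resources of this kind; we simply never have to touch the YAML ourselves.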

Most OpenShift Pipelines-related blogs, articles, and how-to guides use complicated YAML files and a command-line interface (CLI) to create, deploy, and run pipelines, which many find difficult to adapt to and follow.

This tutorial takes a different approach and uses the OpenShift GUI/console; not a single YAML file or CLI command is needed. It shows you how to create a new pipeline from scratch, set up tasks to build from GitHub and publish a Docker image to quay.io (a popular image registry), and achieve continuous delivery and deployment of Docker images by automating the whole process using OpenShift Pipelines triggers and GitHub webhooks.

This tutorial is for users who want to understand OpenShift Pipelines without getting into complicated YAML and CLIs, users new to the pipelines concept, and users looking for a quick understanding of how pipelines work.

Prerequisites

Before you build and publish Docker images from a GitHub source, make sure that the following prerequisites are fulfilled:

  • Access to an OpenShift cluster running version 4.7.x or later (the pipelines feature was introduced in this version). I am using OpenShift version 4.8.xx on IBM Power Virtual Server. The steps in this tutorial should work on any OpenShift platform because the pipelines functionality is the same irrespective of the underlying hardware architecture.
  • An OpenShift cluster configured with at least one storage class (to supply storage to the pipeline tasks).
  • Familiarity with basic Git (git clone, edit code in Git web UI, and commit) operations.
  • Familiarity with quay.io (registry for storing and building container images).

Estimated time

It should take around one hour to build and publish Docker images from a GitHub source using Red Hat OpenShift Pipelines.

Steps

Perform the following steps to build and publish Docker images from a GitHub source using Red Hat OpenShift Pipelines:

  1. Install OpenShift Pipelines operator (optional).
  2. Clone Git repository.
  3. Create a new quay.io repository to publish Docker image.
  4. Create a simple pipeline to build and publish Docker image from the GitHub source code.
    4a. Create a git-clone task.
    4b. Create s2i-python task.
  5. Run the pipeline.
    5a. Fix the secret bug (optional).
    5b. Execute the pipeline.
  6. Verify Docker image creation in quay.io.
  7. Validate the Docker image created in quay.io.
  8. Automate Docker image build using OpenShift Pipeline triggers and GitHub webhooks.
    8a. Set up pipeline triggers.
    8b. Create an event listener HTTPS route URL (optional).
    8c. Set up GitHub webhooks.
  9. Verify if a GitHub code change creates a new Docker image.
  10. Validate if the new version of the Docker image works.

1. Install OpenShift Pipelines operator (optional)

(Skip this step if it's already installed in your OpenShift cluster.)

  1. Make sure you are in the Administrator persona.

    Figure 1

  2. Navigate to OperatorHub, search for pipeline, and click the Red Hat OpenShift Pipelines tile.

    Figure 2

  3. On the page that shows the different installation options, retain the default values for this tutorial and click Install.

    Figure 3

  4. The Install Operator page provides options to specify the update channel to subscribe to, the project or namespace in which the operator will be visible, and so on. For the example used in this tutorial, retain the default values and click Install.

    Figure 4

  5. Wait for the operator to be installed; this might take a few minutes.

    Figure 5

  6. Verify the installation by clicking the Installed Operators tab. The Red Hat OpenShift Pipelines operator should be listed there.

    Figure 6

  7. Notice that the Pipelines menu is now displayed.

    Figure 7

  8. Switch to Developer persona and ensure that the Pipelines menu is available there as well.

    Figure 8

Congratulations! You have successfully installed the Pipelines operator, and OpenShift Pipelines functionality is now available in your cluster.

2. Clone Git repository

  1. Fork (clone) the following Git repository; we will be using the simple Pyflask code in this GitHub repository to create a Docker image. You need your own fork because you require ownership permissions to edit code and create a webhook (covered later in this tutorial).

    GitHub repository to fork/clone: https://github.com/ocp-power-demos/s2i-pyflask-demo

  2. After forking, notice that your Git repository URL looks as follows:
    https://github.com/<your_github_username>/s2i-pyflask-demo

In this tutorial, I am using a similar GitHub repository of mine located at: https://github.com/dpkshetty/pipeline-s2i-pyflask-demo

3. Create a new quay.io repository to publish Docker image

  1. Log in to https://quay.io/ (create a new account if needed).
    My login ID is dpkshetty.
  2. Click the Repositories tab and then click Create New Repository to create a new repository to host the Docker image we will be creating in this tutorial.

    Figure 9

  3. On the resulting page, enter a name for your repository (I am using demos) and click Public to make it visible and accessible to others.

    Figure 10

  4. Scroll down, retain the default values, and click Create Public Repository.

    Figure 11

  5. Notice that your new repository (demos in my case) is listed under Repositories.

    Figure 12

    Base URL for my repository (to pull or push Docker images) is: quay.io/dpkshetty/demos

    In your case it will be quay.io/<your_username>/demos

    To push content to this repository, we need user credentials with the right permissions. In quay.io, this is achieved by creating a new robot account. Let's create one.

  6. Click your username and then click Account Settings.

    Figure 13

  7. Click the Robot Accounts icon, and then click Create Robot Account.

    Figure 14

  8. In the resulting form, enter a name (demos in my case) for your new robot account and click Create robot account.

    Figure 15

  9. In the next form, select the demos repository, select the Write permission, and click Add Permissions. By doing this, we grant write (and hence push) privileges to this robot account.

    Figure 16

  10. On the Robot Accounts page, notice that your new robot account is created successfully.

    Figure 17

Congratulations! You have successfully created a new quay.io repository and a new robot account with write (and hence push) privileges. We will use this repository and the associated robot account to publish our Docker image later in this tutorial.

4. Create a simple pipeline to build and publish Docker image from the GitHub source code

  1. Switch to the Developer persona and create a new project (named tutorial in my case).

    Figure 18 Figure 19

  2. Click Pipelines and then click Create Pipeline.

    Figure 20

  3. On the Pipeline builder view that enables you to create a new pipeline, enter a name for the pipeline (create-pyflask-image in my case).

    Figure 21

4a. Create a git-clone task

  1. From the Select Task drop-down list, select git-clone.

    Figure 22

  2. Click the git-clone task to view its properties pane on the right side of the console.

    Figure 23

  3. Update the url field in the git-clone properties pane with your GitHub repository URL (https://github.com/<your_github_username>/s2i-pyflask-demo; in my case, https://github.com/dpkshetty/pipeline-s2i-pyflask-demo). Retain the default values for the remaining fields.

    Figure 24

  4. Scroll down until you see the Workspaces section. The pipeline needs a workspace (storage area), but we haven't created one yet. Notice that the Select workspace field is disabled; that's expected!

    Figure 25

    Note that a pipeline has multiple tasks, and it needs shared or common storage to pass data between them. For example, the git-clone task copies the source code, which must be accessible to the next task (the s2i-python task, covered later in this tutorial) that builds the Docker image. A workspace provides that common storage between tasks.

  5. To create a workspace, go back to the Pipeline builder view/page (the middle pane in the browser), scroll down until you see the Workspaces section, click Add workspace, and enter a name for the workspace (my-workspace in my case).

    Figure 26 Figure 27

  6. Click the git-clone task in the pipeline, and on the properties pane, scroll down to the Workspaces section. From the output drop-down list, select my-workspace. With this, we have completed the first (git-clone) task. (A YAML sketch of this workspace wiring follows these steps.)

    Figure 28
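For reference, the workspace wiring we just completed corresponds to a fragment like the following in the pipeline YAML that the console maintains for us (illustrative only; the names match this tutorial):

    spec:
      workspaces:
        - name: my-workspace          # declared once at the pipeline level
      tasks:
        - name: git-clone
          workspaces:
            - name: output            # git-clone writes the cloned source...
              workspace: my-workspace # ...onto the shared workspace

Any task that later binds the same workspace sees the files the git-clone task left behind.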

4b. Create s2i-python task

  1. Now, let’s create the next task, s2i-python, which builds the source code into a Docker image and pushes it to the quay.io registry. Go back to the Pipeline builder view, hover the mouse pointer over the git-clone task, and click the “+” sign to the right of the git-clone task to add a new task.

    Figure 29 Figure 30

  2. A new Select Task drop-down list is created. From this list, select the s2i-python task.

    Note: In this example, we are selecting the s2i-python task because the application code in the GitHub repository is written in Python.

    Figure 31 Figure 32

  3. Notice that a new s2i-python task is created, and is placed after the git-clone task.

    Figure 33

  4. Click the s2i-python task, and in the properties pane that is displayed on the right side, enter the following values for the available fields:

    IMAGE = <URL of your quay.io repository>:latest (‘quay.io/dpkshetty/demos:latest’ in my case)

    Note: Docker image references are always of the form <name>:<tag>. The <tag> field represents a variant of an image (such as a different version, architecture, or release). Here we use the latest tag to specify that this is the latest version of the Docker image.

    Workspaces = <the workspace you created earlier, selected from the drop-down list> (‘my-workspace’ in my case).

    Retain the default values for the remaining fields.

    Figure 34

  5. Click Create in the pipeline builder view to create the pipeline with the git-clone and s2i-python tasks.

    Figure 35

    The pipeline details page is displayed.

    Figure 36

  6. In case you missed entering data for any of the fields or wish to edit them, click Actions -> Edit Pipeline. On the Pipeline builder page, select the task you wish to edit and update its properties. After completing the updates, click Save to confirm the changes. (For reference, a YAML sketch of the completed pipeline follows these steps.)

    Figure 37 Figure 38
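Behind the scenes, the pipeline you just built is roughly equivalent to the following Tekton resource (an illustrative sketch; the console generates and owns the real YAML, and the exact ClusterTask fields in your cluster may differ slightly):

    apiVersion: tekton.dev/v1beta1
    kind: Pipeline
    metadata:
      name: create-pyflask-image
    spec:
      workspaces:
        - name: my-workspace
      tasks:
        - name: git-clone
          taskRef:
            name: git-clone
            kind: ClusterTask
          params:
            - name: url
              value: https://github.com/<your_github_username>/s2i-pyflask-demo
          workspaces:
            - name: output              # the cloned source lands here
              workspace: my-workspace
        - name: s2i-python
          taskRef:
            name: s2i-python
            kind: ClusterTask
          runAfter:
            - git-clone                 # build only after the clone finishes
          params:
            - name: IMAGE
              value: quay.io/<your_username>/demos:latest
          workspaces:
            - name: source              # reads the source from the same workspace
              workspace: my-workspace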

5. Run the pipeline

  1. On the Pipeline details page, click Actions -> Start.

    Figure 39

  2. On the Start Pipeline page, specify the following values:

    • In the my-workspace field, select VolumeClaimTemplate, which automatically creates a PersistentVolumeClaim (PVC) of 1 GiB and provisions storage for our workspace area. (A sketch of the resulting PipelineRun appears after these steps.)
    • In the Advanced options section, expand Show Credential options.

      Figure 40

    • We need to provide the quay.io credentials for the PipelineRun job to be able to access our quay.io account and push the Docker image. The credentials are provided as part of an OpenShift secret. To add the secret, click Add Secret.

      Figure 41

    • Enter a name for the secret (quay-demos in my case), and in the Server URL and Registry server address fields, enter quay.io.

      Retain the default values for the remaining fields.

      Figure 42

    • Navigate to your quay.io robot account page (created in the ‘Create a new quay.io repository to publish Docker image’ step above), click the robot account (dpkshetty+demos in my case), and copy the username and password from the quay.io page into the Username and Password fields in the OpenShift console. Click the tick mark symbol to save the secret.

      Figure 43 Figure 44 Figure 45

      Notice that the newly created secret appears on the Start Pipeline page.

      Figure 46

  3. Ideally, at this point you would click Start to run the pipeline. But at the time of writing this tutorial, OpenShift 4.8.x has a small bug in the secret creation process: the secret is malformed and needs to be corrected before we can run the pipeline. So for now, click Cancel (don’t worry, the secret you created is retained) and return to the Pipelines page.

    Figure 47
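For reference, clicking Start ultimately submits a PipelineRun resource on your behalf, roughly like the following sketch (illustrative; the console fills in the details, including the 1 GiB volume claim template backing our workspace):

    apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      generateName: create-pyflask-image-   # each run gets a generated name
    spec:
      pipelineRef:
        name: create-pyflask-image
      workspaces:
        - name: my-workspace
          volumeClaimTemplate:             # a fresh PVC is created per run
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 1Gi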

5a. Fix the secret bug (optional)

(This step is optional and can be skipped if your OpenShift cluster doesn’t have the secret bug)

  1. Click the Secrets tab.

    Figure 48

  2. Search for your secret (quay-demos in my case).

    Figure 49

  3. For your secret entry, click Edit Secret.

    Figure 50

  4. Notice that there are multiple redundant and malformed entries in the secret file (that’s the bug). All entries except the last have empty Username and Password fields and an incomplete Registry server address (the first entry has q, the second has qu, and so on).

    Figure 51 Figure 52

  5. In this example, only the last entry (scroll down to the end of the page) is valid; all the others are invalid. Click Remove credentials for each incorrect entry.

    Figure 53

  6. Ensure that only one valid entry remains, with all the fields populated as shown in the following screen capture. Click Save. (A sketch of a well-formed registry secret follows.)

    Figure 54
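For reference, a well-formed registry credential secret follows Tekton’s documented basic-auth convention, sketched below. The exact shape the console produces is an assumption on my part (it may instead generate a kubernetes.io/dockerconfigjson secret), but the key point holds either way: one clean entry, scoped to quay.io.

    apiVersion: v1
    kind: Secret
    metadata:
      name: quay-demos
      annotations:
        tekton.dev/docker-0: quay.io     # tells Tekton which registry these credentials are for
    type: kubernetes.io/basic-auth
    stringData:
      username: <robot account username>   # for example, dpkshetty+demos
      password: <robot account token>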

5b. Execute the pipeline

  1. Click the Pipelines tab to view the Pipelines page.

    Figure 55

  2. Click the three vertical dots icon next to the pipeline and click Start.

    Figure 56

  3. From the my-workspace drop-down list, select VolumeClaimTemplate and click Show Credential options. Ensure that the previously created secret (quay-demos in my case) exists. Click Start to run the pipeline.

    Figure 57

  4. On the PipelineRun details page, notice that the first task (git-clone) has started.

    Figure 58

  5. On the Logs page, view the logs for each step being executed as part of the PipelineRun job.

    Figure 59

  6. Wait for both tasks to complete. After successful completion, notice that the status of the PipelineRun is Succeeded.

    Figure 60

  7. For each task, OpenShift creates a new pod and runs the task’s steps inside that pod. Click the TaskRuns tab to view the task runs associated with this PipelineRun and the pods associated with each task.

    Figure 61

6. Verify Docker image creation in quay.io

  1. Navigate to quay.io and click Repositories.

    Figure 62

  2. Click the repository name (dpkshetty/demos in my case).

    Figure 63

  3. Click the Tags icon.

    Figure 64

  4. Notice that a new Docker image with the latest tag was created a few minutes ago. This is the tag we entered in the IMAGE field of the s2i-python task’s properties (see the ‘Create a simple pipeline to build and publish Docker image from the GitHub source code’ section).

    Figure 65

Congratulations! You have successfully created a Docker image in quay.io from the GitHub source code using OpenShift Pipelines.

7. Validate the Docker image created in quay.io

  1. To create a new application or pod in OpenShift using this newly created Docker image, navigate to your OpenShift console, click +Add and then click Container Images.

    Figure 66

  2. On the Deploy Image page, enter the required value in the following field:

    • Image name from external registry: Your quay.io Docker image URL (quay.io/dpkshetty/demos:latest in my case). After entering the URL, press the Tab key and wait for OpenShift to validate it. You should see the Validated message below the URL, which confirms that OpenShift can view and access the Docker image.

      Figure 67

  3. In the General section, enter demos-app in the Application Name field, and demos in the Name field.

  4. In the Resources section, select Deployment as the resource type to generate, and in the Advanced options section, select the Create a route to Application checkbox.

    Figure 68

  5. Optionally, specify the options for a secure route (refer to the following note for details).

    Note: The steps to add a secure route can be skipped if you are using an OpenShift cluster where HTTP routes are allowed. In my case, OpenShift on IBM Power Virtual Server mandates the use of HTTPS (secure HTTP) routes, and plain HTTP routes are not supported. Hence, I need to perform the following steps. If unsure, check with your cluster administrator for further details. (A YAML sketch of the resulting secure route appears after these steps.)

    • Expand Show advanced Routing options.

      Figure 69

    • Select the Secure Route checkbox.

      Figure 70

    • From the TLS Termination drop-down list, select Edge, and from the Insecure traffic drop-down list, select None.

      Figure 71

  6. Click Create.

    Figure 72

  7. On the Topology view, you can see an icon for your application being deployed. Click the deployment (D demos), and in the corresponding properties pane, click the Resources tab and wait for the pod to be in the Running state.

    Figure 73

  8. In the Routes section, click the location URL.

    Note: Depending on how your OpenShift cluster is configured, you may have an HTTPS or HTTP route (as explained earlier).

    Figure 74

  9. After successful completion, notice the welcome message from the Pyflask app in your browser window.

    Figure 75

  10. Also, check the other endpoints (such as /test and /version, by appending them to the end of the URL) to validate that the entire application is working as expected.

    Figure 76 Figure 77
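For reference, the secure route created through the console corresponds roughly to the following YAML (illustrative; the targetPort name is an assumption, so check the actual port name on your Service if you ever recreate this by hand):

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: demos
    spec:
      to:
        kind: Service
        name: demos
      port:
        targetPort: 8080-tcp                 # assumed port name on the demos Service
      tls:
        termination: edge                    # TLS terminates at the router
        insecureEdgeTerminationPolicy: None  # plain HTTP requests are rejected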

Congratulations! The Docker image you have created using OpenShift Pipelines is working successfully. You can now share your quay.io Docker image URL (‘quay.io/dpkshetty/demos:latest’ in my case) with anyone in the world to create or run applications from your Docker image.

8. Automate Docker image build using OpenShift Pipeline triggers and GitHub webhooks

Triggers capture external events and process them to extract key pieces of information.

To fully automate the pipeline we created earlier, a PipelineRun job must start automatically whenever new code changes land in the Git repository.

Triggers automate this process by capturing and processing each change event and starting a PipelineRun job that builds a new image with the latest changes from your Git repository.

8a. Set up pipeline triggers

  1. In the Pipelines view, click the three vertical dots icon next to the pipeline, and click Add Trigger.

    Figure 78 Figure 79

  2. On the Add Trigger page, enter the values for the following fields:

    • From the Git Provider type drop-down list, select github-push.
    • From the my-workspace drop-down list, select VolumeClaimTemplate.

    Then click Add.

    Figure 80

  3. On the Pipelines page, click the pipeline.

    Figure 81

  4. On the Pipeline details page, you can see that the event listener HTTP route URL has been created. The event listener is the component of the pipelines trigger that listens for external events.

    Figure 82

  5. Copy the HTTP URL (applicable only if your OpenShift cluster supports HTTP routes; if not, navigate to the ‘Create an event listener HTTPS route URL’ section and copy the HTTPS URL) and save it for later use. This URL (HTTP or HTTPS, as applicable) will be used as the payload URL in the GitHub webhooks setup (covered in a subsequent section). (A condensed YAML sketch of the trigger resources follows.)
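For reference, Add Trigger generates several Tekton Triggers resources behind the scenes. The following is a condensed, illustrative sketch (the resource names are hypothetical, and fields such as serviceAccountName are omitted); the EventListener is what the route URL exposes, and the template stamps out a new PipelineRun for each push event:

    apiVersion: triggers.tekton.dev/v1alpha1
    kind: EventListener
    metadata:
      name: event-listener          # hypothetical name; yours will differ
    spec:
      triggers:
        - bindings:
            - kind: ClusterTriggerBinding
              ref: github-push      # extracts fields from the GitHub push payload
          template:
            ref: trigger-template   # hypothetical name
    ---
    apiVersion: triggers.tekton.dev/v1alpha1
    kind: TriggerTemplate
    metadata:
      name: trigger-template
    spec:
      resourcetemplates:
        - apiVersion: tekton.dev/v1beta1
          kind: PipelineRun         # created anew on every push event
          metadata:
            generateName: create-pyflask-image-
          spec:
            pipelineRef:
              name: create-pyflask-image
            workspaces:
              - name: my-workspace
                volumeClaimTemplate:
                  spec:
                    accessModes:
                      - ReadWriteOnce
                    resources:
                      requests:
                        storage: 1Gi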

8b. Create an event listener HTTPS route URL (optional)

Note: The steps to add a secure event listener route can be skipped if you are using an OpenShift cluster where HTTP routes are allowed. In my case, OpenShift on Power Virtual Server mandates the use of HTTPS (secure HTTP) routes, and plain HTTP routes are not supported. Hence, I need to perform the following steps:

  1. Switch to the Administrator persona, and click Networking -> Routes. On the Routes page, you can see a route entry named el-event-listener-xxx, representing the event listener object, the associated HTTP route URL, and the corresponding event listener service object to which this route maps.

    Figure 83

  2. Click Create Route and enter the required values for the following fields:

    • In the Name field, enter my-https-route (or any name as you wish).
    • From the Service drop-down list, select the existing el-event-listener-xxx service.
    • From the Target port drop-down list, select 8080 -> 8080 (TCP).
    • Select the Secure Route checkbox. In the same section, from the TLS termination drop-down list, select Edge and from the Insecure traffic drop-down list, select None.

      Retain the default values for the remaining fields, scroll down, and click Create.

      Figure 84 Figure 85 Figure 86

  3. Notice that a new route has been created with the HTTPS route URL.

    Figure 87

  4. Save this HTTPS URL for later use. (An equivalent YAML sketch of this route follows.)
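For reference, the form above corresponds roughly to the following route definition, mirroring the application route sketched earlier (el-event-listener-xxx is a placeholder; use the actual service name shown on your Routes page):

    apiVersion: route.openshift.io/v1
    kind: Route
    metadata:
      name: my-https-route
    spec:
      to:
        kind: Service
        name: el-event-listener-xxx          # the event listener service
      port:
        targetPort: 8080
      tls:
        termination: edge
        insecureEdgeTerminationPolicy: None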

8c. Set up GitHub webhooks

GitHub webhooks allow external services to be notified when certain GitHub events happen. We are interested in the git push event. In this procedure, we will configure a GitHub webhook with the event listener HTTP or HTTPS (as applicable) URL as the payload URL, so that changes to the GitHub source code are sent to the event listener, which in turn triggers a new PipelineRun job.

  1. Navigate to your GitHub repository in a browser (https://github.com/dpkshetty/pipeline-s2i-pyflask-demo in my case; your URL will be different).

    Figure 88

  2. Click the Settings tab, and then click Webhooks.

    Figure 89

  3. Click Add webhook.

    Figure 90

  4. On the Add webhook page, enter the necessary data in the following fields:

    Note: GitHub might prompt you to authenticate one more time. If so, log in with your GitHub credentials.

    In the Payload URL field, enter the HTTP or HTTPS (as applicable) event listener route URL (in my case, it is an HTTPS URL).

    From the Content type drop-down list, select application/json.

    You can retain the default values for the remaining fields, and then click Add webhook.

    Figure 91 Figure 92

  5. A new webhook is created (the last entry, in case you have multiple webhooks). It will have a tick mark beside it (you may have to refresh the webhooks page if you don’t see it automatically), which indicates that GitHub is able to ping or connect to your OpenShift cluster using the event listener HTTP or HTTPS (as applicable for your cluster) route URL.

    Figure 93

Congratulations! The GitHub webhook is now connected with your OpenShift cluster, and any change to the GitHub source code will trigger a new PipelineRun job.

9. Verify if a GitHub code change creates a new Docker image

Let’s make a small code change in our GitHub repo and check if it indeed creates a new PipelineRun job, which in turn creates a new Docker image of our application.

  1. Navigate to your GitHub source repo, click app.py, and edit it by clicking the pencil icon.

    Figure 94

  2. In the edit mode, make the following two changes:

    • Modify the welcome message by adding Pipelines to make it ‘Pyflask Pipelines Demo’.
    • Upgrade the version to 2.0.

    Figure 95
  3. Scroll down to add a brief description for the changes made and click Commit Changes.

    Figure 96

  4. When the commit succeeds, a new PipelineRun job is triggered. Navigate to your OpenShift console, click Pipelines, and then click your pipeline. In the Pipeline details view, click the PipelineRuns tab.

    Figure 97

  5. Notice that a new PipelineRun job is running.

    Figure 98

  6. Wait for the PipelineRun job to finish.

    Note: While it is running, you may want to click the new PipelineRun job and view the logs to monitor the git-clone and s2i-python tasks that run as part of this new job (as we did before).

    Figure 99

  7. Switch to quay.io in your browser and verify that a new Docker image is pushed to the registry.

    Note: If you already have the quay.io registry page opened, refresh the page to see the latest information.

    Notice that a new image was just pushed!

    Figure 100

Congratulations! You have successfully used the GitHub webhook and the pipeline trigger functionality to automatically build and deploy a new Docker image whenever the GitHub source code changes.

10. Validate if the new version of the Docker image works

Perform the steps described in the ‘Validate the Docker image created in quay.io’ section to create a new application, and verify that the application reflects the new code changes. The following screen captures show a sample.

The new welcome message:

Figure 101

The new version of the application:

Figure 102

Congratulations! This verifies that the new Docker image built from an automated PipelineRun job triggered by the GitHub source code change reflects the updates!

Summary

In this tutorial, you learnt how to create a simple pipeline to build and deploy a Docker image from GitHub source code using mainly the OpenShift GUI, without any YAML files or CLI commands! You also learnt how to use the pipeline trigger functionality along with GitHub webhooks to automate the Docker image creation process. DevOps engineers are always looking to automate their software build, test, and deploy lifecycle, and pipelines provide an excellent way to do that.

While we hardcoded the GitHub and quay.io repository URLs for this tutorial (to keep things simple), it is possible to enhance the pipeline further by generalizing it. You can use pipeline parameters (also known as params) as an input method for specifying the GitHub and quay.io repository URLs, making the pipeline more reusable by letting you specify the repositories as part of each pipeline run instead of hardcoding them.

I will leave that as a recommended exercise for anyone interested. As a hint, use the Add Parameter option in the Parameters section of the Pipeline builder page to create new parameters, and reference them using the $(params.<param-name>) syntax when populating the tasks’ properties. Refer to the Red Hat OpenShift documentation for more details, and see the sketch below. Good luck!
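A minimal illustration of the hint above, as a fragment of the pipeline YAML (the parameter names are my own choices; adapt them as you like):

    spec:
      params:
        - name: GIT_URL                  # supplied at each pipeline run
          type: string
        - name: IMAGE_URL
          type: string
      tasks:
        - name: git-clone
          params:
            - name: url
              value: $(params.GIT_URL)
        - name: s2i-python
          params:
            - name: IMAGE
              value: $(params.IMAGE_URL)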

Acknowledgment

I would like to thank Sebastien Chabrolles for helping with queries specific to pipelines and issues I encountered while creating this tutorial, and especially for helping mitigate the secret bug that was causing the PipelineRun job to fail.

Take the next step

Join the Power Developer eXchange Community (PDeX). PDeX is a place for anyone interested in developing open source apps on IBM Power. Whether you're new to Power or a seasoned expert, we invite you to join and begin exchanging ideas, sharing experiences, and collaborating with other members today!