A pipeline is a sequence of steps that represents a software development workflow (build, test, deploy), also known as continuous integration / continuous deployment (CI/CD). DevOps engineers are always looking to automate this workflow to minimize human error, improve time to deliver software, and produce consistent software artifacts.
Red Hat OpenShift Pipelines is a CI/CD solution based on the open source Tekton project. The main objective of Tekton is to enable DevOps teams to quickly create pipelines for activities involving simple, repeatable steps. A unique characteristic of Tekton that differentiates it from earlier CI/CD solutions is that each Tekton task runs within a container created specifically for that task. This provides a degree of isolation that supports predictable and repeatable task execution, and it means development teams do not have to manage a shared build server instance.
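For readers curious what this looks like under the hood, here is a minimal, illustrative Tekton Task in which a single step runs inside its own container. The task name and image are hypothetical, and you will not need to write any YAML like this to follow the tutorial:

```yaml
# Minimal Tekton Task sketch (illustrative only). Each step runs in its
# own container, which is what gives Tekton its per-task isolation.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: say-hello                  # hypothetical task name
spec:
  steps:
    - name: greet
      image: registry.access.redhat.com/ubi8/ubi-minimal   # container for this step
      script: |
        echo "Hello from an isolated container"
```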
Most OpenShift Pipelines-related blogs, articles, and how-to guides use complicated YAML files and a command-line interface (CLI) to create, deploy, and run pipelines, which isn't easy for many readers to adopt and follow!
This tutorial takes a different approach and uses only the OpenShift GUI/console. There isn't a single YAML file or CLI command used in this tutorial, yet it shows you how to create a new pipeline from scratch, set up tasks to build from GitHub and deploy a Docker image to quay.io (a popular image registry), and achieve continuous delivery and deployment of Docker images by automating the whole process using OpenShift Pipelines triggers and GitHub webhooks.
This tutorial is for users interested in understanding OpenShift Pipelines without getting into complicated YAML files and CLI commands, users new to the pipelines concept, and users looking for a quick understanding of how pipelines work.
Prerequisites
Before you build and publish Docker images from a GitHub source, make sure that the following prerequisites are fulfilled:
Access to an OpenShift cluster running version 4.7.x or later, as the pipelines feature was introduced in that version. I am using OpenShift version 4.8.xx on IBM Power Virtual Server. The steps in this tutorial should work on any OpenShift platform, as the pipelines functionality is the same irrespective of the underlying hardware architecture.
An OpenShift cluster configured with at least one storage class (to supply storage to the pipeline tasks).
Familiarity with basic Git (git clone, edit code in Git web UI, and commit) operations.
Familiarity with quay.io (registry for storing and building container images).
Estimated time
It should take around one hour to build and publish Docker images from a GitHub source using Red Hat OpenShift Pipelines.
Steps
Perform the following steps to build and publish Docker images from a GitHub source using Red Hat OpenShift Pipelines:
1. Install the Red Hat OpenShift Pipelines Operator
Navigate to OperatorHub, search for pipelines, and click the Red Hat OpenShift Pipelines tile.
On the page that shows the different installation options, retain the default values for this tutorial and click Install.
The Install Operator page provides options to specify the update channel to subscribe to, the project or namespace in which the operator will be visible, and so on. For the example used in this tutorial, retain the default values and click Install.
Wait for the operator to be installed; this might take a few minutes.
Verify the installation by clicking the Installed Operators tab. The Red Hat OpenShift Pipelines Operator should be listed there.
Switch to the Developer persona and ensure that the Pipelines menu is available there as well.
Congratulations! You have successfully installed the Pipelines Operator, and OpenShift Pipelines functionality is now available in your cluster.
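For reference only (nothing here is needed to follow the tutorial), clicking Install in OperatorHub creates an Operator Lifecycle Manager Subscription behind the scenes. A hedged sketch of roughly what that object looks like (the channel name may differ between OpenShift versions):

```yaml
# Hedged sketch of the Subscription that OperatorHub creates when you
# click Install (do not apply this manually; the console does it for you).
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: stable                          # the update channel you selected
  name: openshift-pipelines-operator-rh    # package name in the catalog
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```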
2. Clone Git repository
Clone the following Git repository into your own GitHub account (forking it in GitHub achieves this), as we will be using the simple Pyflask code in it to create a Docker image. You need your own copy because ownership permissions are required to edit the code and create a webhook (covered later in this tutorial).
3. Create a new quay.io repository to publish Docker image
Log in to https://quay.io/ (create a new account if needed). My login ID is dpkshetty.
Click the Repositories tab and then click Create New Repository to create a new repository to host the Docker image we will be creating in this tutorial.
On the resulting page, enter a name for your repository (I am using demos) and click Public to make it visible and accessible to others.
Scroll down, retain the default values, and click Create Public Repository.
Notice that your new repository (for example demos in my case) is listed under Repositories.
The base URL for my repository (to pull or push Docker images) is quay.io/dpkshetty/demos.
In your case, it will be quay.io/<your_username>/demos.
To be able to push content to this repository, we need user credentials with the right permissions. In quay.io, this is achieved by creating a new robot account. Let's create one.
Click your username, click Account Settings, and then select the Robot Accounts tab. Click Create Robot Account.
In the resulting form, enter a name (demos in my case) for your new robot account and click Create robot account.
In the next form, select the demos repository, select the Write permission, and click Add Permissions. This grants write (and hence push) privileges to the robot account.
On the Robot Accounts page, notice that your new robot account is created successfully.
Congratulations! You have successfully created a new quay.io repository and a new robot account with write (and hence push) privileges. We will use this repository and the associated robot account to publish our Docker image later in this tutorial.
4. Create a simple pipeline to build and publish Docker image from the GitHub source code
Switch to the Developer persona and create a new project (named tutorial in my case).
Click Pipelines and then click Create Pipeline.
On the Pipeline builder view, which enables you to create a new pipeline, enter a name for the pipeline (create-pyflask-image in my case).
4a. Create a git-clone task
From the Select Task drop-down list, select git-clone.
Click the git-clone task to view its properties pane on the right side of the console.
Scroll down until you see the Workspaces section. The pipeline needs a workspace (storage area), but we haven't created one yet. Notice that the Select workspace field is disabled; that's expected!
Note that a pipeline has multiple tasks, and it needs shared or common storage to pass data between them. For example, the git-clone task copies the source code, which must then be accessed by the next task (the s2i-python task, covered later in this tutorial) to build the Docker image. A workspace provides that common storage between tasks.
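For reference, this is roughly how the underlying Pipeline YAML wires a shared workspace to a task; the builder UI generates something equivalent for you (a hedged sketch, not something you need to write):

```yaml
# Hedged sketch: a workspace declared once at the pipeline level and
# bound to the git-clone task's own workspace (named "output").
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: create-pyflask-image
spec:
  workspaces:
    - name: my-workspace            # shared storage for all tasks
  tasks:
    - name: git-clone
      taskRef:
        name: git-clone
        kind: ClusterTask
      workspaces:
        - name: output              # the task-side workspace name
          workspace: my-workspace   # bound to the shared workspace
```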
To create a workspace, go back to the Pipeline builder view (the middle pane in the browser), scroll down until you see the Workspaces section, click Add workspace, and enter a name for the workspace (my-workspace in my case).
Click the git-clone task in the pipeline, and in the properties pane, scroll down to the Workspaces section. From the output drop-down list, select my-workspace. With this, we have completed the first (git-clone) task.
4b. Create s2i-python task
Now, let's create the next task, s2i-python, which builds the source code into a Docker image and pushes it to the quay.io registry. Go back to the Pipeline builder view, hover the mouse pointer over the git-clone task, and click the "+" sign to the right of the git-clone task to add a new task.
A new Select Task drop-down list is created. From this list, select the s2i-python task.
Note: In this example, we select the s2i-python task because the application code in the GitHub repository is written in Python.
Notice that a new s2i-python task is created and placed after the git-clone task.
Click the s2i-python task, and in the properties pane that is displayed on the right side, enter the following values for the available fields:
IMAGE = <URL of your quay.io repository>:latest ('quay.io/dpkshetty/demos:latest' in my case)
Note: Docker images are always of the form <name>:<tag>. The <tag> field is used to represent variants of an image (such as different versions, architectures, releases, and so on). Here we use the latest tag to specify that this is the latest version of the Docker image.
Workspaces = <the workspace you created earlier, selected from the drop-down list> ('my-workspace' in my case).
Retain the default values for the remaining fields.
Click Create in the pipeline builder view to create the pipeline with the git-clone and s2i-python tasks.
The pipeline details page is displayed.
In case you missed entering data for any of the fields or wish to edit them, click Actions -> Edit Pipeline. On the Pipeline builder page, select the task you wish to edit and update its properties. After completing the updates, click Save to confirm the changes.
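For reference, the pipeline you just built through the UI corresponds roughly to the following Pipeline definition. This is a hedged sketch: the parameter and workspace names follow the git-clone and s2i-python ClusterTasks shipped with OpenShift Pipelines (verify against your cluster), and the repository URLs below are placeholders.

```yaml
# Hedged sketch of the generated pipeline: clone the source, then build
# and push the image with source-to-image (s2i).
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: create-pyflask-image
spec:
  workspaces:
    - name: my-workspace
  tasks:
    - name: git-clone
      taskRef:
        name: git-clone
        kind: ClusterTask
      params:
        - name: url
          value: https://github.com/<your_username>/<your_repo>   # placeholder
      workspaces:
        - name: output
          workspace: my-workspace
    - name: s2i-python
      runAfter:
        - git-clone                 # run only after the clone completes
      taskRef:
        name: s2i-python
        kind: ClusterTask
      params:
        - name: IMAGE
          value: quay.io/<your_username>/demos:latest             # placeholder
      workspaces:
        - name: source              # s2i reads the cloned code from here
          workspace: my-workspace
```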
5. Run the pipeline
On the Pipeline details page, click Actions -> Start.
On the Start Pipeline page, specify the following values:
In the my-workspace field, select VolumeClaimTemplate, which automatically creates a PersistentVolumeClaim (PVC) of 1 GiB and provisions storage for our workspace area (a sketch of the underlying workspace binding appears at the end of this step).
In the Advanced options section, expand Show Credential options.
We need to provide the quay.io credentials for the PipelineRun job to be able to access our quay.io account and push the Docker image. The credentials are provided as part of an OpenShift secret. To add the secret, click Add Secret.
Enter a name for the secret (quay-demos in my case), and in the Server URL and Registry server address fields, enter quay.io.
Retain the default values for the remaining fields.
Navigate to your quay.io robot account page (created in the 'Create a new quay.io repository to publish Docker image' step above), click the robot account (dpkshetty+demos in my case), and copy the username and password from the quay.io page into the Username and Password fields in the OpenShift console, respectively. Click the tick mark symbol to save the secret.
Notice that the newly created secret appears on the Start Pipeline page.
Ideally, at this point you would click Start to run the pipeline. But at the time of writing this tutorial, OpenShift 4.8.x has a small bug in the secret creation process: the secret is malformed and needs to be corrected before we can start running the pipeline. So for now, click Cancel (no worries, the secret you created stays) and return to the Pipelines page.
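As promised in the workspace step above, here is a hedged sketch of the workspace binding inside the PipelineRun that the console creates when you pick VolumeClaimTemplate (the 1 GiB size matches the console's default):

```yaml
# Hedged sketch: each run gets a fresh PVC provisioned from this template.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: create-pyflask-image-   # the console generates run names
spec:
  pipelineRef:
    name: create-pyflask-image
  workspaces:
    - name: my-workspace
      volumeClaimTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi               # the console's default size
```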
5a. Fix the secret bug (optional)
(This step is optional and can be skipped if your OpenShift cluster doesn’t have the secret bug)
Click the Secrets tab.
Search for your secret (quay-demos in my case).
For your secret entry, click Edit Secret.
Notice that there are multiple redundant and malformed entries in the secret (that's the bug). All entries except the last have empty Username and Password fields and an incomplete Registry server address field (the first entry has q, the second has qu, and so on).
In this example, only the last entry (scroll down to the end of the page) is valid; the rest are invalid. Click Remove credentials for all incorrect entries.
There should now be only one valid entry, with all the fields populated as shown in the following screen capture. Click Save.
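For reference, a correctly formed registry secret holds a single, complete auths entry for quay.io. A hedged sketch of the equivalent object (the credential values are placeholders):

```yaml
# Hedged sketch of a well-formed quay.io registry secret (placeholders
# stand in for the robot account credentials).
apiVersion: v1
kind: Secret
metadata:
  name: quay-demos
type: kubernetes.io/dockerconfigjson
stringData:
  .dockerconfigjson: |
    {
      "auths": {
        "quay.io": {
          "username": "<robot_account_username>",
          "password": "<robot_account_token>",
          "auth": "<base64 of username:password>"
        }
      }
    }
```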
5b. Execute the pipeline
Click the Pipelines tab to view the Pipelines page.
Click the three vertical dots option next to the pipeline and click Start.
From the my-workspace drop-down list, select VolumeClaimTemplate and click Show Credential options. Ensure that the previously created secret (quay-demos in my case) exists. Click Start to run the pipeline.
On the PipelineRun details page, notice that the first task (git-clone) has started.
On the Logs page, view the logs for each step being executed as part of the PipelineRun job.
For each task, OpenShift creates a new pod and runs the task steps inside the pod. Click the TaskRuns tab to view the task runs associated with this PipelineRun, and the pods associated with each task.
Congratulations! You have successfully created a Docker image in quay.io from the GitHub source code using OpenShift Pipelines.
6. Validate the Docker image created in quay.io
To create a new application or pod in OpenShift using this newly created Docker image, navigate to your OpenShift console, click +Add and then click Container Images.
On the Deploy Image page, enter the required values in the following fields:
Image name from external registry: Your quay.io Docker image URL (quay.io/dpkshetty/demos:latest in my case). After entering it, press the Tab key and wait for OpenShift to validate the URL. You should see the Validated message below the URL, which confirms that OpenShift can view and access the Docker image URL.
In the General section, enter demos-app in the Application Name field, and demos in the Name field.
In the Resources section, select Deployment as the resource type to generate, and in the Advanced options section, select the Create a route to Application checkbox.
Optionally, specify the options for a secure route (refer to the following note for details).
Note: The steps to add a secure route can be skipped if you are using an OpenShift cluster where HTTP routes are allowed. In my case, OpenShift on IBM Power Virtual Server mandates the use of HTTPS (secure HTTP) routes, and plain HTTP routes are not supported. Hence, I need to perform the following steps. If unsure, check with your cluster administrator for further details.
Expand Show advanced Routing options.
Select the Secure Route checkbox.
From the TLS Termination drop-down list, select Edge, and from the Insecure traffic drop-down list, select None.
Click Create.
On the Topology view, you can see an icon for your application being deployed. Click the deployment (D demos), and in the corresponding properties pane, click the Resources tab and wait for the pod to be in the Running state.
In the Routes section, click the location URL.
Note: Depending on how your OpenShift cluster is configured, you may have an HTTPS or HTTP route (as explained earlier).
After successful completion, notice the welcome message from the Pyflask app in your browser window.
Also, check the other endpoints (such as /test and /version, by appending them to the end of the URL) to validate that the entire application is working as expected.
Congratulations! The Docker image you have created using OpenShift Pipelines is working successfully. You can now share your quay.io Docker image URL (‘quay.io/dpkshetty/demos:latest’ in my case) with anyone in the world to create or run applications from your Docker image.
7. Automate Docker image build using OpenShift Pipelines triggers and GitHub webhooks
Triggers capture external events and process them to extract key pieces of information.
To fully automate the pipeline we created earlier, a PipelineRun job must start automatically whenever new code changes land in the Git repository.
Triggers enable exactly this: they capture and process each change event and start a PipelineRun job that builds and deploys a new image with the latest changes from your Git repository.
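Under the hood, the Add Trigger action you are about to use wires together Tekton Triggers resources. A simplified, hedged sketch (the resource names are hypothetical, and older clusters may expose these under the v1alpha1 API):

```yaml
# Hedged sketch of an event listener that reacts to GitHub push events
# and stamps out a new PipelineRun from a template.
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: event-listener            # exposed via the el-event-listener-xxx service
spec:
  serviceAccountName: pipeline
  triggers:
    - name: github-push-trigger
      bindings:
        - ref: github-push                    # extracts fields from the payload
          kind: ClusterTriggerBinding
      template:
        ref: create-pyflask-image-template    # hypothetical template name
```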
7a. Set up pipeline triggers
In the Pipelines view click the three vertical dots icon next to the pipeline, and click Add Trigger.
On the Add Trigger page, enter the values for the following fields:
From the Git Provider type drop-down list, select github-push.
From the my-workspace drop-down list, select VolumeClaimTemplate.
Then click Add.
On the Pipelines page, click the pipeline.
On the Pipeline details page, you can see that an event listener HTTP route URL has been created. An event listener is a component of the pipeline trigger that listens for external events.
Copy the HTTP URL (applicable only if your OpenShift cluster supports HTTP routes; if not, complete the Create an event listener HTTPS route section and copy the HTTPS URL) and save it for later use. This URL (HTTP or HTTPS, as applicable) will be used as the payload URL in the GitHub webhooks setup (details in the subsequent topics).
7b. Create an event listener HTTPS route (optional)
Note: The steps to add a secure event listener route can be skipped if you are using an OpenShift cluster where HTTP routes are allowed. In my case, OpenShift on IBM Power Virtual Server mandates the use of HTTPS (secure HTTP) routes, and plain HTTP routes are not supported. Hence, I need to perform the following steps:
Switch to the Administrator persona, and click Networking -> Routes. On the Routes page, you can see a route entry named el-event-listener-xxx representing the event listener object, the associated HTTP route URL, and the corresponding event listener service object to which this route maps.
Click Create Route and enter the required values for the following fields:
In the Name field, enter my-https-route (or any name as you wish).
From the Service drop-down list, select the existing el-event-listener-xxx service.
From the Target port drop-down list, select 8080 -> 8080 (TCP).
Select the Secure Route checkbox. In the same section, from the TLS termination drop-down list, select Edge and from the Insecure traffic drop-down list, select None.
Retain the default values for the remaining fields, scroll down, and click Create.
Notice that a new route has been created with the HTTPS route URL.
Save this HTTPS URL for later use.
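For reference, the secure route you just created through the console corresponds roughly to the following Route object (a hedged sketch; substitute your cluster's actual el-event-listener-xxx service name):

```yaml
# Hedged sketch of the secure event listener route.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-https-route
spec:
  to:
    kind: Service
    name: el-event-listener-xxx           # your event listener's service
  port:
    targetPort: 8080
  tls:
    termination: edge                     # TLS terminates at the router
    insecureEdgeTerminationPolicy: None   # refuse plain HTTP
```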
7c. Set up GitHub webhooks
GitHub webhooks allow external services to be notified when certain GitHub events happen. We are interested in the git push event. In this procedure, we configure GitHub webhooks with the event listener HTTP or HTTPS (as applicable) URL as the payload URL, so that changes to the GitHub source code are notified to the event listener, which triggers a new PipelineRun job.
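For reference, the binding behind the github-push provider type extracts values from the JSON payload that GitHub posts to the payload URL. A hedged sketch (the field paths follow GitHub's push event schema, and the binding name is hypothetical):

```yaml
# Hedged sketch of a trigger binding reading the GitHub push payload.
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-push-binding       # hypothetical name
spec:
  params:
    - name: git-repo-url
      value: $(body.repository.url)     # repository from the webhook payload
    - name: git-revision
      value: $(body.head_commit.id)     # the commit that was just pushed
```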
Navigate to your GitHub repository, click Settings, select Webhooks, and then click Add webhook. On the Add webhook page, enter the necessary data in the following fields:
Note: GitHub might prompt you to authenticate one more time. If so, log in with your GitHub credentials.
In the Payload URL field, enter the HTTP or HTTPS (as applicable) event listener route URL (in my case, it is an HTTPS URL).
From the Content type drop-down list, select application/json.
You can retain the default values for the remaining fields, and then click Add webhook.
A new webhook will be created (the last entry, in case you have multiple webhooks). It will have a tick mark beside it (you may have to refresh the webhooks page if you don't see it automatically), which indicates that GitHub is able to ping and connect to your OpenShift cluster using the event listener HTTP or HTTPS (as applicable for your cluster) route URL.
Congratulations! The GitHub webhook is now connected with your OpenShift cluster, and any change to the GitHub source code will trigger a new PipelineRun job.
8. Verify that a GitHub code change creates a new Docker image
Let's make a small code change in our GitHub repo and check whether it indeed creates a new PipelineRun job, which in turn creates a new Docker image of our application.
Navigate to your GitHub source repo, click app.py, and edit it by clicking the pencil icon.
In the edit mode, make the following two changes:
Modify the welcome message by adding Pipelines to make it ‘Pyflask Pipelines Demo’.
Upgrade the version to 2.0.
Scroll down to add a brief description for the changes made and click Commit Changes.
After the commit succeeds, a new PipelineRun job is triggered. Navigate to your OpenShift console, click Pipelines, and then click your pipeline. In the Pipeline details view, click the PipelineRuns tab.
Notice that a new PipelineRun job is running.
Wait for the PipelineRun job to finish.
Note: While it is running, you may want to click the new PipelineRun job and view the logs to monitor the git-clone and s2i-python tasks that run as part of this new job (as we did before).
Switch to quay.io in your browser and verify that a new Docker image has been pushed to the registry.
Note: If you already have the quay.io registry page opened, refresh the page to see the latest information.
Notice that a new image was just pushed!
Congratulations! You have successfully used a GitHub webhook and the pipeline trigger functionality to automatically build and deploy a new Docker image in the event of a GitHub source code change.
9. Validate that the new version of the Docker image works
Perform the steps mentioned in the Validate the Docker image created in quay.io section to create a new application and verify that the application has the new code changes. Refer to a sample in the following screen capture.
The new welcome message:
The new version of the application:
Congratulations! This verifies that the new Docker image, built from an automated PipelineRun job triggered by the GitHub source code change, reflects the updates!
Summary
In this tutorial, you learnt how to create a simple pipeline to build and deploy a Docker image from GitHub source code using mainly the OpenShift GUI, without YAML files or CLI commands! You also learnt how to use the pipeline trigger functionality along with GitHub webhooks to automate the Docker image creation process. DevOps engineers are always looking to automate their software build, test, and deploy lifecycle, and pipelines provide an excellent way to do so.
While we hardcoded the GitHub repo and the quay.io repository URLs for this tutorial (to keep things simple), it is possible to enhance the pipeline further by generalizing it. You can use pipeline parameters (also known as params) as an input method for specifying the GitHub and quay.io repository URLs, making the pipeline more reusable by letting you specify the repositories as part of each pipeline run instead of hardcoding them.
I will leave that as a recommended exercise for anyone interested. As a hint, use the Add Parameter option in the Parameters section of the Pipeline builder page to create new parameters, and reference them using the $(params.<param-name>) syntax when populating the tasks' properties. Refer to the Red Hat OpenShift documentation for more details. Good luck!
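To illustrate the hint, here is a hedged sketch of what the parameterized pipeline could look like (the param names GIT_URL and IMAGE are my own choices for this exercise, not anything the console mandates):

```yaml
# Hedged sketch of the suggested exercise: declare pipeline params and
# reference them instead of hardcoding the repository URLs.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: create-pyflask-image
spec:
  params:
    - name: GIT_URL               # supplied at each pipeline run
      type: string
    - name: IMAGE                 # e.g. quay.io/<your_username>/demos:latest
      type: string
  workspaces:
    - name: my-workspace
  tasks:
    - name: git-clone
      taskRef:
        name: git-clone
        kind: ClusterTask
      params:
        - name: url
          value: $(params.GIT_URL)    # referenced, not hardcoded
      workspaces:
        - name: output
          workspace: my-workspace
```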
Acknowledgment
I would like to thank Sebastien Chabrolles for helping with queries specific to pipelines and issues I encountered while creating this tutorial, and especially for helping mitigate the secret bug that was causing the PipelineRun job to fail.
Take the next step
Join the Power Developer eXchange Community (PDeX). PDeX is a place for anyone interested in developing open source apps on IBM Power. Whether you're new to Power or a seasoned expert, we invite you to join and begin exchanging ideas, sharing experiences, and collaborating with other members today!