GitLab is a web-based Git repository manager that provides free public and private Git repositories, issue-tracking capabilities, and wikis. It is a complete DevOps platform that enables developers to perform all the tasks in a project—from project planning and source code management to monitoring and security. It allows teams to collaborate and build better software.
One of the most important features GitLab provides is continuous integration/continuous deployment (CI/CD), which developers can use to build, test, and deploy their software whenever they push code changes to their application.
In this tutorial, we will use GitLab to create a small CI pipeline that creates container images of our sample application for ppc64le (IBM Power) and x86 (Intel) hardware architectures concurrently and pushes the images to the Quay.io container registry. Because container images are hardware (HW) specific, we need one image per HW architecture. This causes issues when automating and/or sharing images, because we need to know the HW architecture beforehand in order to pick/serve the right image for our application to deploy successfully across different Red Hat OpenShift clusters.
This tutorial shows you how to solve the multi-architecture multi-image problem by creating container manifests that will automatically serve the right container image based on the OpenShift cluster’s HW architecture. This ensures we need to deal with only one container image across OpenShift clusters of different HW architectures.
Solution architecture
The solution architecture for creating a multi-arch container image using GitLab CI pipeline is depicted below:
Figure 1: Using Gitlab-CI pipeline across multiple OpenShift/Kubernetes clusters.
As depicted in the picture above, the GitLab Runner applications installed in the respective OpenShift clusters connect with the GitLab server hosting your application code and GitLab CI pipeline (.gitlab-ci.yml). Learn more about GitLab Runner.
The CI pipeline gets triggered whenever a change is made to the pipeline itself and/or application code. This pipeline trigger will cause the OpenShift cluster to build the HW architecture-specific container image(s)—x86 image and ppc64le image in our case—and push them to the container registry (Quay.io in this case). Eventually, the pipeline will combine the different (HW-specific) container images and create a multi-architecture (single) image which can be used across x86 and ppc64le OpenShift clusters. This saves the developers and operations team from dealing with multiple container images for an application.
This tutorial defines each of the above steps while providing detailed explanation of the CI pipeline YAML file and how it works. Designed to be simple to follow, this tutorial uses only the OpenShift GUI/console as much as possible.
Prerequisites
The following are the prerequisites for this tutorial:
Because we are building multi-arch container images, we need access to two Red Hat OpenShift Container Platform (OCP) clusters, running on different HW architectures. In this example, I have used OpenShift version 4.10.xx on IBM Power Virtual Server (PowerVS) – ppc64le arch and OpenShift version 4.10.xx on VMware on IBM Cloud – x86_64 arch.
Both the clusters will need cluster-admin access, as we intend to install the GitLab Runner Operator, which is needed to run GitLab CI pipelines, unless it is pre-installed.
Valid login credential is required to access GitLab and Quay.io.
Familiarity with Quay.io and GitLab is a must.
Estimated time
One hour to create a new GitLab pipeline, configure it, and execute it to create a single multi-architecture Docker image.
Step 1. Clone the sample repo and set up a Quay.io repository.
Because this is a public repo, readers interested in following this tutorial should clone/import this repo into their respective GitLab profile, as we need author permissions to edit/update files in the repo while following the steps in this tutorial.
I’ll be using Quay.io for the container repository, where we’ll store our container images. For more details, refer to my Quay.io repository. Readers interested in following this tutorial should create their own Quay.io repository to use with this tutorial. They should also create a robot account for their repository. The robot account should have write permissions to their repository (“demos” in my case). Save the robot account credentials in a safe place, as you’ll need them while specifying the credentials to push the image to this repository via the GitLab CI pipeline (covered later).
Note: If you are new to using Quay.io, you can refer to step 3 in this tutorial for how to create a new Quay.io repository and create a robot account with write permissions.
Step 2. Install GitLab Runner Operator in OpenShift.
Note: The steps below are shown for the IBM Power (ppc64le arch) OpenShift cluster. Repeat the same on the Intel (x86) cluster.
Switch to the Administrator persona.
Click OperatorHub and in the text field, enter runner.
Click the first GitLab Runner listing (ignore the community version of it).
On the Install Operator screen, ensure that All namespaces on the cluster (default) is selected. Retain the default values for everything else and then click Install.
Wait for the GitLab Runner Operator to get installed. It takes a few minutes. Click Installed Operators and on the Installed Operators page notice that GitLab Runner is now displayed, indicating successful installation.
Congratulations! You have successfully installed the GitLab Runner Operator.
Step 3. Create GitLab Runner instance in OpenShift.
Note: The steps below are shown for IBM Power (ppc64le arch) OpenShift cluster. Repeat the same on the Intel (x86) cluster.
Create a new project. Click Projects and then click Create Project.
I’m creating a new project named tutorial. Enter tutorial in the Name field and click Create.
Switch to the Installed Operators view. Ensure that you are in the tutorial project, if not, click the Project drop-down list and select tutorial.
GitLab Runner needs a registration token to connect and authenticate with the GitLab server, so let’s create a secret with a key/value pair to hold the GitLab registration token. Go to your GitLab project, click Settings, and then click CI/CD.
In the Runners section, click Expand.
Scroll to the Specific runners section and copy/save the token for future use. This is the token that the runner instance (which we will create later in this tutorial) uses to register/connect with the GitLab server. We don’t plan to use Shared runners in this tutorial, so disable them.
Now let’s create a secret to hold this registration token. In the OpenShift console, click Workloads -> Secrets. From the Create drop-down list, select Key/value secret. Ensure that you are in the tutorial project namespace, as the secret is created for this project only!
In the form that appears, fill the fields as below:
Secret name: dpk-gitlab-secret Key: runner-registration-token Value: <copy the registration token saved in step 3.6 here>
Click Create.
Congratulations! You have created the secret successfully!
Now, let’s create a new service account in OpenShift for use with GitLab Runner. We want to create our own service account to make it easy to assign the right roles and privileges for the runner to get enough permissions to run our CI pipeline tasks. Go back to your OpenShift console and click User Management -> ServiceAccounts and then click Create ServiceAccount.
Unfortunately, there is no GUI way of doing this. In the YAML page displayed, replace the default name example with dpk-gitlab-sa and click Create. This is the name of our service account (the following screen captures show the YAML file before and after the default name change).
Before name change:
After name change:
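If you prefer the CLI, the same service account can be created with oc. This is a minimal sketch, assuming you are logged in to the cluster and are using the same project (tutorial) and service account name (dpk-gitlab-sa) as above:

```shell
# Sketch: create the service account from a YAML file instead of the console.
# The names below match the ones used in this tutorial; adjust to your setup.
cat <<'EOF' > gitlab-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dpk-gitlab-sa
  namespace: tutorial
EOF
# Then apply it against the cluster (requires an oc login session):
#   oc apply -f gitlab-sa.yaml
cat gitlab-sa.yaml
```

Either route produces the same ServiceAccount object; the console simply shows you this YAML in its editor.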
Congratulations! You have successfully created a service account.
Now we are ready to create a GitLab Runner instance. Click Operators -> Installed Operators and click GitLab Runner.
Click the GitLab Runner tab and then click Create Runner.
(Note: I am using a ppc64le cluster, and hence the name. For the x86 cluster, I will have dpk-x86, so that it is easy to differentiate the runners from one another in the GitLab logs.)
GitLab URL: https://gitlab.com (leave as default)
Registration Token: dpk-gitlab-secret (the secret we created in step 3.8)
Concurrent: 10
Tags: openshift, ppc64le (Note: For the x86 cluster, the tags would be openshift, x86)
Serviceaccount: dpk-gitlab-sa (the service account we created in step 3.10). Retain the default values for all other fields.
Click Create.
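The form above corresponds to a Runner custom resource roughly like the following. This is a sketch only: the apiVersion and spec key names are my assumptions based on the GitLab Runner Operator, so verify them against the YAML view in your own console before using it:

```shell
# Sketch of the Runner custom resource behind the form (verify field names
# in your console's YAML view; runner name per my ppc64le cluster).
cat <<'EOF' > gitlab-runner.yaml
apiVersion: apps.gitlab.com/v1beta2
kind: Runner
metadata:
  name: dpk-ppc64le
  namespace: tutorial
spec:
  gitlabUrl: https://gitlab.com
  token: dpk-gitlab-secret       # the secret holding runner-registration-token
  concurrent: 10
  tags: openshift, ppc64le       # use "openshift, x86" on the Intel cluster
  serviceaccount: dpk-gitlab-sa
EOF
cat gitlab-runner.yaml
```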
In the Runners view, notice that the status shows Pending for some time, and later changes to Running. It may take a few minutes to change from Pending to Running; the Running state implies that the runner instance successfully connected with the GitLab server (which we will verify in the next step).
Scroll down to the Runners section and click Expand.
You should be able to see your runner’s information (dpk-ppc64le-xxx in my case) in the Specific runners section, along with the tags you specified while creating the runner instance (openshift, ppc64le). This confirms that your OpenShift runner instance is connected with the GitLab repository successfully.
Note: Repeat the above steps for the x86 cluster as well. You should be able to see an additional runner instance with openshift, x86 as the tags (refer to the following screen capture).
Congratulations! You have successfully verified the connection between your OpenShift Runner instance and GitLab repository.
Step 5. Provide the right permissions to the OpenShift service account.
Let’s provide adequate permissions for this service account to be able to run CI tasks (which are containers themselves). We will associate this service account with the gitlab-runner-app-role and with the anyuid cluster-role. This ensures the service account has enough privileges to create different OpenShift objects/resources needed to run CI task containers with the right privileges.
Click User Management -> RoleBindings and then click Create binding.
In the resulting form, fill in the following fields with the given values:
Binding type: Namespace role binding
Name: add-anyuid-to-my-gitlab-sa (you can enter any name of your choice)
Namespace: tutorial
Role name: system:openshift:scc:anyuid
Subject: ServiceAccount
Subject namespace: tutorial
Subject name: dpk-gitlab-sa (the service account we created in step 3.10 )
Click Create.
The anyuid role binding provides the service account with the privilege to run containers as any UID, including root.
Create one more role binding to give our service account adequate privileges to create the OpenShift objects/resources needed while running the CI pipeline. Click Create binding again, and in the resulting form, fill in the following fields with the given values.
Binding type: Namespace role binding
Name: add-my-gitlab-sa-to-runner-app-role (you can enter any name of your choice)
Namespace: tutorial
Role name: gitlab-runner-app-role
Subject: ServiceAccount
Subject namespace: tutorial
Subject name: dpk-gitlab-sa (the service account we created in step 3.10)
Click Create.
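For reference, both role bindings can also be created from the CLI. The commands below are a sketch: they are printed rather than executed, and they assume the role and service account names used in this tutorial, plus an active oc login session on each cluster:

```shell
# Sketch of the oc equivalents of the two role bindings created above.
SA="dpk-gitlab-sa"
NS="tutorial"
# Grant the anyuid SCC (same effect as binding system:openshift:scc:anyuid):
echo "oc -n ${NS} adm policy add-scc-to-user anyuid -z ${SA}"
# Bind the operator-provided namespaced role to the service account:
echo "oc -n ${NS} adm policy add-role-to-user gitlab-runner-app-role -z ${SA} --role-namespace=${NS}"
```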
Congratulations! You have successfully provided adequate privileges to the service account user to be able to run CI tasks (which are containers themselves) and execute the pipeline without any constraints.
Step 6. Configure GitLab server with Quay.io credentials.
In this tutorial, we use Quay.io as the Docker image repository; we’ve already created our own Quay.io repository and a robot account with write permissions to the repository (refer to Step 1.2).
The GitLab CI pipeline needs to know the username and password credentials to be able to push the Docker images that are created as part of the pipeline job execution. Let’s create protected variables in our GitLab repository to store the Quay.io repository’s username and password credentials. These credentials will be referenced in the actual pipeline code, which we will review shortly.
In your GitLab repository, click Settings -> CI/CD.
Expand the Variables section and click Add variable. In the resulting form, fill in the following fields with the given values:
Key: quay_user (use this exact name, as the CI pipeline code references this variable)
Value: <paste your quay.io repository's robot account's (created in step 1.2) username>
Flags: Protect variable (selected by default) and Mask variable (select this option)
Click Add variable.
To store the password, repeat the previous steps. Click Add variable again, and in the resulting form, fill in the following fields with the given values.
Key: quay_passwd (use this exact name, as the CI pipeline code references this variable)
Value: <paste your quay.io repository's robot account's (created in step 1.2) password>
Flags: Protect variable (selected by default) and Mask variable (select this option)
Click Add variable.
You should end up with two variables as seen below. These two variables will be referenced in the CI pipeline YAML which we will review shortly.
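At runtime, GitLab injects these protected variables as environment variables into the job container, where the pipeline can use them as podman credentials. The sketch below illustrates the mechanism with placeholder values (the robot account name and token here are made up, not real credentials from this tutorial):

```shell
# Illustration of how the pipeline consumes the CI variables.
# In a real job, GitLab sets these; here we use obvious placeholders.
quay_user='dpkshetty+demos_robot'   # placeholder robot account name
quay_passwd='SECRET'                # placeholder robot token
# The pipeline can then pass them to podman, e.g.:
echo "podman push --creds ${quay_user}:${quay_passwd} quay.io/dpkshetty/demos:gitlab-pyflask-x86"
```

Because the variables are marked Protect and Mask, they are only available on protected branches/tags and are hidden in job logs.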
Congratulations! You have successfully configured GitLab to be able to connect to your Quay.io Docker image repository.
Step 7. A peek at the GitLab CI pipeline YAML file.
In your GitLab repository (the one you cloned in step 1.1), click on the .gitlab-ci.yml file. This is GitLab’s pipeline YAML file and it is pre-populated with a basic pipeline that will take the sample pyflask application source code from the GitLab repository, build x86 and ppc64le architecture Docker images and then combine both the images to create a multi-arch manifest image, all of which are stored in the Quay.io image repository.
Notice the CI pipeline code hosted in the .gitlab-ci.yml file. There are broadly five sections; I’ll explain what each section does in Step 9.
Step 8. Customize CI pipeline to suit your environment.
Before we dig deep into understanding the pipeline YAML file, let’s customize the pipeline YAML file to your environment. Thankfully the only section that needs to be edited/updated is the variables section. It is currently populated with my environment details as shown.
Please edit/update the fields of the .gitlab-ci.yml file in your repository to customize it for your environment:
IMAGE_REGISTRY: quay.io/dpkshetty/demos
Edit this and make it point to <your> Quay.io repository, the one which you created in Step 1.2. I am using demos as my repository.
TAG: gitlab-pyflask
This is used as the tag for the Docker image being created as part of the container-build stage of the pipeline. You can give any tag of your choice. As you will learn further in the tutorial, the repository name (demos in my case) and the tag (gitlab-pyflask in my case) are used to create the architecture-specific Docker images (demos:gitlab-pyflask-x86 and demos:gitlab-pyflask-ppc64le in my case) stored in the Quay.io repository.
APP: dpk-pyflask
This is the name of the OpenShift deployment resource/object. You can pick any name of your choice. It is used as the prefix for all the OpenShift deployments and associated resources that get created during the pipeline execution job.
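The way the variables above combine into image names can be sketched in a few lines of shell, using the values from my environment (substitute your own registry and tag):

```shell
# How IMAGE_REGISTRY and TAG combine into the per-architecture and
# multi-arch image names produced by the pipeline.
IMAGE_REGISTRY="quay.io/dpkshetty/demos"
TAG="gitlab-pyflask"
for ARCH in ppc64le x86 multiarch; do
  echo "${IMAGE_REGISTRY}:${TAG}-${ARCH}"
done
```

This prints the three image references you will later see in the Quay.io Tags view: the ppc64le image, the x86 image, and the combined multiarch image.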
Congratulations! You have successfully customized the GitLab CI pipeline YAML file to suit to your environment.
Step 9. Understanding the GitLab CI Pipeline YAML file.
The .gitlab-ci.yml file has 5 sections. To get an understanding of how the pipeline works, let’s look at what each section does.
Section 1: stages
This section defines the pipeline stages. We have two stages:
container-build, which builds the Docker image and pushes it to the Quay.io repository.
multiarch-push, which creates the multi-arch image and pushes it to the Quay.io repository.
Section 2: variables
This section defines the variables we use in other sections of the YAML file.
IMAGE_REGISTRY – URL of your Quay.io repository to store the Docker images created as part of the pipeline job.
TAG – image tag to use for the Docker images.
APP – Prefix used to name all the OpenShift objects/resources created as part of pipeline job.
Section 3: ppc64le-build
This section builds the container image (Docker image) for the ppc64le (IBM Power) architecture. This section has four components, explained below:
stage – specifies the pipeline stage which this section is part of.
tags – these are the tags used to select which OpenShift cluster will be picked to execute the pipeline job. These tags are matched against the OpenShift Runner instances, which is why we need to tag the runner instance appropriately when creating it in OpenShift (which we did in Step 3.14).
image – specifies the Docker image to run this pipeline job. We use podman as the pipeline job’s base Docker image. Podman (the POD manager) is an open-source tool for developing, managing, and running containers on your Linux systems. It makes creating, managing, and working with Docker images very easy!
Note: As some users have encountered errors while using the standard podman images, we have switched to using a custom podman image in the GitLab CI pipeline. Learn more in the Troubleshooting step.
script – This is the pipeline job that gets executed as a Docker container (Pod) by the OpenShift Runner instance running on the selected (using tags) OpenShift cluster. There are broadly three tasks done here:
Use podman to build the Docker image for the ppc64le architecture.
Use podman to list the images.
Use podman to push the image to the Quay.io repository. Note the suffix ppc64le added to the image tag (also highlighted in the picture below) to denote that it’s a ppc64le architecture image.
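The three tasks above can be sketched as a .gitlab-ci.yml job. This is an approximation, not a copy of the actual file in the repository: the podman base image shown is the standard one (the tutorial repo now uses a custom podman image, see the Troubleshooting step), and the exact flags may differ:

```shell
# Sketch of the ppc64le-build job; compare with the real .gitlab-ci.yml.
cat <<'EOF' > ppc64le-build-snippet.yml
ppc64le-build:
  stage: container-build
  tags:
    - openshift
    - ppc64le
  image: quay.io/podman/stable   # tutorial repo uses a custom podman image
  script:
    - podman build -t ${IMAGE_REGISTRY}:${TAG}-ppc64le -f ./Dockerfile --no-cache
    - podman images
    - podman push --creds ${quay_user}:${quay_passwd} ${IMAGE_REGISTRY}:${TAG}-ppc64le
EOF
cat ppc64le-build-snippet.yml
```

The tags select the ppc64le runner, so this job runs on the IBM Power cluster and builds a native ppc64le image.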
Section 4: x86-build
This section builds the container image (aka Docker image) for the x86 (Intel) architecture. It has the same four components described above. The only difference is that this job uses the x86 OpenShift cluster (hence the x86 tag) and creates an x86 Docker image, so the suffix -x86 is added to the image tag (also highlighted in the picture below).
Section 5: multiarch-push
This section builds the multi-arch Docker image and pushes it to the Quay.io repository. It has five components, as explained below:
stage – specifies the pipeline stage which this section is part of.
tags – these are the tags that are used to select which OpenShift cluster will be picked to execute the pipeline job. These tags are used to pick the matching OpenShift runner instance. Here we just specify the openshift tag as it doesn’t matter which OpenShift cluster (x86 or ppc64le) is picked to create the multi-arch Docker image.
needs – This tells the GitLab server to ensure that this job is executed only after the x86 and ppc64le build jobs are completed successfully.
image – specifies the Docker image to run this pipeline job. We use podman as the pipeline job’s base Docker image. Podman (the POD manager) is an open-source tool for developing, managing, and running containers on your Linux systems. It makes creating, managing, and working with Docker images very easy!
Note: As some users have encountered errors while using the standard podman images, we have switched to using a custom podman image in the GitLab CI pipeline. Learn more in the Troubleshooting step.
script – This is the pipeline job that gets executed as a Docker container (aka Pod) by the OpenShift Runner instance running on the selected (using `tags`) OpenShift cluster. There are broadly four tasks we do here:
Create a manifest.
Add ppc64le Docker image to the manifest.
Add x86 Docker image to the manifest.
Create and push the multi-arch Docker image to the Quay.io repository, with -multiarch as the suffix for the image tag (also highlighted in the picture below).
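The four tasks above map to podman manifest commands. The job sketch below is an approximation of the real file: the x86 job name in needs and the podman image path are my assumptions, and the manifest name multiarch-manifest is purely illustrative:

```shell
# Sketch of the multiarch-push job; compare with the real .gitlab-ci.yml.
cat <<'EOF' > multiarch-push-snippet.yml
multiarch-push:
  stage: multiarch-push
  tags:
    - openshift
  needs: ["ppc64le-build", "x86-build"]
  image: quay.io/podman/stable
  script:
    - podman manifest create multiarch-manifest
    - podman manifest add multiarch-manifest docker://${IMAGE_REGISTRY}:${TAG}-ppc64le
    - podman manifest add multiarch-manifest docker://${IMAGE_REGISTRY}:${TAG}-x86
    - podman manifest push --creds ${quay_user}:${quay_passwd} multiarch-manifest docker://${IMAGE_REGISTRY}:${TAG}-multiarch
EOF
cat multiarch-push-snippet.yml
```

The pushed manifest list is what lets a single image reference resolve to the right architecture-specific image at pull time.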
Congratulations! You now have a basic understanding of how the GitLab CI pipeline works. Let’s execute this pipeline and see how it works!
Step 10. Executing the GitLab CI Pipeline
Prerequisite: The steps above covered how to set up the GitLab Runner instance for the ppc64le architecture cluster and connect it to the GitLab server. Ensure that the same steps have been followed to set up the x86 architecture cluster as well. Do not proceed to the next step unless you have successfully registered both ppc64le and x86 runners with the GitLab server and they appear under the Specific runners section in your GitLab’s Settings -> CI/CD -> Runners.
Now that the x86 and ppc64le runners are successfully registered with the GitLab server, let’s kick off a pipeline run! You can kick off a pipeline run manually or automatically (by changing some code in the repository). To keep things simple, let’s kick off a manual run. Click CI/CD -> Pipelines and click Run pipeline.
In the resulting Run pipeline page, click Run pipeline.
You will be taken to the Pipelines page, where you should see the graphical representation of the pipeline being executed. It will show two (parallel) jobs under the container-build step, one each for the ppc64le and x86 case, and one job under multiarch-push step for creating the multi-arch Docker image, which is in line with the .gitlab-ci.yml file (refer Step 7).
You can click on each of the jobs (ppc64le and x86 builds) and witness the execution of the CI job for each architecture. The CI job will build the Docker image for that architecture and push it to the Quay.io registry. Wait for both the architecture-specific jobs to complete.
Once both the jobs under container-build are complete, the pipeline will execute the job under multiarch-push, which combines the architecture-specific Docker images into a multi-arch Docker image and pushes it to the Quay.io registry.
Wait for all the pipeline jobs to complete successfully. Under CI/CD, click Pipelines and then click on the topmost pipeline that corresponds to the most recent pipeline run.
Once all the jobs are complete, the pipeline should look as shown. This signifies that your GitLab CI pipeline executed all jobs successfully!
Congratulations! You have successfully executed your GitLab CI pipeline, creating a multi-arch Docker image and pushing it to the Quay.io registry.
In your Quay.io repository, click the Tags icon, and you should see three new Docker images created in the recent past listed at the top of the page. These three Docker images correspond to the x86 architecture, ppc64le architecture, and multi-arch Docker images created by the successful execution of your GitLab CI pipeline.
Congratulations! If you see the above in your Quay.io repository, you have successfully managed the integration of GitLab with Quay.io and created the architecture-specific and multi-arch Docker images. Now let’s go ahead and validate that the multi-arch Docker image works as expected.
Step 12. Validate the multi-arch Docker image created in Quay.io.
We will validate the multi-arch Docker image by trying to create an application using the same Docker image in both x86 and ppc64le OpenShift clusters. Go to your x86 and ppc64le OpenShift clusters and create a new application using the multi-arch Docker image.
Note: You can refer to step 7 of this tutorial for detailed instructions on how to create a new application using a Docker image stored in Quay.io repository.
Of course, in my case, the Quay.io Docker image URL will be:
quay.io/dpkshetty/demos:gitlab-pyflask-multiarch
Your URL will be different based on your Quay.io login and repository used.
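Before deploying, you can optionally verify from a workstation that the manifest list really carries both architectures. This sketch assumes podman is installed locally; the inspect command itself is left commented since it needs registry access:

```shell
# Optional local sanity check of the multi-arch image (substitute your URL).
IMAGE="quay.io/dpkshetty/demos:gitlab-pyflask-multiarch"
# With podman installed and network access to Quay.io, run:
#   podman manifest inspect "docker://${IMAGE}"
# and confirm the manifest list contains both an amd64 and a ppc64le entry.
echo "${IMAGE}"
```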
Here’s how the application shows up in my OpenShift clusters.
On the x86 cluster, I was able to spin up the application successfully using the multi-arch Docker image.
Similarly, on the ppc64le cluster too, I was able to spin up the application successfully using the multi-arch Docker image.
Clicking the URL under Routes in each cluster displays my application console, proving that the multi-arch Docker image indeed works as expected! The same image reference automatically serves the right Docker image based on the hardware architecture of the OpenShift cluster in which it is being deployed!
On the x86 cluster:
On the ppc64le cluster:
Congratulations! You have successfully deployed applications on x86 and ppc64le OpenShift clusters using a multi-arch Docker image.
Step 13. Troubleshooting
This tutorial was tested successfully on OpenShift v4.10. Some users have reported seeing the following uid_map permission error while running on OpenShift v4.11/v4.12:
$ podman --storage-driver=vfs build --isolation chroot -t $APP -f ./Dockerfile --no-cache
time="2023-04-03T09:21:40Z" level=warning msg="Using rootless single mapping into the namespace. This might break some images. Check /etc/subuid and /etc/subgid for adding sub*ids if not using a network user"
Error: cannot write uid_map: write /proc/55/uid_map: operation not permitted
Cleaning up project directory and file based variables
ERROR: Job failed: command terminated with exit code 1
To resolve this issue, we have built a custom multi-arch (x86 and ppc64le) podman image. Refer to this Git repo for the code and to this Quay.io repository for the podman image.
We have also switched to using the custom podman image in the GitLab repository referenced in this tutorial, as it works on both the old and new OpenShift versions.
For the curious minds: the custom podman image takes the standard podman image and ensures it runs as USER podman with WORKDIR set to /home/podman. Note that the standard podman image available at Quay.io runs as the root user, which doesn’t play well with the uid/gid mapping available in the standard podman image, resulting in the error above.
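Based on the description above, such a custom image can be sketched in a three-line Containerfile. This is my reconstruction, not the actual file from the referenced Git repo (the base image tag is an assumption):

```shell
# Sketch of a Containerfile matching the description of the custom image.
cat <<'EOF' > Containerfile
FROM quay.io/podman/stable
USER podman
WORKDIR /home/podman
EOF
# Build it per architecture and combine with a manifest list, exactly as
# done for the application image in this tutorial:
#   podman build -t my-custom-podman -f Containerfile .
cat Containerfile
```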
This tutorial gives a small glimpse into how GitLab can be used to create a CI pipeline that builds multi-arch Docker images in a consistent and automated way. GitLab is the DevOps platform that empowers organizations to maximize the overall return on software development by delivering software faster and more efficiently.
As an additional exercise, you can try to use the x86 Docker image (quay.io/dpkshetty/demos:gitlab-pyflask-x86 in my case) to deploy an application in the ppc64le OpenShift cluster and watch the Pod error out, proving that Docker images are hardware architecture-specific! This necessitates building and managing different Docker images for each hardware architecture, which adds complexity to the application lifecycle and Docker image registry management in OpenShift, especially when you have multiple OpenShift clusters in your environment.
The ability to create multi-arch Docker images in an automated way helps ease application and image management in OpenShift.