Build and distribute a universal application image

A well-designed universal application image alone doesn’t ensure deployment success. You also need to follow best practices for running the build with industry-standard tools and for storing each image with a unique identifier. A build pipeline that builds and stores the image automates the process, making it repeatable and reliable.

This article offers best practices for running the build and storing the resulting image.

Better build pipelines

A build pipeline performs the continuous integration (CI) half of a continuous integration/continuous deployment (CI/CD) pipeline, which provides continuous delivery as part of a DevOps software development lifecycle. Every time the code for a software component changes, the CI pipeline builds the software, packages it for deployment, and deploys it into a development or test environment.

CI/CD pipelines:

  • Confirm that the latest code can be built and deployed
  • Ensure that the deployed component is always running the latest code
  • Perform quality checks on the code
  • Prevent deployment if the software falls below an acceptable level of quality
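The gating these bullets describe can be sketched as a minimal shell script in which each stage must succeed before the next runs. The run_stage helper and stage bodies are illustrative placeholders, not part of any real CI tool:

```shell
#!/bin/sh
# Minimal sketch of CI stage gating: each stage must succeed before the
# next runs, and a failing quality check blocks deployment entirely.
# The stage bodies here are placeholders (true); a real pipeline would
# invoke the project's actual test, build, and push commands.
set -e

run_stage() {
  name="$1"; shift
  echo "stage: ${name}"
  "$@" || { echo "stage ${name} failed; deployment blocked"; exit 1; }
}

run_stage test  true   # quality checks gate everything downstream
run_stage build true   # build and package the image
run_stage push  true   # store the image in a registry
echo "all stages passed"
```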

This article introduces two best practices that you should include in your build pipeline: Ensuring compliance with the Open Container Initiative (OCI) and uniquely tagging each image.

For each best practice, the article describes:

  • Requirements — The conditions necessary to perform the task
  • Red Hat Container Certification requirement — Any requirements expressed in or related to certification
  • Solution — How to solve the problem and meet the goal within the requirements
  • Cloud-Native Toolkit — Examples of the toolkit embedding the solution

1. Use OCI-compliant tools

It’s important to implement a build pipeline with tools that are compliant with the Open Container Initiative (OCI) to avoid vendor lock-in.

Requirement

The OCI is a neutral body that governs open industry standards for the container image format (image-spec), how images are stored and distributed (distribution-spec), and how images are run as containers (runtime-spec). Images built for OCI compliance avoid vendor lock-in: any OCI-compliant image runs properly in any OCI-compliant container engine, such as containerd or CRI-O.
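In practice, compliance is visible in the image manifest itself: the image-spec defines a JSON manifest whose mediaType field identifies the format. The manifest below is a minimal illustrative fragment, not one pulled from a real registry (a real one can be fetched with skopeo inspect --raw):

```shell
# A minimal illustrative OCI image manifest fragment (image-spec).
# A real manifest comes from a registry, e.g.:
#   skopeo inspect --raw docker://<image>
manifest='{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json"
}'

# Every OCI-compliant registry and engine recognizes this media type.
media_type=$(printf '%s\n' "$manifest" | sed -n 's/.*"mediaType": "\([^"]*\)".*/\1/p')
echo "$media_type"   # application/vnd.oci.image.manifest.v1+json
```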

Red Hat Container Certification requirement

Although Red Hat does not require that images be built with specific tools, it recommends a set of open source tools to build, transfer, and run OCI-compliant images: Buildah, Skopeo, and Podman. Note that these tools currently run most easily on Linux, with varying support for Windows and macOS.

As described in Understanding OpenShift Container Platform development, these tools build and manage “industry standard container images that include features tuned specifically for ultimately deploying those containers” in OpenShift and Kubernetes. The same docs also explain that OpenShift v4 uses CRI-O as its default container engine.

You should use Buildah to build images and Skopeo to copy them between registries. You can also use Podman to run images locally, although that doesn’t matter when deploying the images to a container orchestrator. OpenShift supports using Buildah to build images as part of an integrated build process with all three tools.

Solution for using Buildah

Use Buildah’s bud command to build an image like this:

buildah bud -f Dockerfile .

bud stands for “build using dockerfile.” Buildah refers to the file that contains the build instructions as either the Dockerfile or the Containerfile; regardless of the file name, both use the same syntax.
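A fuller invocation can tag the image for a registry as it is built; the registry, namespace, image name, and version values below are placeholders:

```shell
# Compose a buildah invocation that tags the image at build time.
# REGISTRY, NAMESPACE, IMAGE, and VERSION are placeholder values.
REGISTRY=my-registry
NAMESPACE=my-namespace
IMAGE=my-image
VERSION=1.0-1

BUILD_CMD="buildah bud -f Dockerfile -t ${REGISTRY}/${NAMESPACE}/${IMAGE}:${VERSION} ."
echo "${BUILD_CMD}"

# Running the command requires Buildah installed; after a successful
# build, the tagged image can be pushed with:
#   buildah push ${REGISTRY}/${NAMESPACE}/${IMAGE}:${VERSION}
```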

Example of the Cloud Native Toolkit’s build pipeline

Two tasks in the Cloud Native Toolkit’s build pipeline use Red Hat’s OCI-compliant tools:

  • ibm-build-tag-push builds the image using Buildah.
  • ibm-img-release pushes the image into the container registry using Skopeo.

For example, here’s the code in the Tekton pipeline ibm-nodejs that calls these two tasks:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ibm-nodejs
  annotations:
    app.openshift.io/runtime: nodejs
. . .
  tasks:
  - name: setup
    taskRef:
      name: ibm-setup
. . .
  - name: build
    taskRef:
      name: ibm-build-tag-push
    runAfter:
      - test
. . .
  - name: img-release
    taskRef:
      name: ibm-img-release
    runAfter:
      - tag-release
    params:
      - name: image-from
        value: "$(tasks.setup.results.image-url)"
      - name: image-to
        value: "$(tasks.setup.results.image-release):$(tasks.tag-release.results.tag)"
. . .
  • The Tekton task ibm-build-tag-push uses Buildah to build the image:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: ibm-build-tag-push
. . .
  - name: BUILDER_IMAGE
    default: quay.io/buildah/stable:v1.15.0
. . .
  buildah --layers --storage-driver=$(params.STORAGE_DRIVER) bud --format=$(params.FORMAT) --tls-verify=$(params.TLSVERIFY) -f $(params.DOCKERFILE) -t ${APP_IMAGE} $(params.CONTEXT)
. . .
  buildah --storage-driver=$(params.STORAGE_DRIVER) push --tls-verify=$(params.TLSVERIFY) --digestfile ./image-digest ${APP_IMAGE} docker://${APP_IMAGE}
  • The Tekton task ibm-img-release uses Skopeo to copy the image to the container registry:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: ibm-img-release
  annotations:
    description: Tags the git repository and releases the intermediate container image with the version tag
. . .
  IMAGE_FROM="$(params.image-from)"
  IMAGE_TO="$(params.image-to)"
. . .
  echo "skopeo copy --src-creds=xxxx --src-tls-verify=${IMAGE_FROM_TLS_VERIFY} --dest-creds=xxxx --dest-tls-verify=${IMAGE_TO_TLS_VERIFY} docker://${IMAGE_FROM} docker://${IMAGE_TO}"
  skopeo copy ${IMAGE_FROM_CREDS} --src-tls-verify=${IMAGE_FROM_TLS_VERIFY} ${IMAGE_TO_CREDS} --dest-tls-verify=${IMAGE_TO_TLS_VERIFY} docker://${IMAGE_FROM} docker://${IMAGE_TO}
. . .

2. Uniquely tag each image

When adding an image to a registry, tag the image to indicate the version of the software and the release of the build inside the image.

Requirement

An image’s tag should identify it uniquely from other images with the same name:

  • Images with the same name but different IDs have different contents, so their tags should be different.
  • Images for different versions of the same software need different tags.
  • Images for different builds of the same version of the software need different tags.

Red Hat Container Certification requirement

Red Hat Container Certification requires that an image have a tag (other than latest, which is automatic) that uniquely identifies that image from others with the same name. It suggests that the tag should commonly be the image version.

Solution for tagging your image

Use Docker’s tag command to add a tag to an image. An image can have multiple tags, which are aliases for the same image file.

For example:

  1. Build the image my-image.
  2. Tag it for the registry my-registry and the namespace my-namespace, and with the version 1.0 and the release 1.
  3. Tag it again without the release, for the latest release of that version.
  4. Push all tags of the image to the remote registry my-registry.

Your commands should look something like this:

docker build -t my-image .
docker tag my-image my-registry/my-namespace/my-image:1.0-1
docker tag my-image my-registry/my-namespace/my-image:1.0
docker push --all-tags my-registry/my-namespace/my-image

This tags the image with two tags:

  • 1.0-1 specifies release 1 of version 1.0.
  • 1.0 specifies the latest release of version 1.0.

The registry should not already contain an image with the tag 1.0-1; if it does, the old image loses that tag. Now suppose the next build is tagged 1.0-2. The registry typically already contains an image with the tags 1.0-1 and 1.0; the new 1.0 tag replaces the old one, so whereas 1.0 used to point to the 1.0-1 image, it now points to the 1.0-2 image. Likewise, you could add a 1 tag that always points to the latest 1.x image. For example, browse the tags for the various builds of the node image on Docker Hub and notice how a single build often has multiple tags that act as aliases for the same build.

The image’s identifiers in the registry should match its internal identifiers. Image Identification specifies that an image should contain these three identifying fields:

  • name — name of the image
  • version — version of the image
  • release — a number that’s used to identify the specific build for this image

To make the registry identifiers match, tag the image with these same identifier values:

docker tag <name> <registry>/<namespace>/<name>:<version>-<release>

where <registry> and <namespace> specify where the image is stored.

Example of tagging in the Cloud-Native Toolkit

Three tasks in the toolkit’s build pipeline use the same tag for three related artifacts:

  • ibm-tag-release tags the release in the Git repo.
  • ibm-img-release tags the image in the container registry.
  • ibm-helm-release uses the tag in the filename for the Helm chart.

For example, the following code in the Tekton pipeline ibm-nodejs calls these three tasks:

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ibm-nodejs
  annotations:
    app.openshift.io/runtime: nodejs
. . .
  tasks:
  - name: setup
    taskRef:
      name: ibm-setup
. . .
  - name: tag-release
    taskRef:
      name: ibm-tag-release
    runAfter:
      - health
. . .
  - name: img-release
    taskRef:
      name: ibm-img-release
    runAfter:
      - tag-release
    params:
      - name: image-to
        value: "$(tasks.setup.results.image-release):$(tasks.tag-release.results.tag)"
. . .
  - name: helm-release
    taskRef:
      name: ibm-helm-release
    runAfter:
      - img-scan
    params:
      - name: image-url
        value: "$(tasks.img-release.results.image-url)"
. . .
  • The Tekton task ibm-tag-release uses the NPM package release-it to tag a release in the Git repo with a new, unique version number and emits that value as the task’s tag result:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: ibm-tag-release
  annotations:
    description: Tags the git repository with the next version release value
. . .
  npm i -g release-it
. . .
  NEW_TAG="$(git describe --abbrev=0 --tags)"
. . .
  echo -n "${NEW_TAG}" | tee $(results.tag.path)
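The NEW_TAG value comes from git describe, which resolves the most recent tag reachable from the current commit. Here is a standalone sketch in a throwaway repository; release-it would create such tags in a real pipeline, and the v1.0.1 tag is a placeholder:

```shell
# Demonstrate how git describe --abbrev=0 --tags resolves the newest tag.
# The repository and the v1.0.1 tag are throwaway placeholders.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "release"
git tag v1.0.1

NEW_TAG="$(git describe --abbrev=0 --tags)"
echo "${NEW_TAG}"   # v1.0.1
```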
  • The Tekton task ibm-img-release gets the image name including the tag passed in as image-to and uses that to tag the image and push it into the container registry:
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: ibm-img-release
  annotations:
    description: Tags the git repository and releases the intermediate container image with the version tag
. . .
  IMAGE_TO="$(params.image-to)"
. . .
  echo "skopeo copy --src-creds=xxxx --src-tls-verify=${IMAGE_FROM_TLS_VERIFY} --dest-creds=xxxx --dest-tls-verify=${IMAGE_TO_TLS_VERIFY} docker://${IMAGE_FROM} docker://${IMAGE_TO}"
  skopeo copy ${IMAGE_FROM_CREDS} --src-tls-verify=${IMAGE_FROM_TLS_VERIFY} ${IMAGE_TO_CREDS} --dest-tls-verify=${IMAGE_TO_TLS_VERIFY} docker://${IMAGE_FROM} docker://${IMAGE_TO}
. . .
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: ibm-helm-release
  annotations:
    description: Publishes the helm chart to the helm repository using the version provided in the image-url
. . .
IMAGE_VERSION="$(echo "$(params.image-url)" | awk -F / '{print $3}' | awk -F : '{print $2}')"
  • Then it uses the image tag as part of the Helm chart’s filename chart-version.tgz:
echo "curl ${CURL_FLAGS} -u${HELM_USER}:xxxx -s -T ${CHART_NAME}-${IMAGE_VERSION}.tgz ${HELM_URL}/${IMAGE_NAMESPACE}/${CHART_NAME}-${IMAGE_VERSION}.tgz"
curl ${CURL_FLAGS} -u${HELM_USER}:${HELM_PASSWORD} -s -T ${CHART_NAME}-${IMAGE_VERSION}.tgz "${HELM_URL}/${IMAGE_NAMESPACE}/${CHART_NAME}-${IMAGE_VERSION}.tgz"
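The IMAGE_VERSION extraction above splits the image URL first on / to isolate the name:tag segment, then on : to isolate the tag. Here is a standalone sketch of that pipeline with a placeholder URL:

```shell
# Extract the version tag from an image URL of the form
# registry/namespace/name:version (placeholder values for illustration).
image_url="my-registry/my-namespace/my-image:1.0-1"

IMAGE_VERSION="$(echo "${image_url}" | awk -F / '{print $3}' | awk -F : '{print $2}')"
echo "${IMAGE_VERSION}"   # 1.0-1
```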

Summary

This article described best practices for building and storing images and showed how to use a build pipeline to automate these two best practices:

  • Use OCI-compliant tools
  • Uniquely tag each image

Next, check out Implement and deploy a manageable application for best practices to make the pod and application easier to manage once deployed. If you would like to use the Cloud-Native Toolkit to build universal application images, please see Build images with the Cloud-Native Toolkit.