
Best practices for designing a universal application image

When building container images, it's important to make sure they run well on Kubernetes and on other container orchestrators, such as Red Hat OpenShift. Building unique images for each orchestrator can be a maintenance and testing headache. A better approach is to build a single image that takes full advantage of the vendor support and security built into OpenShift and that also runs well in Kubernetes. A universal application image (UAI) is a container image that is built from a Red Hat Universal Base Image (UBI) and incorporates additional features that make it more secure and scalable in both Kubernetes and OpenShift.

This article introduces you to the best practices that you should incorporate into your Dockerfile when building a UAI. For each practice, we discuss the quality and its requirements, how to implement it in a container image, and what is needed, specifically, to meet the requirements for Red Hat Container Certification. We conclude with an example from the Cloud Native Toolkit that illustrates how this quality is realized in this open source development environment.

Components to include in your application image

A UAI includes the following components, each of which links to a related best practice in this article:

  1. Universal base image (UBI)
  2. Non-root, arbitrary user IDs
  3. Image identification
  4. Image license
  5. Latest security updates
  6. Group ownership and file permission
  7. Two-stage image builds
  8. Original base image layers
  9. Fewer than 40 image layers

The examples of how to solve each problem add up to a hypothetical Dockerfile like the following, which includes comments that label where each best practice is applied.

## 7. Two-stage image builds (stage 1: builder image)
FROM maven:3.6.3-jdk-11 as builder
WORKDIR /app
COPY pom.xml .
RUN mvn -e -B dependency:resolve
COPY src ./src
RUN mvn clean -e -B package


## 7. Two-stage image builds (stage 2: deployment image)
## 1. Universal Base Image (UBI)
FROM registry.access.redhat.com/ubi8/openjdk-11:1.3-15

## 2. Non-root, arbitrary user IDs
## (Or USER default; or omit this line entirely, since the UBI already sets the user)
USER 1001

## 3. Image identification
LABEL name="my-namespace/my-image-name" \
      vendor="My Company, Inc." \
      version="1.2.3" \
      release="45" \
      summary="Web search application" \
      description="This application searches the web for interesting stuff."

## Switch to root for the steps that require root privileges
USER root

## 4. Image license
COPY ./licenses /licenses

## 5. Latest security updates
RUN dnf -y update-minimal --security --sec-severity=Important --sec-severity=Critical && \
    dnf clean all

## 6. Group ownership and file permission (chown and chmod must run as root)
RUN chown -R 1001:0 /some/directory && \
    chmod -R g=u /some/directory

## Switch back to the non-root user
USER default

COPY --from=builder /app/target/*.jar app.jar
EXPOSE 8080
CMD ["java", "-jar", "app.jar"]

Let’s explore each of these best practices in detail.

1. Build your UAI from a UBI

The base image for your application provides the set of Linux libraries that the application’s runtime requires. The base image that you choose impacts how versatile, secure, and efficient your container is.

Build your UAI from a Red Hat Universal Base Image (UBI). Using a Red Hat UBI:

  • Enables your application image to run well in both Kubernetes and OpenShift
  • Is OCI-compliant and freely redistributable
  • Benefits from official Red Hat support when run in OpenShift

Requirements

The base image should meet these requirements:

  • All images to be deployed in a cluster should be built on the same base image or images with the same Linux libraries.
  • To ensure that the base image is compatible with the worker node’s operating system, the Linux libraries in the base image and the operating system running the worker nodes should be compatible versions of the same Linux distribution.
  • The base image should be freely redistributable across public and private registries without requiring any paid licensing.
  • The base image should run well in both Kubernetes and OpenShift.
  • While a Kubernetes worker node can run a variety of Linux distributions, an OpenShift worker node requires a Linux distribution that is compatible with Red Hat Enterprise Linux.

Red Hat Container Certification requirement

The base image’s Linux libraries must come from Red Hat Enterprise Linux. Red Hat’s RHEL base images and Universal Base Images meet this requirement. See base image options in the Red Hat docs for more information.

Red Hat’s base images are available from the Red Hat certified container images registry. The images in the ubi8 namespace incorporate libraries from a newer version of Red Hat Enterprise Linux than the ubi7 images. There are UBIs for several different language runtimes, including the following:

Language runtime    Registry and repository
Java JDK 11         registry.access.redhat.com/ubi8/openjdk-11
Node.js v14         registry.access.redhat.com/ubi8/nodejs-14
Go (aka Golang)     registry.access.redhat.com/ubi8/go-toolset
.NET 5.0            registry.access.redhat.com/ubi8/dotnet-50-runtime

If Red Hat doesn’t offer a UBI for the language runtime you need, start from the smallest ubi8 base image (registry.access.redhat.com/ubi8/ubi-minimal) and run the commands to install the language’s runtime.
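A minimal sketch of that approach follows. The python38 package name is an assumption for illustration; substitute whatever runtime your application needs:

```dockerfile
## Hypothetical: no UBI exists for this runtime, so install it on ubi-minimal.
## The python38 package name is an assumption; substitute your runtime.
FROM registry.access.redhat.com/ubi8/ubi-minimal

## ubi-minimal ships microdnf as its package manager
RUN microdnf -y install python38 && \
    microdnf clean all

## Switch to a non-root user (see best practice 2)
USER 1001
```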

The Partner Guide’s Containers and the Red Hat Universal Base Images (UBI) has more information from Red Hat about UBIs and related topics, like the availability of RHEL user space packages, distribution of images, and Dockerfile examples.

Red Hat also has a Red Hat Universal Base Images (UBI) ebook.

Solution for how to build from a UBI

In your Dockerfile, the FROM command should create your image from a ubi8 base image. If your build machine has an internet connection, it can download the base image directly from Red Hat’s registry like this:

FROM registry.access.redhat.com/ubi8/openjdk-11:1.3-15

Example from the Cloud-Native Toolkit

The Starter Kits in the Cloud-Native Toolkit build their images from UBIs downloaded from Red Hat’s registry. For example, the Dockerfile in the Node Typescript Starter Kit includes this line to start from the ubi8 base image for Node.js v14:

FROM registry.access.redhat.com/ubi8/nodejs-14:1-28

2. Design the image to run as a non-root user ID

If a process running as root breaks out of the container, it will have access to the host machine as root. By running as a non-root user, if the process breaks out of the container, its access on the host machine is much more limited. To make an image more secure, a good practice is to design the image to run as a non-root user ID.

Requirements

By default, Docker builds and runs an image as root (that is, uid=0). To avoid this, the Dockerfile for building the image should specify a user ID that is any ID other than 0. When Kubernetes runs the container, its processes will run as the user ID specified in the Dockerfile.
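Kubernetes can also enforce this requirement at deployment time. The following hypothetical pod spec fragment (the container name and image are placeholders) asks the kubelet to refuse to start any container whose effective user would be root:

```yaml
# Hypothetical pod spec fragment; the container name and image are placeholders.
spec:
  containers:
    - name: app
      image: my-image:1.2.3
      securityContext:
        runAsNonRoot: true  # refuse to start if the user resolves to uid 0
```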

Red Hat Container Certification requirement

Red Hat recommends that the image specify a non-root user. When the container runs in OpenShift, the orchestrator enforces this and always runs its processes as a non-root user.

When building an image on a Red Hat UBI that includes a language runtime, the user is already switched to a non-root user called default. Make sure that your Dockerfile doesn’t switch the user back to root and leave it that way.

When OpenShift runs the container, treat the user specified in the Dockerfile as merely a default, because OpenShift does not run the container with that ID. Instead, it overrides the user ID specified in the Dockerfile and runs the process as an arbitrary user ID, such as 1000810000, using a different ID each time the container runs. Because this user ID has no corresponding identity outside the container, a process that breaks out of the container has few capabilities on the host machine.

Solution for specifying a user as non-root

You specify the user in a Dockerfile with the USER command:

USER 1001

As recommended in Dockerfile best practices, if you specify the user as its UID instead of a username then there’s no need to add the user or group name to the corresponding passwd or group file. However, if the base image sets a good non-root user then you should specify that user’s name. For example, a UBI defines a user named default.
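The two forms look like this in a Dockerfile (a sketch; both rely on the user that the base image already defines). Note that if a pod sets runAsNonRoot: true, Kubernetes can only verify a numeric USER, which is one reason to prefer the UID form:

```dockerfile
## Either specify the user numerically (no passwd entry required,
## and runAsNonRoot checks can verify the ID is non-root) ...
USER 1001

## ... or, if the base image defines a good non-root user, use its name.
## A UBI defines a user named "default".
# USER default
```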

Example from the Cloud-Native Toolkit

The Starter Kits in the Cloud-Native Toolkit build their images from language runtime UBIs, which already specify a non-root user.

3. Embed identifying information inside your image

You should build images that are clearly identifiable to make it easy for the user to determine the image name, who built it, and what it does. This information should be an immutable part of the image that cannot be separated.

Requirements

While meta information about an image can be stored outside of the image in an image registry or artifact repository, the risk is that information can also be changed as the image is moved from one place to another. The best practice is to embed identifying information inside the image so that the information will always travel with new copies of the image and cannot be changed.

Red Hat Container Certification requirement

Red Hat Container Certification requires that an image must contain these six pieces of information as labels in the image:

  • name — name of the image
  • vendor — company name
  • version — version of the image
  • release — a number that’s used to identify the specific build for this image
  • summary — a short overview of the application or component in this image
  • description — a longer description of the application or component in this image

For example, the Partner Guide’s Example Dockerfile for Container Application sets these labels, plus a maintainer label.

Red Hat sets these labels in each Red Hat UBI. For example, the ubi8/openjdk-11:1.3-15 UBI contains these settings:

$ docker images registry.access.redhat.com/ubi8/openjdk-11
REPOSITORY                                   TAG       IMAGE ID       CREATED       SIZE
registry.access.redhat.com/ubi8/openjdk-11   1.3-15    a9937ea40626   7 days ago    612MB
registry.access.redhat.com/ubi8/openjdk-11   latest    a9937ea40626   7 days ago    612MB
registry.access.redhat.com/ubi8/openjdk-11   1.3-10    9be97c3f2a59   7 weeks ago   612MB

$ docker inspect registry.access.redhat.com/ubi8/openjdk-11:1.3-15
[
    {
        "Id": "sha256:a9937ea40626. . .",
        "RepoTags": [
            "registry.access.redhat.com/ubi8/openjdk-11:1.3-15",
            "registry.access.redhat.com/ubi8/openjdk-11:latest"
        ],
        "ContainerConfig": {
            "Labels": {
                "name": "ubi8/openjdk-11",
                "vendor": "Red Hat, Inc.",
                "version": "1.3",
                "release": "15",
                "summary": "Source To Image (S2I) image for Red Hat OpenShift providing OpenJDK 11",
                "description": "Source To Image (S2I) image for Red Hat OpenShift providing OpenJDK 11",
                . . .
            }
            . . .
        },
        . . .
    }
]

Solution for labeling images in a Dockerfile

Labels are set in the Dockerfile using the LABEL command. For example, here’s how to set the labels required for certification:

LABEL name="my-namespace/my-image-name" \
      vendor="My Company, Inc." \
      version="1.2.3" \
      release="45" \
      summary="Web search application" \
      description="This application searches the web for interesting stuff."

Example from the Cloud-Native Toolkit

The Starter Kits in the Cloud-Native Toolkit build their images with these labels. For example, the Dockerfile in the Node Typescript Starter Kit includes these lines to set the labels with default values:

LABEL name="ibm/template-node-typescript" \
      vendor="IBM" \
      version="1.0.0" \
      release="1" \
      summary="This is an example of a container image." \
      description="This container image will deploy a Typescript Node App"

4. Add license information to an image

An image should include the licenses that govern the use of the software it contains. This information should be an immutable part of the image that cannot be separated.

Requirements

No industry standard exists for how licensing information should be bundled with software. However, the text files for licenses can easily be stored in an image. Doing so makes an image self-documenting: the user doesn’t have to know the software’s licenses or search for them; they’re right there in the image.

GitHub can display the license for a repository.

  • Licensing a repository gives information about license types and encourages the owner of any open source public repository to specify a license.
  • Adding a license to a repository explains that GitHub will be able to detect and display the license for a repository if the license file is stored in the repository’s home directory and named LICENSE or LICENSE.md (with all caps).

Red Hat requires that the image store the license file(s) in the /licenses directory. It’s convenient to create a corresponding licenses directory in the repository’s home directory that the Dockerfile will copy as-is into the image. One way to accommodate these two different approaches by GitHub and Red Hat is for your repository to store two copies of your license file, one in LICENSE for GitHub and another in licenses/LICENSE.txt for Red Hat.

Red Hat Container Certification requirement

Red Hat Container Certification requires that an image specify the licenses needed to legally use the image. The licenses are text files that describe all relevant licensing and/or terms and conditions, including those for open source components that are included in the image. Add the software licenses to the /licenses directory in the image’s root directory.

For example, the Partner Guide’s Example Dockerfile for Container Application creates this internal /licenses directory from an external directory. Each file name in the directory must include an extension indicating its type, typically a text file with a .txt extension such as LICENSE.txt.

Solution for adding license information to an image

The source code directory that contains the Dockerfile should also include a licenses directory that contains these licensing files. Typically, it contains at least one file with a name like LICENSE.txt. The directory looks like this:

$ ls -ld licenses Dockerfile
-rw-r--r--  1 bwoolf  staff  774 May  5 15:07 Dockerfile
drwxr-xr-x  3 bwoolf  staff   96 May  5 15:09 licenses
$ ls -l licenses
total 8
-rw-r--r--  1 bwoolf  staff  17 May  5 15:10 LICENSE.txt

This code in the Dockerfile adds this licenses directory to the image:

COPY licenses /licenses

Example from the Cloud-Native Toolkit

Each Starter Kit is stored in a Git repository hosted in GitHub as a template. Because each Starter Kit is stored in a GitHub repository, each one includes a license file that GitHub will detect and display. For example, the Node Typescript Starter Kit includes a LICENSE file in the home directory. Before building your image, add any other necessary licensing files to this licenses directory.

Then the starter kit’s Dockerfile includes this line to add the /licenses directory to the image:

COPY licenses /licenses

This adds all of the files in the local licenses directory into the image’s /licenses directory.

5. Build your image with the latest security updates

Do not distribute images that contain known security vulnerabilities. When building an image, ensure that all known security vulnerabilities are patched. If new security vulnerabilities are discovered that affect the image, rebuild the image with the latest security patches to create a new release.

Requirements

The Linux libraries in an image should contain the latest security patches that are available when the image is built. There are a couple of strategies for doing this:

Use the latest release of a base image — This release should contain the latest security patches available when the base image is built. When a new release of the base image is available, rebuild the application image to incorporate the base image’s latest fixes.

Conduct vulnerability scanning — Scan a base or application image to confirm that it doesn’t contain any known security vulnerabilities. Commonly used scanning tools include Trivy, Clair, and Vulnerability Advisor.

Apply patches — Update the Linux components in an image using the operating system’s package manager, such as yum or dnf for Red Hat Enterprise Linux, apt for Debian and Ubuntu, or apk for Alpine Linux. The Dockerfile can run the package manager as part of building the image.

Red Hat Container Certification requirement

Red Hat Container Certification requires that the Red Hat components in the container image contain no critical or important vulnerabilities at the time it is certified. See Understanding Red Hat security ratings for an explanation of these severity levels. Known security vulnerabilities are tracked as Common Vulnerabilities and Exposures (CVEs). Use the RPM Package Manager (RPM) together with YUM or its successor DNF to update the components in Red Hat-based Linux systems.

To update the Red Hat components with security fixes that are not already installed, use this command in the Dockerfile for your image:

yum -y update-minimal --security --sec-severity=Important --sec-severity=Critical

For example, the Partner Guide’s Example Dockerfile for Container Application runs that yum command.

Each UBI includes a package manager that’s already installed:

  • The ubi images include the yum command.
  • The ubi-minimal images include the dnf and microdnf commands instead of yum. (microdnf is the tiny yum replacement that’s used to install RPM packages on the minimal images.)
  • Dockerfiles building from a UBI with a built-in language runtime run as a non-root user, often a user named default. Only the root user can run yum, dnf, or microdnf, so the script must switch to the root user before updating packages and should switch back afterward. (See the sample code in the Solution section below.)

See “Adding software to a running UBI container” for more details about running the package manager that’s built into a UBI.
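For a ubi-minimal-based image, the equivalent update step might look like the following sketch. Older microdnf releases don't support update-minimal or the --sec-severity options, so this uses a plain update:

```dockerfile
FROM registry.access.redhat.com/ubi8/ubi-minimal

## ubi-minimal ships microdnf instead of yum; only root may run it
USER root
RUN microdnf -y update && \
    microdnf clean all

## Switch back to a non-root user afterward
USER 1001
```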

One benefit of Red Hat-certified container images, including UBIs, is that Red Hat scans the images and posts a lot of information about them. You can use this information to make better-informed decisions about the images you’re using. As shown in the Red Hat certified container images registry, the information includes a Security page that shows a health index grade for the image and any known security vulnerabilities (documented as CVEs) contained in it. For example, here is the security summary for the ubi8/openjdk-11:1.3-15 UBI:

[Image: Security tab for the openjdk-11 UBI]

Notice that the page says the UBI:

  • Has a health index of A
  • Does not have any unapplied critical or important security updates
  • Does not contain known unapplied security advisories

This means that an application image built from this UBI shouldn’t need any critical or important security updates. The yum command to update security (shown earlier) shouldn’t find any updates to install, and can therefore be commented out in the Dockerfile, and the application image should pass the two certification checks for missing security updates.

Note that the health index for an image isn’t permanent, but is based on the security vulnerabilities that are known at the time the image is scanned. As new vulnerabilities are discovered, scanning the image again may reveal vulnerabilities that weren’t detected previously, which can lower its health index. Furthermore, very new security vulnerabilities may not be listed because they’ve been discovered since the last scan.

To see the health index for all of a UBI’s versions and builds, select the “view full history” tag (shown as “all”). For example, here is the history page for the ubi8/openjdk-11 UBI, tags 1.3-8.1608081508 through 1.3-15:

[Image: History for the openjdk-11 UBI]

A newer build might have a lower health index than an older one because the new one includes a new package version that subsequently turns out to contain a security vulnerability. For example, in the list of ubi8/openjdk-11 UBI builds above, notice that build 1.3-8.1608081508 of the UBI has an A rating, then build 1.3-9 has a C rating. That’s because the latter contains documented vulnerabilities CVE-2021-3449 and CVE-2021-3450. Those are fixed in build 1.3-11, but then it adds CVE-2021-20305, which is fixed in build 1.3-15.

You can perform your own vulnerability scanning within an OpenShift cluster using the Container Security Operator (CSO), which uses Clair for scanning. See Scanning pods for vulnerabilities.

Solution for building an image with the latest security updates

Build your application image from the latest release of a UBI, which should include components with the latest security patches.

If a UBI needs newer components because they contain newer security patches, use the RUN command to update the UBI with the latest security updates like this:

FROM registry.access.redhat.com/ubi8/openjdk-11:1.3-15
USER root
RUN dnf -y update-minimal --security --sec-severity=Important --sec-severity=Critical && dnf clean all
USER default

The UBI already contains the latest security patches that are available at the time the image was built, but this updates any that are newer than the image.

Example from the Cloud-Native Toolkit

The Starter Kits in the Cloud-Native Toolkit build their images with the latest security updates. For example, the Dockerfile in the Node Typescript Starter Kit includes this line to install the latest security updates:

USER root
RUN dnf -y update-minimal --security --sec-severity=Important --sec-severity=Critical && dnf clean all
USER default

Notice that it uses the dnf command rather than yum because it builds from a ubi-minimal image.

6. Set group ownership and file permission

If a process needs to access files in the local file system, the process’s user and group should own those files so they are accessible. For OpenShift, the user is assigned arbitrarily and is always a member of the root group, so you should assign the root group ownership of the local files so that the arbitrary user will have access.

Requirements

Stateless workloads do not need access to files in the local file system because they don’t store any data locally. Stateful workloads do store data locally, so they need access to those files in the local file system.

A process in a stateful workload can store data in a file system for its own private use or to share with other stateful processes. To make the data shareable, all of the processes need to run as the same user or group and the files need to be accessible by that user or group.

For Kubernetes, specify the user and group in your container image that will run the process; this gives the process access to the files that are owned by that user/group.

Red Hat Container Certification requirement

Red Hat Container Certification does not require or exclude setting group ownership and file permission.

Pods run in an OpenShift cluster as arbitrary user IDs. All of these user IDs are members of the root group. OpenShift Container Platform-specific guidelines in the OpenShift docs specify the following:

For an image to support running as an arbitrary user, directories and files that are written to by processes in the image must be owned by the root group and be read/writable by that group. Files to be executed must also have group execute permissions.

Similarly, Adapting Docker and Kubernetes containers to run on Red Hat OpenShift Container Platform specifies that group ownership and file permissions must be set.

If a process shares files with other processes and therefore needs to run as the specific user or group that owns those files, its pod must define a security context that specifies the user and group. Also, the cluster must define a set of security context constraints that allow that user and group to be specified. For details, see Getting started with security context constraints on Red Hat OpenShift.
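A hypothetical pod spec fragment for that case might look like the following; the cluster’s security context constraints must permit these values:

```yaml
# Hypothetical fragment: run as a specific user and group so the process
# can access files owned by uid 1001 and the root group (gid 0).
spec:
  securityContext:
    runAsUser: 1001
    runAsGroup: 0
    fsGroup: 0   # volumes mounted into the pod are group-owned by gid 0
```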

Solution for setting group ownership and file permission

The Dockerfile sets the permissions on the directories and files that the process uses. The root group must own those files and be able to read and write them as needed. The code looks like this, where /some/directory is the directory with the files that the process needs to access:

RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory

For compatibility with Kubernetes, the Dockerfile should specify a non-root user ID, then set file ownership to that user ID and the root group:

USER 1001
RUN chown -R 1001:0 /some/directory

These two approaches combined work for both Kubernetes and OpenShift:

USER 1001
RUN chown -R 1001:0 /some/directory && \
    chmod -R g=u /some/directory

For example, if the Cassandra database is configured to store its data in the /etc/cassandra directory, the Dockerfile to build the image for OpenShift needs this statement:

USER 1001
RUN chown -R 1001:0 /etc/cassandra && \
    chmod -R g=u /etc/cassandra

Example from the Cloud-Native Toolkit

The toolkit’s images are stateless by default, so they do not support stateful workloads and local storage by default. You must modify the template Dockerfile to add this local storage configuration.

7. Use two-stage image builds

Compact images allow for greater container density to use hardware capacity more efficiently. Therefore, while the deployment image must contain the application and its language runtime, do not add any tools that are used to build the application or any other libraries that are not needed by the running application. Instead, use a two-stage Dockerfile that uses separate images — one image to build artifacts and another to host the application.

Requirements

While not a requirement, the multi-stage build technique is a common best practice for optimizing the size of an application’s deployment image.

Red Hat Container Certification requirement

Red Hat Container Certification does not require or exclude the use of a two-stage Dockerfile.

Solution for building an image in multiple stages

To build an image in multiple stages, the Dockerfile specifies multiple FROM lines, one at the beginning of each stage. The last stage produces the resulting image file, and the build process discards the images from the earlier stages. Typically, every stage but the last is given a name, which makes it easier for a later stage to refer to an earlier stage’s artifacts.

For example, a multi-stage build typically consists of two stages, one that builds application artifacts and another that builds the application image. The first stage is typically named builder. Use multi-stage builds shows this example:

FROM golang:1.7.3 AS builder
WORKDIR /go/src/github.com/alexellis/href-counter/
RUN go get -d -v golang.org/x/net/html  
COPY app.go    .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

FROM alpine:latest  
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /go/src/github.com/alexellis/href-counter/app .
CMD ["./app"]

Here’s how the stages work:

  • The first stage is named builder and starts from a Go image (which has the Go build tools already installed).
  • The second stage, which builds the deployment image, starts from an Alpine image and does not need a name.
  • The first stage builds the app named app in the directory named /go/src/github.com/alexellis/href-counter/.
  • The second stage copies the file /go/src/github.com/alexellis/href-counter/app from the builder stage and into the deployment stage’s current directory.

Example from the Cloud-Native Toolkit

Many of the Starter Kits in the Cloud-Native Toolkit build their images in two stages. For example, the Dockerfile in the Node Typescript Starter Kit includes these two stages:

FROM registry.access.redhat.com/ubi8/nodejs-14:1-28 AS builder
. . .
RUN npm run build

FROM registry.access.redhat.com/ubi8/nodejs-14:1-28
COPY --from=builder /opt/app-root/src/dist dist
COPY --from=builder /opt/app-root/src/public public
COPY --from=builder /opt/app-root/src/package*.json ./
. . .

Both stages start from the same base Node.js v14 image. The first stage builds the NPM artifacts in the builder image and the second stage copies them into the deployment image.

8. Maintain original base image layers

When building an application image, do not modify, replace, or combine the packages or layers in the base image. However, there is one exception: The build process can and should update the security packages in the Linux libraries with the latest updates.

A container image’s metadata should clearly show that the image includes the layers of the base image, has not altered them, and has only added to them.

Requirements

The build process should treat the base image as immutable. Modifying it can compromise the validity and supportability of the image.

Red Hat Container Certification requirement

Red Hat Container Certification requires that the layers in the Red Hat base image must not be modified. When the base layer uses a Red Hat UBI, Red Hat does support the use and extension of the UBI layer.

For more information, see Red Hat Container Support Policy, particularly the sections:

  • Red Hat Certified ISV partner product image running on a RH provided Container Platform
  • ISV partner product image (not Red Hat Certified) running on a Red Hat Supported Container Platform

Solution for maintaining original image layers in your Dockerfile

A Dockerfile normally builds from a base image and adds to it with new layers. An application should run on top of its operating system but not replace any of it. As long as your Dockerfile does this, the requirement is met.

Example from the Cloud-Native Toolkit

The Toolkit’s Dockerfiles and build pipeline do not modify the UBI.

9. Limit the images you build to fewer than 40 layers

Layers in an image are useful, but too many add complexity and hurt efficiency. Limit the images you build to about 5-20 layers (including the base image’s layers). 30 layers is acceptable; 40 or more become too many to manage easily.

Requirements

The number of layers in an image depends on how the image is built. To list the layers in an image, use the Docker CLI:

docker history <container image name>

Or use Podman:

podman history <container image name>

Red Hat Container Certification requirement

Red Hat Container Certification requires that the image contain fewer than 40 layers. Red Hat’s UBIs have very few layers, enabling your build process to add many more layers without exceeding 40.

For example, the ubi8/ubi-minimal:8.3-298.1618432845 image has two layers:

$ docker pull registry.access.redhat.com/ubi8/ubi-minimal:8.3-298.1618432845

$ docker history registry.access.redhat.com/ubi8/ubi-minimal:8.3-298.1618432845
IMAGE          CREATED       CREATED BY   SIZE      COMMENT
332744c1854d   13 days ago                4.7kB
<missing>      13 days ago                103MB     Imported from -

As another example, the ubi8/openjdk-11:1.3-15 image is built from ubi-minimal and adds only one more layer, for three in total:

$ docker pull registry.access.redhat.com/ubi8/openjdk-11:1.3-15

$ docker history registry.access.redhat.com/ubi8/openjdk-11:1.3-15
IMAGE          CREATED       CREATED BY   SIZE      COMMENT
a9937ea40626   7 days ago                 509MB
<missing>      13 days ago                4.7kB
<missing>      13 days ago                103MB     Imported from -

Solution for limiting layers in your images

Most statements in a Dockerfile create layers, so limit the number of statements in yours. For guidance on building images with fewer layers, see Creating images.
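The most common technique is to chain related commands into a single RUN statement instead of one RUN per command. A sketch, where the package name is a placeholder:

```dockerfile
## Three separate RUN statements would create three layers:
# RUN dnf -y update-minimal --security --sec-severity=Important --sec-severity=Critical
# RUN dnf -y install some-package
# RUN dnf clean all

## Chaining them creates just one layer:
RUN dnf -y update-minimal --security --sec-severity=Important --sec-severity=Critical && \
    dnf -y install some-package && \
    dnf clean all
```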

Example from the Cloud-Native Toolkit

The toolkit’s Dockerfiles are fairly simple and add just a few layers to the image.

Summary

By following these 9 best practices, you can build an image that is high quality and efficient, runs well in both Kubernetes and OpenShift, is supported by Red Hat when run in OpenShift, and is designed to pass Red Hat Container Certification.

Now that you know what qualities an image should have, check out Build and distribute a universal application image for best practices on running the build, and Implement and deploy a manageable application for best practices to make the pod and application easier to manage once deployed.
