Building high-quality container images and their corresponding pod specifications is the foundation for Kubernetes to effectively run and manage an application in production. There are numerous ways to build images, so knowing where to start can be confusing.
This learning path introduces you to the universal application image (UAI). A UAI is an image that uses Red Hat's Universal Base Image (UBI) as its foundation, includes the application being deployed, and adds elements that make it more secure and scalable in Kubernetes and Red Hat OpenShift.
Specifically, a universal application image:
Is built from a Red Hat UBI
Can run on Kubernetes and OpenShift
Does not require any Red Hat licensing, so it's freely distributable
Includes qualities that make it run more efficiently
Is supported by Red Hat when run in OpenShift
The articles in this learning path describe best practices for packaging an application, highlighting elements that are critical to include when designing the image, performing the build, and deploying the application.
Skill level
Readers should have a general understanding of Kubernetes concepts.
Outcomes
After completing this learning path, you will understand best practices for how to:
Write a Dockerfile with instructions for building the image. The article Best practices for designing a universal application image explains best practices for what kind of code to include in the Dockerfile so that it will build a UAI that runs well in both Kubernetes and OpenShift, is supported by Red Hat when run in OpenShift, and is ready for certification by Red Hat if desired.
Build the image and store it, which means running the Dockerfile in a build tool. The article Build and distribute a universal application image offers best practices to help you avoid vendor lock-in, optimize the image to run in a container orchestrator, and identify the version of the image and the software within it.
Make the running container and its application easier to manage, both by the cluster and by the operations staff. Use the article Implement and deploy a manageable application to learn best practices both in the deployment manifest and in the application's implementation that make the application and its container run better and enable the operations staff to more easily monitor and manage them.
You can apply these best practices in any order, and some can be used even if others are not.
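As a preview of the deployment-manifest practices covered later in the series, here is a minimal sketch of a Kubernetes Deployment that helps both the cluster and the operations staff manage the application. The image name, port, and probe paths are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: quay.io/example/my-app:1.0.0   # hypothetical image
        ports:
        - containerPort: 8080
        # Health endpoints let the cluster restart unhealthy pods
        # and stop routing traffic to pods that are not ready
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
        # Resource requests and limits help the scheduler place pods
        # and keep one container from starving its neighbors
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi
```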
Each best practice includes the requirements to implement it, manual steps to meet those requirements, and details related to Red Hat Certification requirements. Additionally, an example in each section shows how the open source Cloud-Native Toolkit uses the practice. The toolkit helps teams perform continuous delivery of cloud-native applications; as part of its functionality, it builds UAIs before deploying them.
*Important: This article does not show the steps for producing an image, but it does describe the important attributes the image should have once it has been built.*
Before you get started
Before proceeding to the rest of the series, let's review some background information that helps explain the value of these best practices:
A Linux container image consists of three main parts and runs on a container host consisting of three main parts. Here is the stack of all six parts:
Linux container:
Application -- custom business functionality implemented in a programming language
Language runtime -- executable environment for the programming language
Linux libraries -- operating system modules in addition to the kernel required by the application language runtime
Container host:
Container engine -- executable for managing containers
Container runtime -- executable for running containers; part of container engine
Linux kernel -- operating system foundation that's shared by all containers running in the container engine
This diagram shows the stack:
The container image and container host must be compatible. The Open Container Initiative (OCI) is a vendor-neutral organization that creates open industry standards around container formats and runtimes. Any OCI-compliant container host can run any OCI-compliant container image, which prevents vendor lock-in.
The container host consists of the Linux kernel and a container engine that includes a container runtime. The Linux kernel is a standard part of Linux. The container engine is responsible for pulling images and is the interface for users and container orchestrators to manage containers. The container engine uses the container runtime to run (i.e. start and stop) containers.
Common container engines include Docker, containerd, and CRI-O, as well as Podman, rkt, and LXD. Kubernetes can use any container engine that implements the Container Runtime Interface (CRI) API, including containerd, CRI-O, and Docker. OpenShift v4 runs CRI-O as its container engine. There is no OCI specification for container engines, but containerd and CRI-O are CNCF projects (as are Kubernetes and Helm), which makes them de facto open standards independent of any single vendor.
OCI defines a standard for container runtimes, the OCI Runtime Specification, as well as a reference implementation, runc. Docker, containerd, and CRI-O all incorporate runc as their container runtime. This means that any of these engines runs containers in exactly the same way, because they all use the same runtime.
OCI also defines a standard for container images, the OCI Image Format Specification. Any OCI-compliant image will run correctly in any container host that is running any one of the popular container engines because they all incorporate an OCI-compliant container runtime. To produce OCI-compliant images, use an image building tool that builds OCI-compliant images, such as Docker or Buildah. For more details, see Use OCI-compliant tools.
Within the image, you can use components from a variety of sources, but those components must be compatible. To maximize compatibility, you should use Linux libraries and a Linux kernel from the same Linux distribution. The worker nodes for a Kubernetes cluster should all run the same Linux distribution, and images for that cluster should be built with libraries from that same Linux distribution.
A Dockerfile (also known as a Containerfile) is a script of instructions for how to build an image. An image builder such as Docker or Buildah executes the instructions to build the image.
Like all images, a UAI is built using a Dockerfile. You can add instructions to the Dockerfile in order to build an image with the qualities of a UAI.
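As a minimal sketch of what such a Dockerfile looks like, the example below starts from a UBI base, labels the image, and runs as a non-root user. The application paths, labels, and user ID are hypothetical:

```dockerfile
# Start from Red Hat's freely redistributable Universal Base Image
FROM registry.access.redhat.com/ubi9/ubi-minimal

# Labels identify the image and its contents (values are illustrative)
LABEL name="my-app" \
      vendor="Example, Inc." \
      version="1.0.0" \
      summary="Example universal application image"

# Copy the application into the image (hypothetical path)
COPY app/ /opt/app/

# Run as a non-root user so the container also works under
# OpenShift's arbitrary non-root user IDs
USER 1001

ENTRYPOINT ["/opt/app/run.sh"]
```

Either `docker build -t my-app .` or `buildah bud -t my-app .` can execute a Dockerfile like this; both tools produce OCI-compliant images.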
Container standards that are required for Red Hat support in OpenShift
OpenShift users want to use images that maximize the security and vendor support that Red Hat offers. UAIs use security features in OpenShift and Red Hat Enterprise Linux to protect the cluster from potentially rogue container processes. UAIs also preserve a stack of components that Red Hat supports and takes responsibility to fix as needed.
Images built for Kubernetes tend to have at least two significant drawbacks when deployed to OpenShift:
Linux distribution -- Any Linux libraries that run in a Linux kernel will run in OpenShift, but Red Hat only offers support for libraries drawn from the Red Hat Enterprise Linux distribution.
Root user -- By default, Linux containers are built by the root user to run as the root user. Containers running as root work perfectly well in Kubernetes, but OpenShift intentionally runs each container as an arbitrary non-root user. While most containers can run as non-root, the best approach is to build them explicitly to run as a non-root user.
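To sketch the non-root practice: a Dockerfile can declare a non-root user and make the directories the application writes to group-writable, so the container still works when OpenShift assigns an arbitrary UID (which always belongs to the root group, GID 0). The user ID and paths below are illustrative:

```dockerfile
FROM registry.access.redhat.com/ubi9/ubi-minimal

# Hypothetical application directory
COPY app/ /opt/app/

# Give the root group (GID 0) the same permissions as the owner,
# because OpenShift runs the container as an arbitrary UID in group 0
RUN chgrp -R 0 /opt/app && chmod -R g=u /opt/app

# Declare a non-root user so the image runs as non-root by default
USER 1001

ENTRYPOINT ["/opt/app/run.sh"]
```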
Advantages of Red Hat Container Certification
The Red Hat Container Certification program confirms that an image is built with best practices that enable customers to use containers with confidence, knowing that:
All components come from a trusted source and the underlying packages are not altered.
The container image is free of known vulnerabilities in the platform components or layers.
New vulnerabilities are promptly addressed through the Red Hat Build Service.
The container is compatible across Red Hat footprints -- from bare metal to cloud.
The complete stack is commercially supported by Red Hat and Red Hat partners.