Containers interoperability: How compatible is portable? – IBM Developer



Containers are not as interoperable as you might think unless the multiple dimensions of container interoperability are well understood and planned for in a hybrid multicloud ecosystem. In this article, I focus specifically on hybrid, multicloud environments where a containerized application must, as a requirement, be interoperable across various landing zones.

If you develop or maintain containerized applications and have experience running container images, you might believe that everything will just work, all of the time. Containers are often thought to be completely portable across time and space, and much of the time they do work. That is true only until they don't. The reasons why are the 7 dimensions that I share here.

However, before I get into the technical aspects, I would like to clarify the concepts with a simple philosophy.

The Philosophy

If you are not a container expert, the following description will help you understand portability, compatibility, and supportability in the given context. Look at the shapes in the following diagram.

Diagram of portability, compatibility, and supportability represented by different combinations of shapes

There is a circle, a square with curved edges, a rectangle, and a triangle. Portability is what you exercise when you try to fit any one shape into another shape: for example, by fitting the square into the circle, or the circle into the square, and so on. Compatibility occurs only when a shape is fit into the same kind of shape: for example, a small circle inside a big circle, a square into a square, and so on. However, if you try to make a triangle compatible with a circle, you must make certain fundamental changes to the triangle to transform it into a circle, and only then can you fit it into a bigger circle. This is called supportability: either you do the transformation yourself, or a vendor brings the relevant skills to make it happen. The same philosophy applies to containers.

The first 3 dimensions

Let’s examine the first 3 dimensions of portability, compatibility, and supportability more closely.

Diagram of the 3 dimensions of portability, compatibility, and supportability

Portability refers to the Open Container Initiative (OCI) standards, which ensure that a container image can be consumed by almost any container engine, including CRI-O, Docker, rkt, containerd, and Podman. The OCI specifications govern the following:

  • The runtime specification (runtime-spec) defines how to run a filesystem bundle that has been unpacked on disk: how an engine downloads an image, unpacks it into a runtime filesystem bundle, executes it, and so on.
  • The image specification (image-spec) defines the image format, specifically the information needed to launch the application (for example, the command, arguments, and environment variables). It defines how to create the image, the image manifest, metadata about the contents and dependencies of the image, and the image configuration.
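To make the image-spec side concrete, here is a minimal Python sketch that reads the launch information out of an OCI-style image configuration. The JSON fields follow the image-spec layout, but the values themselves are made up for illustration:

```python
import json

# An illustrative (made-up) OCI image configuration, following the
# image-spec layout: architecture, OS, and the launch information.
oci_config = json.loads("""
{
  "architecture": "amd64",
  "os": "linux",
  "config": {
    "Env": ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin"],
    "Entrypoint": ["/usr/bin/python3"],
    "Cmd": ["app.py"]
  }
}
""")

# A container engine combines Entrypoint and Cmd to build the command
# line that it asks the runtime (for example, runc) to execute.
command = oci_config["config"]["Entrypoint"] + oci_config["config"]["Cmd"]
print(command)  # ['/usr/bin/python3', 'app.py']
print(oci_config["architecture"], oci_config["os"])
```

This is only a sketch of what the specification describes; real engines read this configuration out of the image manifest rather than a literal string.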

Compatibility addresses the content inside the container image. Containers do not offer a compatibility guarantee: compatibility depends on the processor architecture and the version of the operating system. Here are some examples:

  • Try running a RHEL 8 container image on a RHEL 4 container host.
  • Try to execute a Windows container image on a Fedora container host.
  • Try running x86 container image binaries on a POWER container host.

In all of these situations, you will encounter problems even though the image is portable. That is a compatibility problem. Between Linux distributions, and even between versions of the same Linux distribution, there can be compatibility problems.
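These checks can be sketched in code. The following Python snippet is a simplified illustration, not how any real engine is implemented: it compares the architecture and OS recorded in an image's metadata against the host it is about to run on. The `is_compatible` helper is hypothetical:

```python
import platform

def is_compatible(image_arch: str, image_os: str) -> bool:
    """Naive check: the image's architecture and OS must match the host.
    Real engines also consider emulation (such as qemu-user) and
    multi-architecture manifest lists."""
    host_arch = platform.machine()       # for example, 'x86_64' or 'ppc64le'
    host_os = platform.system().lower()  # 'linux', 'windows', 'darwin', ...
    return image_arch == host_arch and image_os == host_os

# A Windows-on-POWER image matches no common host, so this is False.
print(is_compatible("ppc64le", "windows"))

# An image built for exactly this host is compatible by definition.
print(is_compatible(platform.machine(), platform.system().lower()))
```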

Supportability is what vendors can provide. This is about investing in testing, patching, security, performance, and architecture, as well as ensuring that images and binaries are built in a way that they run correctly on a given set of container hosts (such as processor, operating system, and kernel version).

However, no vendor can guarantee that every permutation of Linux container image and container host on the planet will work at a given point in time and space. Testing them all would expand the testing and analysis matrix at a non-linear growth rate.

Now that these concepts are clear, let us dig much deeper into the technology.

The 7 dimensions

Diagram of the 7 dimensions of portability, compatibility, supportability, container image, container host, container orchestration, and registry server

In addition to the 3 dimensions that I explained previously, the other dimensions to be considered for interoperability of containerized applications and images are container image, container host, container orchestration, and registry server. While a registry server may not contribute directly to the interoperability, having an integrated registry makes it easier to some extent.

These are the components that any developer or application owner should think about for production-readiness with respect to containers, at least in the given context of portability, compatibility, and supportability, depending on the landing zone or target environment. This becomes more crucial when you plan to move or migrate your containerized applications from one environment to another.

Container image

Base images are built from the same utilities and libraries included in an operating system.

Advanced application images already include extra software. Examples include language runtimes, databases, middleware, storage, and independent software vendor (ISV) software. Software included in these images is tested, preconfigured, and certified to work immediately. These prebuilt images make it easy to deploy multitier applications from existing components without the work of building your own application images.

Linux containers are often described as portable, but portability does not guarantee compatibility. For example, people often believe that portability means that you can run a Linux container image on any container host built from any self-made Linux (such as my own Linux) or non-Linux distribution, but this is not technically accurate. Linux container images are collections of files, including libraries and binaries, which are specific to the hardware architecture and operating system (OS). When the container image is run, the binaries inside the image run just as they would on a normal Linux operating system. There must be compatibility between the container image and container host. For example, you cannot run 64-bit binaries on a 32-bit host. Nor can you run ARM containers on x86_64 hosts. The same operating system rules apply.

Problems can occur if the container host and container image version are mismatched. The larger the version mismatch between a user space and the kernel, the more likely there will be incompatibilities.

Diagram of a container image that is comprised of an application and OS dependencies, and a container host that contains a kernel space

Container images are made up of layers and associated metadata, such as libraries, binaries, packages, dependency management files, repositories, image layers, and tags.

The portability of Linux containers is not absolute. Using different operating system distributions or widely differing versions of the same Linux distribution can lead to problems.

There are a few best practices in hybrid multicloud environments (multiple landing zones) as follows:

  1. All container images within a cluster should be based on the same base image where possible. The versions of the programs and libraries (user space) in the container image should be compatible with the container host.
  2. All of the container hosts in the cluster should have compatible hardware, kernels, and libraries, preferably identical.
  3. Preferably, all of the container hosts should be configured as identically as possible, especially for network and storage. All configuration and access to shared storage (NFS, iSCSI, and similar), local storage (overlay2 or device mapper), time servers, the container runtime, kernel parameters, and so on should be identical. Different configurations across multiple clusters in a hybrid multicloud environment may lead to issues when you want to move a containerized application or an operator from one environment to another. Any deviations in configuration may require additional work during a migration scenario.
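As a rough sketch of best practice 3, a fleet-wide check might compare each host's configuration snapshot against a baseline and report drift. The hosts, keys, and values below are entirely made up for illustration:

```python
# Hypothetical per-host configuration snapshots (made-up values).
baseline = {"kernel": "4.18.0-372", "runtime": "cri-o 1.24",
            "storage_driver": "overlay2", "ntp": "time.example.com"}

hosts = {
    "node-a": {"kernel": "4.18.0-372", "runtime": "cri-o 1.24",
               "storage_driver": "overlay2", "ntp": "time.example.com"},
    "node-b": {"kernel": "4.18.0-372", "runtime": "cri-o 1.24",
               "storage_driver": "devicemapper", "ntp": "time.example.com"},
}

def drift(host_cfg: dict, baseline: dict) -> dict:
    """Return the settings where a host deviates from the baseline,
    as {key: (actual, expected)} pairs."""
    return {k: (host_cfg.get(k), v) for k, v in baseline.items()
            if host_cfg.get(k) != v}

for name, cfg in hosts.items():
    delta = drift(cfg, baseline)
    if delta:
        print(f"{name} deviates: {delta}")
```

In this made-up fleet, node-b would be flagged because its storage driver deviates from the overlay2 baseline, exactly the kind of difference that complicates moving workloads between clusters.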

Container host

There are different types of container engines with runc as a runtime, such as Podman, CRI-O, and dockerd, as shown in the following diagram.

Diagram of different types of container engines

Containers do not run on CRI-O or Docker, for example. Containers are processes; they run on the Linux kernel (in the Linux case). Containers are Linux (or Windows) processes.

The Docker daemon or CRI-O, for example, is one of the many user space tools and libraries that talk to the kernel to set up containers. They do the following at least:

  • Provide an API, and prepare data and metadata for runc.
  • Pull the image, decompose it, and prepare storage.
  • Prepare the configuration and pass it to runc.
  • Set up storage: copy-on-write layers and bind mounts.

The kernel or OS, in turn, creates the Linux processes.

Normal processes are created, destroyed, and managed with system calls:

  • fork()
  • exec()
  • exit()
  • kill()
  • open()
  • close()
  • system()
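To illustrate that a container is ultimately just a process created through these calls, here is a minimal POSIX-only Python sketch that forks a child and waits for it, much as a container runtime does, minus the namespace, cgroup, and filesystem setup:

```python
import os

# fork() creates the child process; a real runtime would also set up
# namespaces, cgroups, and the root filesystem here before exec().
pid = os.fork()
if pid == 0:
    # Child: replace this process with the "containerized" command.
    os.execvp("echo", ["echo", "hello from the child process"])
else:
    # Parent: wait for the child, the way an engine supervises a container.
    _, status = os.waitpid(pid, 0)
    print("child exited with status", os.WEXITSTATUS(status))
```

Strip away the packaging and orchestration, and this create-exec-wait cycle is what the kernel sees when a container starts and stops.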

Container orchestration

Containers are not of much benefit if they are not orchestrated. They work with orchestration software, which provides several advantages and can perform tasks such as the following:

  • Scheduling: distributed systems computing, resolving where to put containers in the cluster, and allowing users to connect to them.
  • Providing an API that can be consumed by users or robots.
  • Defining the desired state; fault tolerance must be designed into the system.

What else do container orchestration and Kubernetes need? Kubernetes needs the following:

  • A standard way for the kubelet to communicate with the container engine. The Container Runtime Interface (CRI) is the protocol between the kubelet and the engine.
  • A daemon that speaks CRI. A gRPC server, running as a daemon or shim, implements this server specification.
  • A standard way for humans to interface with the gRPC server to troubleshoot and debug. crictl is a node-based CLI tool that can list images, view running containers, and so on.

If you have multiple clusters in a hybrid multicloud landscape, it is useful to have a common Kubernetes version across them. Remember that almost every new version deprecates some APIs, which may impact your pre-existing deployments, objects, services, and so on, depending on the Kubernetes versions involved, if you want to move your containerized application from one place to another.

Registry server

The registry server defines a standard way to find images, run images, build new images, share images, pull images, introspect images, shell into a running container, and so on. Remember that images age like cheese, not wine.

As mentioned earlier, a registry server may not contribute directly to interoperability, but having an integrated, or common, registry makes it easier to some extent. This component follows certain key principles: trust is temporal, even good images go bad over time because the world changes around them, and images must be constantly rebuilt to maintain a health index score of A (scores range from A to F, depending on how many packages are affected by security vulnerabilities). A registry server also provides a REST API, lets you download a trusted thing from a trusted source, and maps layers: a local cache maps each layer to a volume or filesystem layer (for example, filesystem and container engine drivers, or device mapper volumes).
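The "images age like cheese" point can be made concrete with a simple staleness check. The 90-day rebuild policy and the build dates below are illustrative only:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # illustrative rebuild policy, not a standard

def is_stale(built_at: datetime, now: datetime, max_age=MAX_AGE) -> bool:
    """An image is 'stale' once it has gone too long without a rebuild,
    regardless of whether it was healthy when it was built."""
    return now - built_at > max_age

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
fresh = datetime(2024, 5, 1, tzinfo=timezone.utc)   # rebuilt a month ago
old = datetime(2023, 1, 1, tzinfo=timezone.utc)     # over a year old

print(is_stale(fresh, now), is_stale(old, now))  # False True
```

A real pipeline would read the build timestamp from the image metadata in the registry and trigger a rebuild, rather than hard-code dates as this sketch does.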

A common registry can be a good use case if common images must be pushed across a hybrid multicloud environment.

Business scenarios

All the previous factors I mentioned should become part of your careful planning and analysis in a hybrid multicloud environment (for the same containerized application, keep movement in mind from one landing zone to another). Now, let us look at a scenario for Linux container images (L-CI) as shown in the following diagram:

Diagram of three sets of Linux container images, which are running three different operating systems (Ubuntu, My Own Linux, and SUSE and Ubuntu), attached to three different cloud platforms

Each cloud platform in the diagram (Cloud x, Cloud y, and Cloud z) has its own container host OS and may run a different container orchestration version. With any potential movement (from or to) in mind, two questions must be considered:

  1. Will my containerized business applications get guaranteed compatibility and supportability from the vendor or cloud provider? (Note that it is not about portability in isolation.)
  2. Am I willing to take related risks in production?

Now consider the following scenario of L-CI and Red Hat Enterprise Linux (RHEL) container images.

Diagram of six sets of container images

In the diagram, each landing zone or target environment, such as AWS, IBM Cloud, Microsoft Azure, Google Cloud, private cloud, and VMware Cloud, has the same Kubernetes platform: Red Hat OpenShift Container Platform with a common certified container orchestration (Kubernetes) version, a common container host (RHEL), and a common, integrated registry. Thus, the same container images and applications will work across them if you plan to move or migrate later, or plan for a hybrid multicloud containerized solution. Any movement (from or to) guarantees portability, compatibility, and supportability. Note that OpenShift can run on more landing zones and target environments.

Linux and Windows containers

You need different container hosts for different container image types. As the following diagram shows, Linux containers need a Linux container host OS, and Windows containers need Windows container host OS. As a common platform, OpenShift can support both types of container images and hosts. However, container images must be specific to run on a specific container host. An interesting fact is that these images can be portable across different types of hosts, but they will not be compatible there. Hence, they will not work. For example, a Windows container image may be portable on a Linux host, but it will not be compatible to run.

Diagram of multiple RHEL and Windows container images running on OpenShift, with two separate container hosts underneath: RHEL and CoreOS, and Windows

As a developer, you would use different base images for different container hosts. However, a common container platform, such as OpenShift, that supports these different container hosts (worker or compute nodes) can simply be used.

This is why I say “build once and deploy anywhere” with a true hybrid multicloud container strategy when OpenShift is your container platform.

Summary

Some basic web applications are simple use cases in the containerized world and may not require the burden of all of the aspects explained in this article. However, when you have complex workloads or application types, particularly in hybrid multicloud environments, at least remember to consider the following factors: container images, container host OS, container host hardware (such as x86), container host OS version, and Kubernetes version.

To learn more about containers and container orchestration tools, visit the get started with containers resources page and read the OpenShift 101 series.