
IBM Developer Blog


Manage containerized apps with IBM Cloud Pak System

Regulatory or performance requirements often dictate that a significant number of applications must be deployed within an on-premises data center. If you want to manage containerized apps with Kubernetes, you must also install Kubernetes within your data center, behind the firewall. This can be a challenge if you don’t currently have the skills to deploy on-premises Kubernetes clusters. This post offers potential solutions.

Introduction to Kubernetes and the problems it solves

Kubernetes is a powerful and flexible open-source container orchestration platform. It was developed in response to a problem created by the growing use of Docker containers to package applications, where the number of deployed containers grew beyond what could be reasonably managed with the tools available at the time. Kubernetes also helped standardize approaches to providing fault tolerance and recovery for containerized applications. This eliminated the need for every team to develop their own best practices in that area. Many software vendors now deliver their products as sets of Docker images that can be easily deployed on Kubernetes.

Deploying software and solutions on Kubernetes has another significant advantage: it can be deployed wherever you have a Kubernetes cluster at your disposal. Most public cloud vendors offer a managed Kubernetes service, such as IBM Cloud Kubernetes Service, Azure Kubernetes Service, and Google Kubernetes Engine. You can also deploy Kubernetes on premises within the controlled environment of your own data center.

Kubernetes requires a solid IaaS platform

Kubernetes is focused on the deployment and management of containerized workloads, not on the networking, storage, and compute infrastructure resources it requires. If those resources are provided in a virtualized manner, they are often referred to as the underlying infrastructure-as-a-service (IaaS). You cannot run a reliable and resilient Kubernetes platform if the underlying IaaS platform does not meet certain qualities of service. When you use a managed Kubernetes service from a public cloud provider, you rely on that cloud provider to ensure that the IaaS platform does what it’s supposed to do. However, when you deploy Kubernetes on premises within your own data center, you are responsible for the IaaS platform hosting it.

Deploying and operating that infrastructure requires effort that should not be underestimated. You may already have a solid solution in place that could be used. If not, carefully consider the technical challenges and lead times involved in building a new IaaS platform.

It starts with using reliable hardware that is configured to avoid any single points of failure. In turn, you need a way to easily manage and operate the hardware, so that if any problems do occur, you can quickly identify them. Equally, you should closely monitor the lifecycle of your various hardware components. Storage, network, and processor components each have firmware that needs to be upgraded on a regular basis. Also, you should ensure the compatibility of each particular mix of hardware and firmware. Using standardized hardware components from a single vendor can help make this possible. Teams that act as their own hardware integrator can quickly become overwhelmed by the many choices and interdependencies that need to be considered when building a hardware stack.

Integrated systems that include both hardware and software that are developed and tested as a single unit eliminate the need for you to design and test such systems yourself. Some of these systems are classified as converged infrastructure, which means that the networking, servers, storage, and virtualization tools are packaged on a prequalified turnkey device that includes a management software toolkit. Others are examples of a more recent trend called hyper-converged infrastructure (HCI), in which the IT infrastructure virtualizes all the elements of conventional “hardware-defined” systems and includes, at a minimum, virtualized computing, a virtualized storage area network (SAN), and virtualized networking.

It’s not just traditional hardware vendors that provide offerings in this area. Public cloud vendors recently started creating offerings that are designed to be situated in the client data center but are considered part of the public cloud.

Deploying Kubernetes requires skilled experience

Although Kubernetes 1.0 was launched in July 2015, it is a relatively new technology that is still evolving and maturing. As a result, there are a limited number of people with significant hands-on Kubernetes skills, which can hinder the implementation of on-premises Kubernetes. Activities that require intermediate to advanced skills include setting up the network environment to interconnect all the nodes in the clusters, defining the dynamic storage that will be used by the nodes that support Kubernetes, and maintaining the containers and pods that are deployed in the clusters. This is especially true if you plan to run production applications on Kubernetes. When skills are in short supply, it makes sense to deploy a commercial container application platform such as Red Hat OpenShift Container Platform. This not only simplifies setup, but also provides enterprise-level support to resolve future issues.
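To make the “dynamic storage” activity above concrete: in Kubernetes, dynamic volume provisioning is configured through a StorageClass object. The sketch below builds a minimal manifest as data; the class name and provisioner value are placeholders, since the real driver depends entirely on your storage backend.

```python
import json

# Sketch of a Kubernetes StorageClass manifest, the object that enables
# dynamic volume provisioning. The provisioner below is a placeholder;
# a real cluster would use a driver that matches its storage backend.
storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "fast-rwo"},            # hypothetical class name
    "provisioner": "example.com/block-storage",  # placeholder driver name
    "reclaimPolicy": "Delete",                   # remove volumes when claims are deleted
    "volumeBindingMode": "WaitForFirstConsumer", # bind only once a pod is scheduled
}

# kubectl accepts JSON as well as YAML, so this could be piped to
# `kubectl apply -f -` on a real cluster.
print(json.dumps(storage_class, indent=2))
```

Defining such objects correctly, and matching them to the underlying storage, is exactly the kind of task that calls for hands-on Kubernetes experience.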

There are also ways to multiply the effectiveness of skills that already exist within your organization, such as using deployment templates and pre-defined management operations to capture existing expertise to automate common activities. People with relatively limited Kubernetes skills can use those templates, commonly referred to as infrastructure-as-code, to create their own Kubernetes environments and ensure consistency.

Infrastructure-as-code templates automate Kubernetes deployments

Infrastructure-as-code templates automate the deployment of a set of virtual machines with software installed and configured on top. Essentially, the steps to create a Kubernetes environment are programmed into a machine-readable definition file that is treated as if it is program source code.
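As a toy illustration of that idea, the sketch below keeps a cluster layout in a declarative, machine-readable definition and derives the provisioning steps from it with code. All names and sizes here are invented for illustration; real templates are far richer.

```python
# A toy infrastructure-as-code sketch: the cluster layout lives in a
# declarative definition, and code expands it into provisioning actions.
# All names and sizes are illustrative only.
CLUSTER_TEMPLATE = {
    "name": "demo-cluster",
    "nodes": [
        {"role": "master", "count": 3, "cpus": 4, "memory_gb": 16},
        {"role": "infra",  "count": 3, "cpus": 4, "memory_gb": 16},
        {"role": "app",    "count": 4, "cpus": 8, "memory_gb": 32},
    ],
}

def plan(template):
    """Expand the template into one provisioning action per virtual machine."""
    actions = []
    for group in template["nodes"]:
        for i in range(group["count"]):
            actions.append(
                f"provision vm {template['name']}-{group['role']}-{i} "
                f"({group['cpus']} vCPU, {group['memory_gb']} GB)"
            )
    return actions

for action in plan(CLUSTER_TEMPLATE):
    print(action)
```

Because the definition is data, it can be versioned, reviewed, and reused like any other source code, which is what lets people with limited Kubernetes skills produce consistent environments.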

IBM Cloud Pak System

IBM Cloud Pak System is a prequalified combination of hardware and software. It includes the aforementioned infrastructure-as-code templates, enabling quick deployment and simplified management of Red Hat OpenShift Container Platform clusters.

Figure 1 illustrates the Red Hat OpenShift Container Platform HA template on IBM Cloud Pak System. The template shows the topology of a Red Hat OpenShift Container Platform cluster, where each box represents a virtual machine with a particular function within the cluster. When the template is deployed, these virtual machines are installed and configured automatically. This particular template creates a multi-node OpenShift cluster with GlusterFS, providing high availability with three master nodes, three infrastructure nodes, and four application nodes.

Figure 1: Red Hat OpenShift Container Platform HA template on IBM Cloud Pak System
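The choice of three master nodes is not arbitrary: replicated control-plane components such as etcd need a majority (quorum) of members to keep operating, which is why HA clusters use odd member counts. A quick sketch of the arithmetic:

```python
# Quorum arithmetic for a replicated control plane (e.g., etcd):
# a cluster of n members needs a majority of n // 2 + 1 to operate,
# so it tolerates n - (n // 2 + 1) member failures.
def quorum(n):
    return n // 2 + 1

def tolerated_failures(n):
    return n - quorum(n)

for n in (1, 3, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

With three masters, the cluster keeps a quorum of two and survives the loss of one master; a single master offers no fault tolerance at all.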


Even when your organization needs to host production applications on premises, it is still possible to take advantage of the benefits of containerization. To summarize, the key components you need to achieve this are:

  • A reliable IaaS platform with hardware that is configured to avoid any single points of failure.
  • People with significant hands-on Kubernetes skills or a commercial container application platform.
  • Infrastructure-as-code automation templates to speed the creation of Kubernetes environments and ensure consistency.

Learn more about how infrastructure-as-code templates on IBM Cloud Pak System work with the tutorial Accelerate your Red Hat OpenShift Container Platform deployment with IBM Cloud Pak System.