OpenShift 101: Introduction, architecture, and operators
Explore the layers and components of OpenShift
Red Hat OpenShift is an open source container application platform that runs on Red Hat Enterprise Linux CoreOS (RHCOS) and is built on top of Kubernetes. It provides integrated scaling, monitoring, logging, and metering functions. With OpenShift, you can do anything that you can do on Kubernetes, and more, thanks to OpenShift-specific features.
OpenShift includes everything you need for hybrid cloud, like a container runtime, networking, monitoring, container registry, authentication, and authorization. I explain how OpenShift can do all of that by introducing its architecture and components.
OpenShift architecture and components
To make the most of OpenShift, it helps to understand its architecture. OpenShift consists of the following layers and components, and each component has its own responsibilities:
- Infrastructure layer
- Service layer
- Main node
- Worker nodes
- Persistent storage
- Routing layer
In the infrastructure layer, you can host your applications on physical servers, virtual servers, or in a private or public cloud.
The service layer is responsible for defining pods and access policy. The service layer provides a permanent IP address and host name to the pods; connects applications together; and allows simple internal load balancing, distributing tasks across application components.
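The service layer's responsibilities can be sketched as a minimal Service manifest. This is an illustrative example, not part of the original article; the name `storefront` and its labels are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: storefront            # hypothetical service name
spec:
  selector:
    app: storefront           # routes traffic to pods carrying this label
  ports:
    - protocol: TCP
      port: 80                # stable, cluster-internal port for the service
      targetPort: 8080        # port the application listens on inside each pod
```

The Service receives a stable cluster IP and DNS name that outlive any individual pod, and traffic sent to it is load-balanced across all pods that match the selector.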
There are two main types of nodes in an OpenShift cluster: main nodes and worker nodes. Applications reside on the worker nodes. You can have multiple worker nodes in the cluster; the worker nodes are where all your coding adventures happen, and they can be virtual or physical.
The main node is responsible for managing the cluster and takes care of the worker nodes. It handles four main tasks:
- API and authentication: Any administration request goes through the API; these requests are SSL-encrypted and authenticated to ensure the security of the cluster.
- Data store: Stores the state of the cluster and information about the environment and applications.
- Scheduler: Determines pod placements while considering current memory, CPU, and other environment utilization.
- Health/scaling: Monitors the health of pods and scales them based on CPU utilization. If a pod fails, the main node restarts it automatically. If it fails too often, it is marked as a bad pod and is temporarily not restarted.
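The health/scaling task above can be expressed declaratively. As a hedged sketch (the deployment name and thresholds are illustrative, not from the article), a HorizontalPodAutoscaler tells the cluster to scale a workload on CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: storefront-hpa        # hypothetical name
spec:
  scaleTargetRef:             # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: storefront          # hypothetical deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

With this in place, the scheduler and controllers add or remove pod replicas automatically as load changes, within the stated bounds.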
As shown in the following image, the worker node is made of pods. A pod is the smallest unit that can be defined, deployed, and managed, and it can contain one or more containers. These containers include your applications and their dependencies. For example, Alex saves the code for her e-commerce platform in containers for each of the databases, front-end, user system, search engine, and so on.
Keep in mind that containers are ephemeral, so saving data only inside a container risks losing it. To prevent that, you can use persistent storage for the database.
All containers in one pod share the same IP address and the same volumes. In the same pod, you can also have a sidecar container, such as a service-mesh proxy or a security-analysis agent; it must be defined in the same pod and shares the same resources as the other containers. Applications can be scaled horizontally, and they are wired together by services.
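A pod with a sidecar can be sketched as follows. This is an illustrative manifest, assuming a hypothetical application image and a log-scanning sidecar; both containers share one volume, demonstrating the shared-resource point above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: storefront-pod        # hypothetical pod name
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}            # ephemeral volume visible to both containers
  containers:
    - name: app
      image: quay.io/example/storefront:latest    # hypothetical image
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app                 # app writes logs here
    - name: log-scanner       # sidecar container in the same pod
      image: quay.io/example/log-scanner:latest   # hypothetical image
      volumeMounts:
        - name: shared-logs
          mountPath: /logs                        # sidecar reads the same files
```

Because both containers live in one pod, the sidecar can also reach the app over `localhost`, since they share the pod's network namespace.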
The registry saves your images locally in the cluster. When a new image is pushed to the registry, it notifies OpenShift and passes image information.
Persistent storage is where all of your data is saved and connected to containers. It is important to have persistent storage because containers are ephemeral, which means when they are restarted or deleted, any saved data is lost. Therefore, persistent storage prevents any loss of data and allows the use of stateful applications.
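A container requests persistent storage through a PersistentVolumeClaim. As a minimal sketch (the claim name and size are illustrative), a database pod could claim storage like this and mount it at its data directory:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data               # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce           # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi           # illustrative size
```

The cluster binds the claim to a matching persistent volume; data written to the mounted volume survives pod restarts and rescheduling, which is what makes stateful applications possible.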
The last component is the routing layer. It provides external access to the applications in the cluster from any device. It also provides load balancing and auto-routing around unhealthy pods.
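In OpenShift, external access is typically configured with a Route. As an illustrative sketch (names are hypothetical), this exposes the service from the earlier example outside the cluster with TLS terminated at the router:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: storefront            # hypothetical route name
spec:
  to:
    kind: Service
    name: storefront          # the service that backs this route
  port:
    targetPort: 8080          # which service port to expose
  tls:
    termination: edge         # TLS is terminated at the router
```

The router assigns the route a public host name (or uses one you specify) and balances incoming requests across the service's healthy pods, routing around unhealthy ones.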
One of the major improvements in OpenShift 4 is that it is built on operators, which makes it unique. If you are new to OpenShift, you might be wondering what operators are and why they are important.
Usually, managing and maintaining a small number of containerized applications is not an issue, but at a scale, it can be a difficult task and leaves those applications vulnerable. An operator is a method of packaging, running, and maintaining Kubernetes-native applications. It extends the Kubernetes control plane and API to automate and streamline installation, updates, and management of container-based services. The entire OpenShift platform runs on operators, which means you can easily install or upgrade OpenShift itself. You can also install, manage, and update operators running on your cluster.
You can install operators from OperatorHub or build your own using the Operator SDK, which lets you build, test, and package operators. OperatorHub was introduced in OpenShift 4. It is a catalog of applications that can be installed by administrators and added to individual projects by developers. With OperatorHub, you can deploy integrations with IBM Cloud and Red Hat quickly. There are two types of operators that you can use from the OperatorHub: Community Operators and Certified Operators.
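Installing an operator from a catalog comes down to creating a Subscription. This is a hedged sketch, assuming a hypothetical operator package from a community catalog; the channel and names vary per operator:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator           # hypothetical subscription name
  namespace: openshift-operators
spec:
  channel: stable             # update channel published by the operator
  name: my-operator           # hypothetical package name in the catalog
  source: community-operators # catalog source backing OperatorHub
  sourceNamespace: openshift-marketplace
```

Once the Subscription exists, the Operator Lifecycle Manager installs the operator and keeps it updated along the chosen channel, which is the same mechanism OpenShift uses to manage its own platform components.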
If you’d like to explore more hands-on with operators on OpenShift, try out the Fun with OperatorHub tutorial.
In the next blog post in this series, I will introduce the roles of developer and administrator on the web console.