by Erik Kappelman | Updated April 4, 2018 - Published April 3, 2018
If you are following the progression of container-based development and deployment—and I hope you are—then you know that these developments have already fundamentally changed how applications are created and served, and will continue to do so. This article explores containers, container orchestration, Kubernetes, the hybrid cloud, and the tools that make all of this work.
To better understand Kubernetes and the hybrid cloud, we need to start by focusing on what containers are and how they fit into the broader picture. A container is an isolated user-space environment, and it can be small or large. Containerization, also called operating-system-level virtualization, lets a single operating system create multiple such environments, or containers, which can be unique or redundant and are kept separate from one another. These environments are autonomous and unaware of one another. To the software that runs inside a container, the container is just another machine. You may think this sounds like a virtual machine, and you wouldn’t be wrong.
A container is often described as a sort of virtual machine. There is some truth to this, and it is a good cursory description. However, there are important differences between containers and virtual machines.
An instance of a virtual machine is created by a hypervisor, a piece of software that links a guest operating system to the hardware of the host machine. This allows a computer to run operating systems that would not otherwise be installed on it. For instance, a hypervisor lets a Mac run Windows, or a Windows machine run Linux. This can be advantageous for many reasons, such as expanding the usefulness of existing hardware or, more commonly, providing a safe, isolated development environment, often referred to as a sandbox.
Containers, on the other hand, are neither intended nor able to run an operating system other than the host’s, because they share the host machine’s kernel. (There are workarounds that let containers present a different operating system, but that is beside the point.) Okay, you now know what containers are, but why are they useful again? Well, containers are primarily used to allocate resources and enhance security.
Containers enhance security because they are partitioned from one another and from the rest of the operating system. In a traditional client-server relationship, the server controls what the client can and cannot access, and what happens when the client accesses, or tries to access, different parts of the server. The server has control, but the client or program being executed can still interact with any part of the server system if it can convince the server to allow it to do so. Clearly, this can be problematic, and it has, in a variety of ways, contributed to a long list of internet security breaches.
Container software enables the creation of a system in which clients interact with the server in a unique, partitioned, and totally controlled environment. This keeps clients from interfering with the server and with one another, and it is especially helpful in the age of cloud computing. Before containers, if you were running a web server in a cloud setting, the cloud provider would likely operate very large servers whose hypervisor hosted one or more guest operating systems, each running customer applications such as yours. One of the problems with this setup is that someone else’s security flaws can become your security flaws. Your app may be locked down and totally safe, but another app may be unsecured, low-hanging fruit for your friendly neighborhood hacker. If that app is compromised, a skilled intruder could gain access to the hypervisor, and every application running on the server would be at risk.
The other primary benefit of containers is their ability to allocate resources effectively. This is true for both individuals and teams using cloud technologies like PaaS, and containers also help the companies that provide these cloud technologies. For example, if you used a cloud service to host an app you created that lets individuals write code in a pseudo-production environment, each user could have their own dedicated container with as many or as few resources as they need. The cloud service provider, in turn, could use containers to replicate and back up your app on their servers, so that a failure there would not result in a break in service. They could also easily vary the volume of resources each user can access. So containers effectively manage resources, oftentimes at multiple levels of the deployment-hosting chain.
In our example of the cloud service provider, it is easy to see how using containers to hold each user’s applications, or different parts of a user’s application, could quickly become enormously complicated. This is not a job for a human being; it should be automated. In general, container orchestration is the automation of how, when, and where containers are created and used within a server. A container orchestration system might connect hosts to containers in an advantageous way, initiate and destroy containers as necessary, and link containers together when appropriate.
Part of the power of containers is their lightweight, disposable nature: an app can be made up of hundreds or thousands of containers. This is another important distinction between a virtual machine and a container. A machine instance on a hypervisor contains an entire operating system. Containers can’t run an entire operating system; they simply run the parts of the operating system that are needed to perform their task. In one common application of containers, each container performs a small, distinct, and useful action or service. This is referred to as a microservice architecture—each container performs a very small, or micro, service. If this isn’t orchestrated correctly and efficiently, the power of using containers in this way is lost. Without effective container orchestration, there is hardly a reason to use containers at all.
Kubernetes (from the Greek for “helmsman” or “pilot”) is a widely used, feature-rich container orchestration framework. Let’s take a look at its specific elements. In the most basic sense, the Kubernetes framework (and any container orchestration tool, for that matter) consists of clusters and pods.

A cluster is a group of one or more physical servers or virtual machines, called nodes, on which the containers actually run. Clusters are resources that need to be allocated efficiently. Each cluster has a master that is in charge of everything that takes place in that cluster, such as scheduling and maintaining different application states. The master’s job is to make sure the cluster runs efficiently, and often redundantly in case of failure.

Clusters are run using deployments. Deployments are essentially instruction manuals for the creation of pods and are used by the master in a cluster. Pods are where an application, or a set of applications consisting of containers, lives. Depending on the pod’s needs, the containers in it can easily communicate and share information, and they can also share resources within the pod, such as a database. Pods are very useful when applications share data sources or services: lumping those applications into one pod lets them use shared resources, which increases efficiency. A deployment tells the master how many replicas of a pod are desired, which containers are in the pod, and information about those containers, among other things. Services connect this whole ball of wax to the outside world, specifying which pods are available and where they are available from (which port). This is what makes up the basic Kubernetes framework.
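The pieces just described map directly onto Kubernetes manifests. As a minimal sketch (the names `web`, the `nginx` image, and the port numbers are illustrative, not from any particular system), a Deployment tells the master how many replicas of a pod to keep running and which containers each pod holds, and a Service exposes those pods on a port:

```yaml
# Deployment: an "instruction manual" for pods -- the master keeps
# three replicas of this pod running at all times.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.15   # illustrative container image
        ports:
        - containerPort: 80
---
# Service: connects the pods to the outside world on a port.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
```

If a node fails or a pod dies, the master notices that fewer than three replicas exist and schedules replacements, which is the redundancy-on-failure behavior described above.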
One of the drawbacks of Kubernetes is that building a homespun deployment framework is difficult because of the many operational challenges involved. So cloud service providers now incorporate Kubernetes into their services. For instance, IBM recently added Kubernetes to its cloud offerings. Using the IBM Cloud Container Service, you can now deploy your apps into a Kubernetes environment with relative ease.
Using IBM’s online interface, a cluster can be created and applications can be deployed to this cluster using Helm charts. Helm charts are blueprints for Kubernetes-based software that make it easier to deploy, upgrade, and manage Kubernetes-based applications. Once they are created and deployed, these applications can then be managed from the Kubernetes dashboard, which is where you get access to all of the advanced functionality Kubernetes has to offer.
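A Helm chart is essentially a directory with a small amount of required metadata plus templated manifests. As a rough sketch (the chart name and values below are invented for illustration), the two files at the heart of a chart are `Chart.yaml`, which identifies the chart, and `values.yaml`, which holds the settings a user can override at install or upgrade time:

```yaml
# Chart.yaml -- identifies the chart itself
apiVersion: v1
name: acme-widgets
version: 0.1.0
description: A sketch of a chart for the ACME Widgets web app
---
# values.yaml -- defaults that the chart's templates reference;
# users can override these when installing or upgrading a release
replicaCount: 3
image:
  repository: nginx
  tag: "1.15"
service:
  type: NodePort
  port: 80
```

Upgrading an application then becomes a matter of publishing a new chart version or overriding values, rather than hand-editing each Kubernetes manifest.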
Kubernetes is one of the tools being leveraged in the creation of IBM’s new hybrid cloud offering. But hold on, what’s a hybrid cloud? To put it simply, a hybrid cloud combines private cloud structures with public cloud structures in order to maximize efficiency.
Let’s envision a simple private/public cloud setup that would qualify as a hybrid cloud. Your company, ACME Widgets, has traditionally used in-house servers to offer its online services. You want to move to cloud deployment for myriad reasons that will help your business, but you don’t want to move everything out of house: some of the data kept on your internal servers is sensitive, or perhaps you have specialized hardware for specific processes that are better kept in house. There is nothing new about this arrangement; companies have run public and private clouds side by side before. What Kubernetes adds is the ability for these clouds to actually work together while remaining completely separate. This is known as the hybrid cloud. Previously, these clouds would have been separate entities with no ready-made tools to help them work together. The Kubernetes Federation tool makes the difference.
Federation is the tool used to manage multiple clusters, which can be located in the cloud or on an in-house server. Each cluster simply needs to be set up and then registered with the Federation API server. Federation can then help system administrators optimize the use of their available resources and keep them secure. If you have multiple data centers, you might want to send customers to whichever server has the lowest latency at that moment; Federation makes this easy. Or say 90% of an online service you provide can safely live in the cloud, but 10% needs to stay on your in-house servers. Federation allows systems like this to form easily.
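What registration buys you is the ability to describe placement declaratively. The exact resource shapes have changed across Federation versions, so treat the following as a rough sketch in the later KubeFed style (the cluster names and the abbreviated deployment template are invented): a federated deployment embeds an ordinary Deployment template and then lists which registered clusters, in-house or cloud, should run it:

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: web
  namespace: acme
spec:
  template:             # an ordinary Deployment spec goes here
    spec:
      replicas: 3
      # ...container spec omitted for brevity
  placement:
    clusters:           # which registered clusters run this workload
    - name: onprem-cluster   # the sensitive slice stays in house
    - name: cloud-cluster    # the rest lives in the public cloud
```

Adjusting the 90/10 split in the ACME Widgets example then amounts to editing the placement list rather than redeploying to each environment by hand.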
Where this leads is really anybody’s guess, but I will give you a few predictions. I believe we are moving toward some kind of internet singularity. (I know people have been saying this forever, but it will eventually happen.) By this I mean there has been a great deal of advancement in the ways that technology service providers can build, deploy, and manage applications and software, despite the fact that the underlying physical structure is still based on the client-network-server framework. It seems like we are on the cusp of something really different and have been for a while. I believe the hybrid cloud is the beginning of this process. If we can now split an application into as many little parts as we like, and then distribute that application among as many different servers and clouds as we like—all of which can be public or private—can that still be considered a client-server-network framework? The underlying infrastructure is still essentially the same as it always has been, but we are beginning to use this structure in a manner so far removed from the way it was originally used that it is almost as if the structure itself has changed.
I believe these changes may lead to a fundamental reorganization of the internet’s physical infrastructure away from the client-network-server paradigm toward a paradigm that better supports the tools and methods now being used in development. In practical terms, this should all lead to faster and more secure sharing of data across the internet. That means more opportunities for businesses and individuals to use the internet to enrich themselves intellectually, artistically, and economically. Whatever happens, containers, container orchestration software, and the hybrid cloud are here to stay … at least until we come up with something better.