Note: This blog post is part of a series.
Hey Appy! I’m glad you’re back. Before we get started, I wanted to share with you a poem I’ve been working on.
Ops teams were on their knees.
Ops teams were in tears.
But Kubernetes came in sight.
Kubernetes became their guide.
So, what’d you think? Why are you so interested in the ceiling all of a sudden? Hmm, I’ll take that as a sign to continue working on my poetry writing skills.
Anyway, our last conversation covered container technologies. Now it's time to dive into Kubernetes, don't you think?
What is Kubernetes?
Where do I begin? Let's start with the meaning of Kubernetes. Kubernetes is a Greek word that means helmsman. Now, why do we need it? As applications like you grow and the number of their components increases, the difficulty of configuring, managing, and running the whole system smoothly also increases. And since humans have always tried to automate difficult and repetitive tasks, Kubernetes was created.
Kubernetes is one of the many container orchestrators out there that run and manage containers. What does a container orchestrator do? It helps the operations team automatically monitor, scale, and reschedule containerized applications inside a cluster in the event of hardware failure. It enables containerized applications to run on any number of computer nodes as if all those nodes were a single, huge computer. That makes it a whole lot easier for both developers and operations teams to develop, deploy, and manage their applications. Your parents, Dev and Ops, would surely agree with this magic.
Next is the Kubernetes architecture. A Kubernetes cluster is a bunch of master nodes and worker nodes. A master node manages worker nodes. You can have one master node, or more than one if you want to provide high availability. The master nodes provide many cluster-wide system management services for the worker nodes, and the worker nodes handle our workload. However, we won't be interacting with them a lot (not directly, at least). To set the desired state of the cluster, you create objects using the Kubernetes API. You can use kubectl, the Kubernetes command-line interface, to do that.
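To make that concrete, here's a minimal sketch of what one of those API objects can look like as a YAML manifest. The name `my-app` and the image are made up for illustration; any containerized application of yours would work the same way:

```yaml
# pod.yaml — a hypothetical, minimal Pod object
apiVersion: v1
kind: Pod
metadata:
  name: my-app        # made-up name for illustration
  labels:
    app: my-app
spec:
  containers:
  - name: my-app
    image: nginx:1.25  # any container image works here
    ports:
    - containerPort: 80
```

Submitting this with `kubectl apply -f pod.yaml` asks the cluster to make the described state real; Kubernetes figures out where and how to run it.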
Here, let me draw you a picture:
The master node basically consists of four components:
- etcd is a distributed key-value data store. The declarative model is stored in etcd as objects. For example, if we say we want five instances of a certain container, that request is stored in the data store.
- Kubernetes controller manager watches the changes requested through the API server and attempts to move the current state of the cluster towards the desired state.
- Kubernetes API server validates and configures data for the API objects, which include pods, services, replication controllers, and more.
- Kubernetes scheduler takes charge of scheduling pods on nodes. It needs to consider a lot of information, including resource requirements, hardware/software constraints, and many other things.
Each worker node has two main processes running on it:
- kubelet is something like a node manager. The master node talks to the worker nodes through kubelet: it tells kubelet what to do, and kubelet in turn tells the pods what to do.
- kube-proxy is a network proxy that reflects Kubernetes networking services on each node. When a request comes from outside the cluster, kube-proxy routes it to the right pod, and the container inside that pod handles the request.
Now this picture has more details, but you can see how everything works together:
We use Kubernetes API objects to describe how our cluster should be, what applications to run in it, which container images to use, and how many of them should be running.
Get to know the following basic Kubernetes objects:
- Pods are the smallest deployable units of computing that can be created and managed in Kubernetes. The containers of an application run inside these pods.
- Service is an abstraction which defines a logical set of pods and how to access them. Pods are mortal and are created and destroyed dynamically when scaling, so for pods to communicate with each other reliably, we need services.
- Volume is an abstraction that solves two problems. The first problem is that all the files inside a container are lost when it crashes. The kubelet restarts the container, but it is a new container with a clean state. The second problem is that two containers running in the same pod often need to share files.
- Namespace lets you create multiple virtual clusters (called namespaces) backed by the same physical cluster. You use them in huge clusters with many users belonging to multiple teams or multiple projects.
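Here's a hedged sketch of how a service ties a set of pods together. The label `app: my-app` and all names are assumptions for illustration; the service selects whichever pods carry that label:

```yaml
# service.yaml — a hypothetical Service fronting a set of pods
apiVersion: v1
kind: Service
metadata:
  name: my-app-service   # made-up name
spec:
  selector:
    app: my-app          # matches any pod carrying this label
  ports:
  - protocol: TCP
    port: 80             # port the service exposes
    targetPort: 80       # port the pod's container listens on
```

Because the selector is evaluated continuously, traffic always reaches whichever pods exist at that moment, so the service survives individual pods being created and destroyed.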
Now, pay attention to the controllers, which build upon the basic objects to give us more control over the cluster and provide additional capabilities:
- ReplicaSet ensures that a specified number of replica pods is running at any given time.
- Deployment lets you define a desired state in a deployment object; the deployment controller then changes the current state to the desired state at a controlled rate.
- A DaemonSet ensures that all (or some) nodes run the specified pod. When more nodes are added, pods are added to them.
- Jobs create one or more pods, and once those pods run to successful completion, the job is marked as complete.
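Remember the example of wanting five instances of a certain container? A deployment is how you'd typically ask for that. This is a sketch under assumed names, not a definitive manifest:

```yaml
# deployment.yaml — a hypothetical Deployment keeping five replicas alive
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment  # made-up name
spec:
  replicas: 5              # desired state: five pods at all times
  selector:
    matchLabels:
      app: my-app
  template:                # pod template used to create the replicas
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25  # any container image works here
```

Under the hood, the deployment controller creates a ReplicaSet, which keeps five pods running; if a node fails, replacement pods are scheduled on healthy nodes.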
Okay, I know that was a lot of information. But, guess what? This is far from enough to get a full idea of Kubernetes! However, it is enough to get you off to a good start. Don’t you agree, Appy?
Hey! Are you dozing off? Wow, maybe lectures aren’t your thing. Let’s try a hands-on approach.
Lucky for you, there are several labs that can help you understand the core concepts of Kubernetes that I just described. These labs only require you to have an IBM Cloud account. You can then create a free Kubernetes cluster to play with.
I hope you tell your parents, Dev and Ops, what you have learned so far from our meeting today and from the labs after you complete them. I am sure both of them will be happy to hear about it. Don’t forget to deliver my regards as well.
Have a great adventure, Appy! I know you’re off to a good start.
A previous version of this post was published on Medium.