The Blog

 

Appy, it’s nice to meet you. I’ve been hearing some really great things about you. I understand you’re a bright young application with a lot of potential, and you’re well looked after by both your parents*. I’ve worked with them many times before, both Dev (also known as “the developer”), who dresses you up in the nicest features and interfaces, and Ops (a.k.a., “the operations team”) who’s a bit protective of you and doesn’t want you running out in the cold, cruel world without close monitoring.

Now, I’m sure you understand that both of your parents care about you. Dev just wants you to be your best self, to look good and get along with others. And Ops wants to keep a close eye on you because, like any parent, he doesn’t want you to suffer a crash that might result in you bleeding errors. Ops works hard to maintain an environment for you that is secure, robust, reliable, and resilient so that you can run 24/7.

Scaling: Growing up is hard

Your parents want the same things, but they don’t always see eye to eye. Dev and Ops work in different environments, and they’re used to following their own processes to achieve their own goals. And of course, neither of them wants to have to call you back home to deal with an issue that might affect other aspects of your life. You might not know this, but a small problem in one area can sometimes mean you have to start all over again with a full redeployment. And that’s not good.

So you’re getting older and becoming more stable — that’s great! As time goes by, you’re becoming better known. Your parents are doing a great job. But, uh oh, as you get more popular, you’re having more interactions, and there’s more and more traffic to the server where you live and run. What will the family do?

Your parents might need to move to a bigger home: they can either scale the servers vertically — that is, scale up — by adding more CPUs, memory, and other server components, or grow horizontally — scale out — by running multiple replicas of you. Well, you know that scaling vertically costs a lot of money, right? It doesn’t mean that you have to change any of your nice little features, but there’s a limit to how far your parents can scale up your servers.

So, ultimately, they decide to scale out instead. But now your servers are straining from the traffic, and they’re practically shaking from the ever-increasing loads. Dev and Ops decide that they’ll have to change your code, but it’s nearly impossible to scale out some of your components. There’s a lot of talk about “relational databases” and “sequential algorithms” and other important-sounding topics, but you still don’t really understand what the problem is. You just want to run.

And then they tell you that there might be a problem with you. If any of your components are unscalable, you as a whole become unscalable. They don’t want to do it, but they have a last resort: break you into pieces. Wait — what???

Some new friends — microservices!

But just when things are sounding grim, you start hearing about this cool new way to keep it together. And by “it,” I mean you. You’re hearing about something called microservices, and it sounds so cool, it’s hot! Your parents are pumped up, fist-pumping and high-fiving and being embarrassing the way only parents can. Anyway, they explain to you what microservices are, and that they will start splitting you into smaller, independently deployable components that can communicate with each other through APIs. And you think, okaaaay, but they make it sound like it’s going to be painless. Or at least less painful than the alternative.

But your eyebrows are furrowed — there’s something just a bit confusing here. From what they’re describing, you’re about to become … the new Jenga! You remember good ol’ Jenga, right? Where you have to take out a block or replace it with a new one without the rest of the tower falling down? Each block is like one of your components, and more blocks — microservices — can be added to you at will. It makes sense now. Your eyes are wide and you nod — you get the microservices hype. Scaling out or up is no longer an issue. You can scale out the components that allow it, and scale up the rest if needed.
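If you want to see what “communicating with each other through APIs” can look like, here’s a tiny sketch in Python that uses nothing but the standard library. The “greeting” service, its port, and its single endpoint are made up purely for illustration: one little piece of Appy answers HTTP requests, and another piece calls it over the network instead of sharing its code.

```python
# A minimal sketch of two of Appy's pieces talking over an HTTP API.
# The "greeting" service, its port, and its endpoint are illustrative only.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class GreetingService(BaseHTTPRequestHandler):
    """One small, independently deployable piece of Appy."""

    def do_GET(self):
        body = json.dumps({"greeting": "Hello from the greeting microservice!"})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))

    def log_message(self, format, *args):
        pass  # keep the sketch quiet


if __name__ == "__main__":
    # Start the service in the background, as if it were deployed on its own.
    server = HTTPServer(("localhost", 8080), GreetingService)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Another component talks to it through the API, not through shared code.
    with urllib.request.urlopen("http://localhost:8080/") as response:
        print(json.loads(response.read()))

    server.shutdown()
```

The important part is that each piece can be rebuilt, redeployed, or scaled on its own, as long as the API between them stays the same.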


For a while all is good, all is well. Your small components — your microservices — are working together as a team and communicating just fine; as time goes by, you grow up, and you scale up and out. But you’re growing fast — almost too fast. And your microservices are multiplying. Ops is finding you and all your microservices hard to control. Just imagine what’ll happen when a server fails.

Containers to the rescue

So, Appy, what’s to be done? Well, I’ve got just the right solution for you: Kubernetes. Kubernetes can assist Ops by automatically monitoring and rescheduling your microservices in case a server fails. Not only that, but it will enable Dev to deploy your microservices whenever necessary while requiring little to no assistance from Ops.
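To make that a little more concrete, here’s a rough sketch of the kind of request Dev might send using the official kubernetes Python client. The Deployment name appy, the default namespace, and the app=appy label are assumptions made up for this example, and the sketch presumes a cluster and a local kubeconfig already exist.

```python
# A rough sketch using the official `kubernetes` Python client
# (pip install kubernetes). The Deployment name "appy", the "default"
# namespace, and the "app=appy" label are assumptions for illustration.
from kubernetes import client, config

config.load_kube_config()   # read credentials from the local kubeconfig
apps = client.AppsV1Api()
core = client.CoreV1Api()

# Dev declares a desired state: three replicas of Appy. From here on,
# Kubernetes' controllers keep three running; if a server (node) fails,
# the missing replicas are rescheduled onto healthy nodes automatically.
apps.patch_namespaced_deployment_scale(
    name="appy",
    namespace="default",
    body={"spec": {"replicas": 3}},
)

# Ops can still peek at where the replicas ended up.
pods = core.list_namespaced_pod("default", label_selector="app=appy")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase, pod.spec.node_name)
```

Notice that Dev describes the state they want rather than the steps to get there; keeping reality in line with that description is Kubernetes’ job, which is exactly the monitoring and rescheduling Ops used to do by hand.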

This new environment simply abstracts away the actual hardware you were running on. How? Well, by using Linux container technologies to provide complete isolation for your microservices. But we’re getting ahead of ourselves, Appy. You need to learn a little more about containers first.

A fully grown-up application like you has many different components or services, and each component may need different — and possibly conflicting — library versions. When you were a small app with just a few components, it was completely fine to dedicate a virtual machine (VM) to each component and isolate it from the other components to avoid any dependency conflicts.

But now that your components are smaller and more numerous, it doesn’t make sense to run each of them on a separate VM; it’s like buying a new fish tank for each of your goldfish — not an efficient use of resources. It’s also a waste of time and effort for Ops to configure and manage each VM.

So what do we do? Right! We use Linux containers. They enable multiple services to run on the same machine while isolating them from each other like VMs do, but with much less overhead. Containers also expose a different environment to each of your services.
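We’ll save the full story of how that isolation works for next time, but here’s a quick, Linux-only peek at one piece of the machinery: namespaces. Every Linux process has a set of namespaces, which act as its own private view of things like hostnames, process IDs, networks, and mounts, and container runtimes build on them. The sketch below simply lists the namespaces of the current process; run it on the host and then inside a container, and most of the IDs will differ.

```python
# A minimal, Linux-only sketch: list the namespaces of the current process.
# Each entry under /proc/self/ns is a symlink whose target names the
# namespace type and its ID, e.g. "pid:[4026531836]".
import os

NS_DIR = "/proc/self/ns"

for name in sorted(os.listdir(NS_DIR)):
    print(f"{name:16s} -> {os.readlink(os.path.join(NS_DIR, name))}")
```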

Want to hear more about container advantages? If we run four VMs on a single machine, we have four operating systems running separately and sharing the resources of that machine. Under those VMs sit the host machine’s OS and a hypervisor, which divides the machine’s resources among the four VMs. But with containers, there is only the host machine’s OS. Again, much less overhead. Don’t you think so?

More to come

So, how is container isolation possible? Well, let me tell you. Wait, what? You say it’s your bedtime? I guess it is. I am getting tired too. Why don’t we talk in depth about containers and Kubernetes next time? And as an example, we can also talk about how IBM Cloud Kubernetes Service can help with the security, deployment, operation, scaling, and monitoring of your containerized microservices. I’m sure your parents will be delighted to know that you learned so much!

Note: I can hear some of the microservices pros tut-tutting and shaking their heads about all the omitted information. I didn’t want to scare little Appy here by going too deeply into the topic right from the get-go. Think of it as a lesson in moderation for the kid. Cheers!

* This post presents the story of a fictional family. The analogy that I use in this post and throughout this series is designed to improve the storytelling, explain concepts, and boost understanding. It is not meant as an endorsement of any specific family type or gender roles.


A previous version of this post was published on Medium.