Containers and microservices — a perfect pair

In Part 1 of this series, I talked about what exactly microservices are and how they differ from traditionally built systems (monoliths). This second article in the series is about the power of containers — how they are revolutionizing software development and powering microservices to shift an entire industry.

In this article, I’ll touch on three concepts that are critical for you to focus on when adopting container-based infrastructure for your microservice-based applications:

  • Logging and monitoring
  • Zero-downtime continuous delivery
  • Dynamic service registries

I’ll start with an overview of containers, container managers, and how containers relate to microservices.

Containers and microservices really are a perfect pair

Unless you’re completely new to cloud technology and cloud-native application development, you’ve probably heard of Linux containers and the container-based projects that have caught fire over the past couple of years. But in case you haven’t, think of Linux containers as lightweight virtual machines that can be used more flexibly, integrated more rapidly, and distributed much more easily. One of the projects leading this charge is Docker. Since its launch in 2013, the Docker team (and now company) has provided a very simple way to build, package, and distribute cloud-native applications via Linux containers.

How do containers differ from virtual machines? A virtual machine runs its own guest operating system instance and packages its own libraries and binaries. Containers are isolated but share the underlying host OS (and, where appropriate, binaries and libraries), packaging only what the application itself needs.

Containers consume a minimal set of resources on top of Linux systems, and the packaged application is often no more than a few hundred megabytes. Virtual machine-based applications are often orders of magnitude larger (tens of gigabytes). You can easily see how containers fit into the microservice paradigm, being smaller and faster, two of the microservice tenets from Part 1.

Many industry leaders are moving to container-based infrastructures, both in the cloud and on-premises, for extreme gains. One of the key gains is that Docker and other similar Linux container technologies are easy to integrate into continuous-integration and continuous-delivery pipelines. On average, Docker users ship software seven times more frequently, according to a recent self-funded Docker study. Companies like Gilt Groupe have embraced microservices and containerized infrastructure, shipping software sometimes as often as 100 times a day. The ability to push code changes quickly, automatically rebuild Docker images that are of minimal size, and manage a large number of these deployed images from a common code base results in impressive speed through a company’s delivery pipeline.

One of the other benefits of Docker containers is the portability of these packaged applications, called Docker images. Docker images can be moved seamlessly across environments and through build pipelines. For example, BBC News (a division of the British Broadcasting Corp.) says that its continuous-integration jobs run more than 60 percent faster in a Docker-based infrastructure.

The ability to move the same code throughout the delivery pipeline, with minimal software configuration at each stage and predictable hardware resource requirements along the way, moves applications through development, test, and production faster than ever before. Companies see these gains in efficiency because their system components are modularized inside each Docker image. You don’t need to configure the software each time you need it. You simply start a container instance, and it’s ready to go.
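To make that concrete, here is a minimal sketch of starting a prebuilt image with the Docker SDK for Python; the image name, port mapping, and environment variable are hypothetical placeholders:

```python
# A minimal sketch using the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Start a container from a prebuilt image. No per-environment software
# configuration is needed because everything is already baked into the image.
container = client.containers.run(
    "registry.example.com/catalog-service:1.4.2",  # hypothetical image
    detach=True,                       # return immediately; run in the background
    ports={"8080/tcp": 8080},          # expose the service port on the host
    environment={"LOG_LEVEL": "info"}, # runtime settings stay outside the image
)

print(container.short_id, container.status)
```

The same image, unchanged, can be started this way in development, test, and production.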

Docker is a shipping container system for code that makes software development and delivery through Linux containers easy. Docker acts as an engine that enables any payload to be encapsulated as a lightweight, portable, self-sufficient container. Such containers can be manipulated using standard operations and run consistently on virtually any hardware platform.

If you’re new to containers and Docker, review the Get started with containers content. If you’re experienced with Docker and want to get hands-on with Docker in the cloud, the IBM Cloud Kubernetes Service is an enterprise-grade container service that you can get started with today. You can have your application running in Docker containers faster than ever.

Faster and smaller: Containers as the nanobots of software development

You can begin to see why containers are so important to microservices — they are one of the key enabling technologies for the architectural style. You can also begin to see that the management of containers is equally important.

As you know from Part 1, instead of scaling up, we scale out in microservices. Instead of adding more RAM to a microservice runtime, we simply get another microservice runtime of the same kind. Need even more RAM? Get a third instance. This approach is fine when you have only a couple of services with one container instance each, but as anyone with computer skills and an extended family knows, it can get out of hand quickly when you’re remotely managing dozens of servers.

Think about how quickly you will need to manage more than 100 individual instances. If you start out with a handful of microservices that make up your app (five or six, say), each of those should have at least three container instances supporting it. Right off the bat, you’re at 15 to 18 container instances. Add another microservice or two, or let a successful app push certain services up to five or 10 instances each, and you’re easily approaching more than 100 container instances to manage, and that’s on a good day.

Thankfully, many open source projects can handle this exact need. For example, Kubernetes, Red Hat OpenShift, Apache Mesos, and the many container orchestration options from cloud providers make it easy to manage thousands of container instances from a single console or command line, using an infrastructure-based domain-specific language.
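For example, scaling a service out to more instances with one of these orchestrators can be a single API call. Here is a hedged sketch using the official Kubernetes Python client; the deployment and namespace names are hypothetical:

```python
# A sketch of scaling out with the Kubernetes Python client (pip install kubernetes).
from kubernetes import client, config

config.load_kube_config()   # use the credentials in your local kubeconfig
apps = client.AppsV1Api()

# Scale the (hypothetical) catalog-service deployment to 10 replicas; the
# orchestrator schedules the extra container instances across the cluster.
apps.patch_namespaced_deployment_scale(
    name="catalog-service",
    namespace="production",
    body={"spec": {"replicas": 10}},
)
```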

All of the containers that we deploy through our continuous-integration/continuous-delivery processes are immutable. Once they are deployed, you can’t change them. Instead, if you need a change or an update, you spin up a new cluster of containers with the correct updates applied and tear down the old ones. Containers make it possible to integrate your delivery pipelines and image registries to quickly and easily manage all phases of your infrastructure.
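In practice an orchestrator handles this replacement for you, but the manual equivalent is easy to see with the Docker SDK for Python: pull the updated image, then replace the old container rather than modifying it. The registry, image tag, and container name below are hypothetical:

```python
# Immutable infrastructure, sketched with the Docker SDK for Python:
# never patch a running container; pull a new image and swap the container.
import docker

client = docker.from_env()

# Pull the updated, rebuilt image produced by the CI/CD pipeline.
new_image = client.images.pull("registry.example.com/catalog-service", tag="1.4.3")

# Tear down the old container...
old = client.containers.get("catalog-service")
old.stop()
old.remove()

# ...and start a fresh one from the new image.
client.containers.run(
    new_image.tags[0],
    name="catalog-service",
    detach=True,
    ports={"8080/tcp": 8080},
)
```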

As an aside, it should be noted that although certain container services still use virtual machines as Docker hosts, those VMs should be regarded as disposable rather than long-lived. They bring more resources and integrated management capabilities, but their lifecycle is still dynamically managed by the needs of the container-based workloads.

The meshing of microservices

So far, I’ve talked about why more containers are better and how to handle a generic infrastructure at scale. Now that you’re comfortable with the concept of developing for containers, you need to start thinking about developing your applications and putting them into production.

This brings me to the three key elements that are crucial to microservice-based application development, which are supported by service meshes:

  • Logging and monitoring
  • Zero-downtime continuous delivery
  • Dynamic service registries

You want to think about each of these capabilities from day one, but without necessarily solving for them immediately.

As I discuss these capabilities, you’ll also see why an integrated service mesh built for microservices, such as Istio, Red Hat Service Mesh, and others, makes the management of your microservices architectures that much easier. A service mesh gives you explicit control over how you connect, manage, and observe microservices while staying out of your application code: low-overhead proxy containers are deployed automatically alongside your microservice containers and handle all the necessary dirty work of seamless, secure integration.

Logging and monitoring

If you provide production-level support for applications and services, your first question should always be, “What do I do when something goes wrong?” Notice that there’s not even a hint of an “if” in that question. Components will fail, versions will change, third-party services will have outages. How can you maintain a level of sanity, along with a desired level of uptime for your users? That’s question one.

As I said earlier, you want your containers to be immutable. For this reason, most IT organizations don’t provide system-level access to container instances — no SSH, no console, no nothing. So how are you supposed to know what’s going on inside a black box that you can’t change? That’s question two.

Thankfully, this question has been solved with the concept of an ELK stack: Elasticsearch, Logstash, and Kibana. Together, these three components let you aggregate logs, run free-form searches across the aggregated logs, and create and share dashboards based on logs and monitored activity across the platform. This is a great capability, much better than logging into individual machines and running a sysadmin toolbox of sed, grep, and awk. You have full-featured access to a central repository of all of your logs, and you can correlate events across systems and microservices by following the event and correlation IDs that travel through your system, so you can see where similar issues occur.
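As a hedged sketch, here is how a query against that central log repository might look with the Elasticsearch Python client (recent versions of the client accept these keyword arguments); the index pattern, field names, and request ID are hypothetical:

```python
# Correlating events across microservices by request ID in Elasticsearch.
from elasticsearch import Elasticsearch  # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")

# Find every log line, from any service, that carries the same request ID,
# ordered by time so the request's path through the system is easy to follow.
resp = es.search(
    index="logs-*",
    query={"match": {"request_id": "7f3c9a2e"}},
    sort=[{"@timestamp": {"order": "asc"}}],
    size=100,
)

for hit in resp["hits"]["hits"]:
    doc = hit["_source"]
    print(doc["@timestamp"], doc.get("service"), doc.get("message"))
```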

You can integrate with ELK stacks in many ways, whether you’re running in the cloud or on-premises: via hosted services, open source variants, and options built into platform-as-a-service offerings such as the IBM Cloud Kubernetes Service. Inside the container runtime on IBM Cloud, you have access to a full-featured, multitenant ELK stack that automatically receives the logs from your Docker container-based runtimes, giving you visibility into and searchability across those runtime events, along with a preconfigured Kibana-based dashboard out of the box. If you’re looking to get started with containers, this is one key capability that makes IBM Cloud a preeminent choice for your container-based microservices.

Zero-downtime continuous delivery

Now that you’re comfortable that you’ll have an idea of what to do when the sky starts falling (of course you hope it never does), you can move on to rapidly deploying all of your amazing application updates. Thinking about some of the larger companies that are deploying applications dozens, hundreds, or thousands of times a week, you have to wonder about downtime. Surely those companies aren’t having application outages every time they push a new version — if they did, they’d never be up.

Those companies have come to master zero-downtime deployment. It means your application is always available, no matter what, even with frequent updates, because website outages caused by planned downtime aren’t good for you or your users.

Avoiding planned downtime with the enterprise monoliths of yesterday was time-consuming, expensive, and exhausting for everyone involved. Bringing new monolith versions up and old versions down quickly became untenable with even the slightest bit of growth or change.

With microservices and container-based applications, these worries are minimized, if not removed entirely, especially with some key services available on IBM Cloud today. By breaking your components down into much smaller pieces, you can deploy each one with minimal impact to the overall system, keeping enough instances of every service running at all times to prevent outages. Smaller teams can manage more applications more efficiently, because the platform automatically keeps each deployed version up, which costs less in both human attention and compute resources.
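On a Kubernetes-based platform, a rolling update is one common way to get this behavior: the platform brings new containers up, waits for them to report healthy, and only then drains the old ones. Here is a hedged sketch using the Kubernetes Python client; the deployment, container, and image names are hypothetical:

```python
# A zero-downtime rolling update, sketched with the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Point the (hypothetical) deployment at the new image tag. The default
# RollingUpdate strategy starts new pods before terminating old ones, so the
# service keeps answering requests throughout the rollout.
apps.patch_namespaced_deployment(
    name="catalog-service",
    namespace="production",
    body={
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {
                            "name": "catalog-service",
                            "image": "registry.example.com/catalog-service:1.4.3",
                        }
                    ]
                }
            }
        }
    },
)
```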

Dynamic service registries

Dynamic service registries include concepts that you might already know as service discovery or service proxy. These two concepts are by no means the same, but they’re close enough to cover under this one approach.

Now that you’ll be creating thousands of container instances that back your microservices across all of your applications, how will the other components in your application understand what’s going on? How will they know what other microservices they have available to make service calls to? How will they respond to service calls being made to them?

The difference between service discovery and service proxy comes down to whether you want to do the search yourself or let someone else handle it for you. As an analogy, suppose you need a shipping service. Do you want to look up services from a single provider, such as the US Postal Service (USPS), or from a multiservice clearinghouse, such as Staples?

For example, if I want to ship a package, I can go to the USPS website, enter a couple of parameters for the kind of post office I am looking for, get back a list of options, pick one, and go there to ship my package. This is the concept of service discovery: I use a well-known registry of available service endpoints that is updated as services come online and go offline. Through REST APIs, calling applications can query a service-discovery service to determine which services, and how many instances of each, are available for a specific service call, down to a specific version of the requested service.

If I don’t want to be the one to choose where I go to ship my package — I just want to say, “Get this package to what it says on the address label” — I can take it to a Staples. Staples then chooses the most cost-effective shipping option for my package based on all the information I’ve provided. I’m taking the package to a well-known service provider and letting it handle the routing of my package for me. This is the concept of service proxying: You make a call to a well-known endpoint, and that call is automatically forwarded, based on pre-established rules or metadata, to a backing service that provides the actual response to the service request.

There are good arguments for preferring service discovery over service proxy and vice versa, but it really comes down to preference or implementation experience and requirements. Several open source service-discovery offerings, such as etcd and Consul, provide distributed service registries with which you can register, tag, and heartbeat all of your available service instances. There are even cool projects like Registrator that will automatically register your services to one of these endpoints as soon as the Docker container is created.
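As a hedged sketch of that registration-and-lookup flow, here is what it might look like against Consul using the python-consul package; the service name, addresses, and health-check URL are hypothetical:

```python
# Service discovery with Consul via python-consul (pip install python-consul).
import consul

c = consul.Consul(host="localhost", port=8500)

# Register this instance of the (hypothetical) catalog service, with a tag
# and an HTTP health check that acts as its heartbeat.
c.agent.service.register(
    name="catalog-service",
    service_id="catalog-service-1",
    address="10.0.0.12",
    port=8080,
    tags=["v1"],
    check=consul.Check.http("http://10.0.0.12:8080/health", interval="10s"),
)

# A calling application can then ask the registry for healthy instances.
index, nodes = c.health.service("catalog-service", passing=True)
for node in nodes:
    svc = node["Service"]
    print(svc["Address"], svc["Port"], svc["Tags"])
```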

A key argument made for the service-proxy approach by some of the more popular service-proxy projects is that, in the long run, it requires fewer network hops to reach your eventual backing service. One of the largest and most popular of these projects is Netflix’s Hystrix. Hystrix is a Java-based library that goes above and beyond simple service proxying, adding a number of quality-of-service improvements on top of the service-proxy pattern.
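Hystrix itself is a Java library, so the following is only a conceptual Python sketch of the pattern it supports: callers hit one well-known endpoint, the proxy picks a backing instance, and a crude circuit breaker returns a fallback response when calls keep failing. The backend addresses, threshold, and fallback payload are hypothetical:

```python
# A conceptual service-proxy sketch with a minimal circuit breaker (not Hystrix).
import random
import requests  # pip install requests

BACKENDS = ["http://10.0.0.12:8080", "http://10.0.0.13:8080"]  # hypothetical instances
FAILURE_THRESHOLD = 3
failures = 0

def call_catalog(path):
    """Forward a request to one backing instance on the caller's behalf."""
    global failures
    if failures >= FAILURE_THRESHOLD:
        return {"items": [], "degraded": True}   # circuit open: serve a fallback
    backend = random.choice(BACKENDS)            # trivial load balancing
    try:
        resp = requests.get(backend + path, timeout=2)
        resp.raise_for_status()
        failures = 0                             # a healthy call closes the circuit
        return resp.json()
    except requests.RequestException:
        failures += 1                            # count failures toward the threshold
        return {"items": [], "degraded": True}
```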

Obviously, both of these patterns are important in automagically managing your service instances and making them part of your available microservices infrastructure. Imagine registering all your container instances automatically, whether you’re spinning them up manually, pushing them through the IBM Cloud Delivery Pipeline, or integrating with an on-premises tool like Jenkins.

Conclusion

You’ve seen here how containers are accelerating the push to cloud-native application development. Smaller, faster application components can be delivered faster and more easily than ever, with more built-in management capability than ever. Deploying containers on a managed service, such as IBM Cloud Kubernetes Service, along with all the supporting microservice capabilities — including logging and monitoring, active deploy, and service discovery — makes it easy to take that next step away from your existing monoliths and into the cloud.

We are moving toward smarter, modular systems that are more automated and more integrated from start to finish. These systems will evolve at their own pace, aware of what is available in the infrastructure, and will make it known when something is missing or needs to change. All of these capabilities and more will be covered in upcoming installments of this series.