
Modernizing your applications with containers and microservices

Back in 2015, I started a microservices article series that used the metaphor of “smaller, faster, stronger” to conceptualize why microservices and containers were seeing such a boom in adoption and being discussed across the entire technology landscape. Since that time, those drivers have only grown stronger and more ingrained in the daily lives of cloud native developers and application modernization architects alike.

We’re not going to dive back into the depths of what makes microservices great, as much of that has not changed since my original article. What we will cover in this follow-up article is some of the technology that has evolved in the years since, making it that much easier to implement microservices-based applications.

Shifting Landscapes

By and large, Kubernetes has won the day when it comes to next-generation, cloud native, platform-as-a-service (PaaS) capabilities. Kubernetes is now the de facto containerization platform for microservices and modern-day application development, with every major cloud provider offering a managed Kubernetes service, including Amazon Elastic Kubernetes Service (EKS), Microsoft Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), and IBM Cloud Kubernetes Service.

Red Hat OpenShift is also available as a managed service offering, including Red Hat OpenShift on IBM Cloud, Azure Red Hat OpenShift, and Red Hat OpenShift Service on AWS (ROSA).

Other container orchestration options still exist, such as Docker Swarm, Apache Mesos, and HashiCorp Nomad, but they tend to serve more specific edge cases or niche uses.

With the introduction of the Operator Framework by Red Hat, the Kubernetes platform’s rate of adoption, breadth of capability, and overall utility accelerated massively: operators let you manage complex application lifecycle requirements, with minimal effort and overhead, through a few YAML documents applied to a given cluster. Users can now manage the Day 2 operations of complex application dependencies about as easily as they would write their first “Hello World!” program in a new language.
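To make that concrete, here is a minimal sketch of what “a few YAML documents” looks like in practice, using an entirely hypothetical database operator (the databases.example.com group, the PostgresCluster kind, and every field in it are illustrative, not a real operator’s API):

```yaml
# Hypothetical custom resource: an operator watching for objects of
# this kind would handle installation, scaling, backups, and upgrades.
apiVersion: databases.example.com/v1alpha1
kind: PostgresCluster
metadata:
  name: orders-db
spec:
  replicas: 3             # the operator maintains three database instances
  version: "13"           # and handles upgrades when this value changes
  backup:
    schedule: "0 2 * * *" # nightly backups, managed for you
```

Applying this one document to the cluster is the entire user-facing surface; the operator encodes all of the Day 2 operational knowledge behind it.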

With the shift to Kubernetes, away from Cloud Foundry and custom container management solutions, the need for the bespoke service discovery and service proxy capabilities that I covered in part 2 of this article series was eliminated. Kubernetes provides built-in service discovery and service proxy mechanisms: a developer deploys a container (wrapped in a pod) and exposes it via a Service construct. Kubernetes then natively proxies traffic from the Service to the pods behind it and provides DNS entries that give dependent services a stable name to route to, replacing the need for traditional service discovery.
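As a sketch of how this works, the following pairs a Deployment with a Service (the catalog name and image are hypothetical). Any pod in the cluster can then reach the app at http://catalog (or catalog.&lt;namespace&gt;.svc.cluster.local), with Kubernetes proxying and load-balancing requests across the pods behind it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog
spec:
  replicas: 2
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: example/catalog:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: catalog        # this name becomes the DNS entry
spec:
  selector:
    app: catalog       # routes to any pod carrying this label
  ports:
    - port: 80
      targetPort: 8080
```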

Now that we’ve covered the big nautical-themed elephant in the room (‘Kubernetes’ originates from the ancient Greek word for ‘helmsman’), let’s move on to some of the more distinct changes in the six years since the last article in this series was written and see what some of the latest improvements in microservices and containers have been.

What’s New

Here, we’re going to dive deeper into the “smaller, faster, stronger” metaphor and see why these technology patterns and products are important to the advancement of microservices and why they’ve been adopted so quickly!

Quarkus

Quarkus is a project focused on smaller, leaner, more efficient in-container runtimes, with “the mission of making Java the preferred framework for Kubernetes-native development” (as its home page says). Quarkus is the result of multiple iterations of this kind of container-first development, and it works to bring the excellent developer experience of Cloud Foundry to Kubernetes through new developer tools and optimized performance. It narrows the gap between a developer writing code and having that code running in a container on Kubernetes, following best practices without writing explicit deployment YAML.

Quarkus is built to leverage traditional JVM deployment patterns, but it also has the option to build native executables using GraalVM, a universal virtual machine that can compile JVM bytecode into a native executable. What this means, and why everyone is so excited about Quarkus, is that you get to code with the power of Java and then deploy with the minimal footprint and enhanced performance of a native executable.
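For a feel of how little code is involved, here is a minimal sketch of a Quarkus REST endpoint (the path and class name are illustrative; newer Quarkus releases use jakarta.ws.rs imports in place of javax.ws.rs):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A complete Quarkus microservice: no main method and no server
// bootstrap code; Quarkus wires the endpoint up at build time.
@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello from Quarkus";
    }
}
```

The same class runs on the JVM during development and, in a project generated from the standard Quarkus Maven archetype, can be compiled into a GraalVM native executable with ./mvnw package -Pnative.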

It doesn’t get much better than smaller and faster with the adoption of one project!

Reactive microservices

One of the reasons that explicit service discovery and service proxy capabilities were so important in the early days of microservices is that the vast majority of microservices were written with REST-based API interfaces, which require synchronous, point-to-point communication between services. If a system couldn’t find the dependent services in its call chain, it couldn’t do its job (much like you can’t have a productive conversation with just yourself… unless you’re doing some rubber duck debugging).

The shift away from this standard REST-based application landscape began with the Reactive Manifesto in 2013, which defined systems by four higher-level characteristics: responsive, elastic, resilient, and message-driven. The intent of the manifesto wasn’t to explicitly eliminate REST-based systems, but to address a number of architectural concerns that REST-based systems presented in terms of responsiveness, elasticity, and resiliency, concerns stemming strictly from their synchronous nature.

Allowing systems to communicate asynchronously, using message-driven protocols, let microservices achieve much larger scale, much higher resiliency, and more manageable growth. The broad adoption of reactive microservices also led to the crystallization of what exactly event-driven architectures are, how they differ from traditional systems, and how important it is to integrate highly resilient, high-throughput, cloud-native messaging systems into modern-day microservices architectures.

In other words, the Reactive Manifesto led to a movement of building stronger microservices from day zero!
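As a sketch of what message-driven looks like in code, here is a reactive microservice using MicroProfile Reactive Messaging (the API that frameworks such as Quarkus ship for this purpose); the channel names are illustrative, and mapping them to an actual broker such as Kafka happens in configuration, not in code:

```java
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

// A message-driven microservice: it never calls another service
// directly; it simply reacts to messages arriving on a channel.
@ApplicationScoped
public class PriceConverter {

    @Incoming("prices")             // consume from the "prices" channel
    @Outgoing("converted-prices")   // publish results downstream
    public double convert(int priceInCents) {
        return priceInCents / 100.0;
    }
}
```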

Service mesh

Istio is an open source service mesh that extends Kubernetes to help simplify and standardize traffic management, telemetry, and security for all of your microservices, whether that’s dozens or thousands of them, all from a single mesh.

What is a service mesh, you ask? Istio defines it this way: “A service mesh is a dedicated infrastructure layer that you can add to your applications. It allows you to transparently add capabilities like observability, traffic management, and security, without adding them to your own code. The term service mesh describes both the type of software you use to implement this pattern and the security or network domain that is created when you use that software.”

More simply, a service mesh lets you deploy microservices into your architecture without explicitly coding all the requirements of observability, traffic management, security, or policy into the application, because the service mesh that is part of your infrastructure takes care of all of that for you, outside of your containerized application instances. These powerful capabilities are delivered in a number of ways and through different patterns, such as the sidecar technique, which injects helper containers into your Kubernetes deployment specifications.
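As a sketch of what “traffic management outside your code” means, here is an Istio VirtualService that sends 10% of traffic to a canary version of a hypothetical reviews service; neither version’s code knows this is happening (a companion DestinationRule, not shown, maps the v1 and v2 subsets to pod labels):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews          # the Kubernetes service name
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90   # 90% of requests to the stable version
        - destination:
            host: reviews
            subset: v2
          weight: 10   # 10% canary traffic
```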

As with the Kubernetes landscape covered above, there are numerous service mesh options out there as well, with Istio leading the field. Red Hat provides Red Hat Service Mesh as an enterprise-supported offering (akin to its enterprise Kubernetes offering, Red Hat OpenShift), while startups like solo.io focus solely on service mesh capabilities and build directly on top of the potential that Istio provides.

What better way to build stronger microservices than one where you really don’t have to do anything at all to get there!

Serverless

What’s smaller than zero? That’s right, nothing! We won’t get into the abstract concept of a negative amount of physical, real-world objects… we’ll leave that to a separate philosophical discussion. However, we can talk about how the zero-overhead movement has taken hold in microservices; when coupled with some of the other advancements discussed in this article, you get the serverless, or scale-to-zero, patterns that are so popular today.

When the AWS Lambda service was introduced to the public at AWS re:Invent in 2014, to say it was a game-changer is a bit of an understatement. Since that day, the microservices landscape hasn’t been the same! Writing any type of code you want (as long as it can run in a container) and letting some system somewhere else manage its execution was exactly every developer’s dream! All of this while paying only for what you use, with nothing running when active processing isn’t taking place: that is budgetary nirvana!

As other serverless offerings grew to compete with AWS Lambda, stateless microservices became synonymous with serverless and functions-as-a-service (FaaS). Offerings like IBM Cloud Functions, Azure Functions, Google Cloud Functions, and the open source Apache OpenWhisk brought capability parity to nearly every platform within a few years.

There are many reasons to learn and design with serverless microservices, but that doesn’t mean they are perfect for every situation, just like microservices in general. If your workloads are stable or predictable in size, you generally won’t see the long-term financial benefits of a serverless environment that you would with unpredictable workloads and a serverless platform scaling in response. The downsides of serverless and functions-as-a-service are also magnified for stateful microservices, which either require a longer “cold start” time when starting from scratch or need long-term in-memory state management. One final caveat is the risk of vendor lock-in when using cloud provider-specific serverless offerings, which can lead to deeply integrated architectural decisions that can be impacted severely should the offering change capabilities, requirements, or pricing.

To avoid some of these downsides of serverless and FaaS, the open source Knative project was created to bring deployment and management of serverless workloads to Kubernetes. This add-on to traditional Kubernetes allows users to deploy scalable, stateless microservices through traditional Kubernetes YAML deployment models, with direct integration into the logging, monitoring, management, and all the other subsystems that Kubernetes provides out of the box. Knative brings the capability of avoiding vendor lock-in, while promoting reactive and event-driven microservices, all on top of the existing container orchestration platform you are already using.
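For illustration, here is a minimal sketch of a Knative Service (the name and image are hypothetical); Knative builds the Deployment, revisioning, routing, and autoscaling around it, scaling the pods to zero when no requests are arriving:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter
spec:
  template:
    spec:
      containers:
        - image: example/greeter:1.0   # hypothetical image
          env:
            - name: TARGET
              value: "world"
```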

That’s quite a lot of power from a capability that I introduced by saying you can’t get much smaller than zero!

Conclusion

In summary, we’ve covered quite a transformation of the microservices landscape since 2015 while only talking about five specific advances in technology. If you’re thinking “What about X?” or “How come he didn’t mention Y?”, you’re right! I probably should talk about all of those things as well, but then this article would never end. We specifically focused on the technology aspects of microservices for this article and avoided the “people” benefits that microservices and containers bring to the table, like DevOps acceleration and enabling more seamless remote work among teams. All of those things are very important to the modern application development movement; however, they don’t necessarily tie back into our established metaphor for why we care about microservices (smaller, faster, stronger), so we’ll leave those topics to another article.

With this update to the article series, I hope that you have already been hands-on with some of the technology covered here, or that you are now excited to dive in and experiment with it. As any quality software development practice will tell you, microservices are not a one-size-fits-all approach, nor are they a panacea for all your current application troubles. However, when you use microservices correctly, they can help you build agile, flexible, adaptable, and performant enterprise-scale architectures with relatively low overhead.

For more details on the current happenings in, on, and around containers and microservices, explore the Containers and Microservices hubs of IBM Developer.