Kubernetes, you’ve reignited my passion for microservices!
A pattern is emerging in cloud native: Microservices
Since 2014, microservices have expanded to become a widely adopted, powerful application architecture. The growth of cloud platforms has played a major role in microservices adoption, since applications that are built to run on those platforms follow a set of cloud-native guidelines. Although there’s no single set of rules that define cloud-native, a pattern is emerging, one that relies heavily on microservices.
However, this growth is not without its challenges. Although microservices have proven to be an effective way to develop and manage applications in production, they come with certain drawbacks. In this post, I’ll outline some modern application architectures, show the advantages and disadvantages of microservices, and describe some community-driven solutions that tackle those disadvantages.
Microservices in modern application architectures
Although modern apps encompass many different architectures, I see two ends of the spectrum: traditional on-prem apps and cloud-native apps.
In the past, when a company set out to develop an application, they would purchase hardware and then use a platform standard, like Java EE, to create the application to reside on that hardware. Java EE is quite powerful; it was used to create complex and efficient – but also clunky and monolithic – applications. To this day, these applications continue to chug along and power a large number of the back ends for applications we still use. However, this approach is rapidly losing traction in favor of cloud-native applications.
On the other end of the spectrum lie cloud-native applications, which are often built using a microservices architecture. This new era of app development enables developers to choose the right language for the problem at hand. For example, developers might use Node.js to handle IO-intensive operations through APIs but keep Java for computations on large numbers. This approach separates requirements and enables cross-functional teams to build each part of the stack using their preferred technology. With separate microservices making up each part of an application, scaling becomes much easier and more efficient.
So why doesn’t everyone just use microservices to build new cloud-native applications? Because it’s incredibly difficult to refactor an existing traditional application to become microservice-based and truly cloud-native.
The solution: Hybrid applications
The solution comes in the form of hybrid architectures. Instead of following a lengthy process to rewrite the whole stack, you can continue to use the established traditional application while breaking off pieces to take advantage of cloud-native concepts.
Let’s imagine a scenario: a traditional application is running into delays whenever the UI is accessed because the service that handles database access is becoming a bottleneck. Instead of scaling out the entire application, the developers choose to migrate the database access code into a new app based on Node.js, which performs well at handling large numbers of API calls asynchronously. They host this application in a public cloud and open secure access to the on-prem database. Finally, they scale up this individual service whenever they anticipate a high load on the application, allowing them to save big on server expenses. These developers are now effectively working on a hybrid application stack.
Eventually, more and more pieces are broken out of the monolith into their own microservices. This makes it easier for the team to develop future projects in a truly cloud-native fashion. For example, if a new mobile application needs to be developed and all the required components have already been refactored into microservices, they can proceed using a cloud-native approach.
A love/hate relationship
I love the microservices approach for a lot of reasons, a few of which I’ve just outlined. The major advantages that come with microservices include:
You can choose the right tool for the job:
- Node.js for simple API servers and asynchronous logic
- Java for computing large numbers or maintaining type-safe data
- Go, Python, Ruby, etc. for your specific needs
The result is a non-restrictive technology stack: application-to-application communication stays universal when you use APIs or message queues.
Breaking up that complexity allows you to:
- Assign responsibility for each microservice to individuals or teams
- Develop in an agile fashion, with cross-functional teams addressing each microservice:
  - Teams are composed of developers, testers, and DevOps engineers
- Track failures easily, since components are separate and easily identifiable
- Scale individual pieces effectively and cost-efficiently in response to load
- Reduce the risk of failure when pushing changes to production, because each microservice can be redeployed without pushing the entire stack
- Adopt team-specific development cadences, with different teams deploying at different rates, whether weekly, bi-weekly, or monthly
These are all sound reasons to love microservices, but these same categories are also the reason why I started to hate working with microservices, especially in production. Each of these advantages comes with an unfortunate set of corresponding issues. Let me explain:
- Each additional language that your stack uses brings its own toolchain (for example, npm for Node.js, Maven for Java, and so on)
- There are custom build processes and test infrastructure for each microservice
- As microservices grow, operations can become a nightmare
- Each microservice (especially with different languages) has different memory, processing, and storage requirements
- The CI workflow for each microservice is different, and redeploying the full stack can be complex (particularly when moving to new geographies)
- There are many scaling policies to manage because each piece of the stack might have different scaling rules (CPU usage, garbage collection management, API calls per second, and so on)
- Multiple load balancers are required because each microservice is scaled individually but still needs consistent communication with other services
- Logging and analytics streams from multiple microservices must be managed in a consistent way
- There are multiple CI/CD pipelines to manage for each microservice
- Each team requires build expertise since requirements differ greatly
- Changes that span multiple microservices need to have coordinated deployments so upgrades can happen without causing downtime
These issues honestly made me start to rethink why I loved microservices in the first place. The approach was supposed to make things easier, but my team and I were spending so much time on operations and fiddling with the various technology stacks that I started to think we should have stuck with a single language and platform.
I wasn’t the only one running into these issues; the community identified these same problems and started working towards solutions. Let’s talk about the solutions that are available today to tackle the issues in each category.
Tackling language differences
Although language choice seems like an obvious advantage, having to deal with the nuances of different languages can be a lot to manage. Luckily, there's a really straightforward solution to this: Docker.
Docker enables developers to neatly “box up” their applications into Docker containers. A Docker container comes with everything you need to run a microservice: not just the code and runtime but the system libraries as well. This lets you keep all the nuances and complexities of an application inside the container. When it comes to execution, a Docker container behaves in a standardized way, no matter what language powers the actual source; a container for a Java app can be dealt with in much the same fashion as a container for a Node.js app. No more custom build processes to juggle; operations engineers can now work cross-functionally across the teams managing each microservice.
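To make this concrete, here is a minimal sketch of what containerizing one of those Node.js microservices might look like. The base image tag, port, and `server.js` entry point are illustrative assumptions, not taken from any specific project:

```dockerfile
# Hypothetical Dockerfile for a Node.js microservice.
FROM node:18-alpine          # the runtime and system libraries ship inside the image
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev        # language-specific build steps stay inside the container
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Whatever language lives inside, the operations workflow is the same from the outside: `docker build -t web .` followed by `docker run -p 8080:8080 web`.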
Tackling complexity and scaling
This is a big one: Docker was around for quite some time before a production-capable orchestration solution with first-class Docker support was adopted. But it finally happened with Kubernetes.
Kubernetes provides the tools for automating the deployment, scaling, and management of Docker containers. It addresses various requirements that tie closely to the microservice downfalls I outlined earlier. Although there are too many features to cover in this post, a few key ones include:
- Load balancers for internal and external facing microservices
- Easy-to-use DNS management for microservice-to-microservice communication, particularly through API calls
- Auto health management with restart and retry policies for deployment and unhandled failures
- Rolling and blue-green deployments to ensure high availability
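As a sketch of how these features surface in practice, here is roughly what a Deployment and its internal load balancer (a Service) might look like for a single microservice. The names, image, port, and health endpoint below are assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate            # replace pods gradually to keep the app available
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0.0   # placeholder image reference
          ports:
            - containerPort: 8080
          livenessProbe:           # auto health management: restart on failure
            httpGet:
              path: /healthz
              port: 8080
---
apiVersion: v1
kind: Service                      # internal load balancer with the DNS name "web"
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

Other microservices can now reach this one at `http://web` inside the cluster; Kubernetes manages the DNS entry and spreads traffic across the replicas.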
Kubernetes effectively tackles almost all the issues with microservices that were troubling me and my team. There's just one problem left: although Kubernetes provides the tools for automating deployment, they're not always easy to use.
The final piece of the puzzle comes with Helm, a tool for managing preconfigured Kubernetes-based deployments. Although Helm calls itself the “Kubernetes Package Manager,” it shouldn’t be mistaken for something like NPM, the Node.js package manager.
Helm provides “charts” that tell Kubernetes how to deploy a set of containers, along with features to enable rollbacks, provide repeatable app installs, and simplify updates to a running Kubernetes cluster. Note that Helm doesn't host your Docker images; a chart repository stores only the charts themselves. There is a community chart repository, and it's also quite easy to set up a private repository for hosting your own Helm charts. Each chart is configured to tell the Kubernetes cluster which registry to pull images from.
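As a rough sketch, a chart is little more than a directory of templated Kubernetes manifests plus two metadata files. The chart name, image repository, and default values below are illustrative assumptions:

```yaml
# Chart.yaml -- chart metadata (Helm 3 layout)
apiVersion: v2
name: web
version: 0.1.0
description: Deploys the hypothetical web microservice
---
# values.yaml -- defaults that the templates reference,
# e.g. via {{ .Values.image.repository }}
image:
  repository: registry.example.com/web   # which registry to pull images from
  tag: "1.0.0"
replicaCount: 3
```

With a chart like this in place, `helm install web ./web` deploys everything in one step, `helm upgrade` rolls out changes, and `helm rollback` reverts to a previous release.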
Putting the pieces together
With their powers combined, Kubernetes, Docker, and Helm have tackled all the issues that I ran into over the years with microservices, without sacrificing the advantages. Although it seems like we're adding even more technology to the stack, once you've configured these tools, they greatly streamline your team's development and management workflow.
Once you’ve deployed a microservices web application to Kubernetes, you can expect it to look something like this:
A user accesses the web application through an Ingress load balancer provided by Kubernetes. The request gets routed to the web application served by Node.js, which uses two other microservices on the back end: a Java MicroProfile service and a Python service.
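Sketching the entry point from that description, the Ingress rule might look like this (the hostname and service name are assumed for illustration):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: app.example.com        # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web          # the Node.js front end, which calls the
                port:              # Java and Python services internally
                  number: 80
```

The Java and Python back-end services don't appear here at all; they stay internal to the cluster and are reached through their own service DNS names.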
This is just one example of how a microservice-based web application might look within a Kubernetes cluster. To get started with deploying an application to Kubernetes, check out these resources:
- Scalable web application on Kubernetes
- Code patterns: Kubernetes and container patterns built by IBM Developer experts
In the next part of this blog series, I’ll tackle how Istio helps developers manage their microservices in production. I hope you’ve enjoyed reading about my experience with microservices. I’d love to hear about your own experiences — feel free to reach out to me directly on Twitter at @Sai_Vennam if you have any questions or comments.