6 reasons why Open Liberty is an ideal choice for developing and deploying microservices

Introduction

When you first consider microservices, you’d be forgiven for thinking that the only thing that’s different is the size of the application. After all, it’s just lots of little applications talking to each other, right? It’s only once you dig a little deeper and maybe get your hands dirty that you begin to appreciate that creating, containerizing, deploying, and managing lots of interconnected applications surfaces its own unique set of requirements.

IBM® WebSphere Liberty was created in 2012 in response to increasing demand for a lightweight, agile runtime. In 2017, due to growing demand for open source, Open Liberty was created — the upstream, fully-capable open source distribution of Liberty. WebSphere Liberty still exists, delivering additional features that make it easier to modernize traditional enterprise applications, but for most applications Open Liberty provides all you need.

Liberty greatly simplifies the development and deployment of applications over traditional application server runtimes. It provides the ability to create right-sized deployments, from under 24MB all the way up to full Jakarta EE/Java EE support. Its zero-migration architecture removes version-to-version migration challenges, auto-tuning delivers optimal performance without costly tuning cycles, it’s simple to configure, and much more. These characteristics are ideal for cloud deployments, and with the addition of first-class container support and cloud-native APIs, such as MicroProfile, Liberty is the perfect choice for new microservice-based applications.

Let’s look in a bit more detail at the top six reasons why Liberty is an ideal choice for cloud-native microservices.

Reason 1. Right-size images – no extra baggage

When deploying microservices, their resource consumption (CPU, memory, and so on) directly equates to cost. If you’re deploying tens or hundreds of microservices where you once had one monolithic application, you now have tens or hundreds more instances of the runtime. It’s therefore important that your microservices consume resources appropriately. If you’re deploying hundreds of small microservices, you don’t want each microservice to pull in hundreds of megabytes of server runtime and libraries.

Liberty is a fully modular runtime, letting you pick and choose the capabilities you need for your application. With Liberty, you have one runtime and one approach to developing and deploying applications, scaling from small microservices all the way up to full modern enterprise monoliths, and anything in between.
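
Modularity is expressed in Liberty's server.xml: you enable only the features your application uses. As an illustrative sketch, a typical microservice might enable just the MicroProfile convenience feature, while a bare web app could get by with a single Servlet feature:

```xml
<!-- Illustrative server.xml: enable only the features the application needs -->
<server description="right-sized microservice">
    <featureManager>
        <!-- Convenience feature that pulls in all MicroProfile 3.3 APIs -->
        <feature>microProfile-3.3</feature>
        <!-- For a simple web app, this single feature would be enough instead:
             <feature>servlet-4.0</feature> -->
    </featureManager>

    <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="9080" httpsPort="9443"/>
</server>
```

The runtime loads only the code behind the enabled features, which is what drives the disk and memory numbers in the table below.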

The following table shows disk and memory measurements for three example runtime packages. The first row contains all the latest APIs for both Java EE/Jakarta EE and MicroProfile (all you might need for a modern cloud-native monolith), the second row contains enough runtime to support MicroProfile 3.3 (all you might need for a typical microservice), and the third row contains enough runtime to run Servlet 4.0 (the absolute minimum you might need to run a simple web framework). You can see that as we right-size Liberty for a smaller set of needs, the disk and memory requirements also decrease, which is exactly what you would want from a runtime.

Package                                   | Size on disk | Memory
Java EE 8/Jakarta EE 8 + MicroProfile 3.3 | 121MB        | 165MB
MicroProfile 3.3                          | 59MB         | 113MB
Servlet 4.0                               | 24MB         | 72MB

Reason 2. Low operating cost – less memory, higher throughput are key

The move from monolithic applications to architectures that deploy tens or hundreds of smaller applications has changed which runtime performance characteristics matter most. You may hear a lot about "cold startup," and this is critically important for cloud functions (Liberty's time to first response is approximately 1 second, so it's no slouch), but for microservices each running instance is likely to serve thousands of requests before it's replaced. Given that expected usage profile of tens to hundreds of services handling thousands of requests, the most important performance metrics are memory consumption and throughput; these are the ones that will have the biggest impact on cost.

Memory footprint comparisons

The following figure shows memory footprint comparison after startup for servers running the Acme Air Microservices benchmark. In this scenario, Liberty uses 2.5x-10x less memory than other runtimes.

Bar graph showing a memory footprint comparison of Open Liberty, WildFly, TomEE, Payara, and Helidon

If you’ve chosen Spring Boot for your application, then there’s approximately a 2x memory footprint benefit from running on Liberty, as you’ll notice in the following figure that shows the relative memory usage when running the Spring Boot Petclinic application under load with a 4GB heap.

Bar graph showing a 2x memory footprint benefit when running Spring Boot Petclinic on Liberty, versus Tomcat, under load

Throughput comparisons

Liberty also has significant throughput benefits when compared to other runtimes. The following figure shows throughput measurements against the Acme Air Microservices benchmark. Liberty has the overall best performance across the runtimes tested and is significantly better than most of them.

Bar graph showing Liberty throughput better than WildFly, 2x better than TomEE, and 5x better than Payara and Helidon

The following figure shows an almost 2x throughput benefit when running the Spring Boot Petclinic application on Liberty, rather than Tomcat.

Bar graph showing a 2x throughput benefit when running Spring Boot Petclinic on Liberty, versus Tomcat, under load

Combining the memory and throughput benefits equates to an over 4x saving (in throughput per MB) for this Spring Boot application on Liberty, and a 3.5x benefit over the nearest of the other runtimes compared. That's potentially a greater than 3x saving on cloud and license costs.

Reason 3. Continuous Delivery – low maintenance, zero technical debt

The move to cloud-native and microservices often leads to a shift in runtime ownership responsibility. A single multidisciplinary team develops, packages, and deploys an application (for example, a microservice) as an immutable container that includes the server runtime — even Spring Boot applications embed a server (for example, Tomcat, Jetty, or Liberty). The operations team manages a Kubernetes-based cloud platform, such as Red Hat OpenShift, into which the application teams deploy their containers.

In the new cloud-native world, what is often not appreciated until it's too late is that the development team is now responsible for maintaining the runtime, which is part of the container contents. Previously, the operations team would manage a smaller number of server runtime instances, carefully rolling out major version or service release upgrades and ensuring critical security 'interim fixes' were applied. Now, development teams delivering tens or hundreds of applications, each embedding a runtime, are responsible for keeping those runtimes current and free from vulnerabilities. So how does Liberty help with this problem?

The first part of the answer is to release often. Liberty has a ‘continuous delivery’ release cycle, shipping a new release every four weeks. Any fixes shipped for the previous release are automatically rolled into the next. So with continuous delivery, there’s no need to apply service — you get it automatically. Every release of Liberty is made available in Maven Central and Docker Hub, making it much simpler to pick up the latest through build automation. Development teams can simply rebuild their containers to pull in the latest release, confident it contains fixes to the previous version. The second part of the answer is ‘zero migration,’ which is discussed in the next section.

Of course, if you're not changing the way you deliver your applications but still want the benefits of Liberty, then the Liberty release cycle and support options enable that too. Every release of Liberty comes with five years of support, and every third release in a year (versions ending in 3, 6, 9, and 12) comes with two years of 'interim fix' support, enabling a more traditional update cycle. There's also no need for support extensions, because each new four-weekly release resets the five-year support clock. For more details, see the Liberty Support Policy.

Reason 4. Zero Migration – staying current, effortlessly

Liberty is the only Java runtime that provides ‘zero migration.’ Zero migration means, in just a matter of minutes, you can move up to the latest Liberty without having to change your application code or configuration. Historically, these kinds of moves filled teams with dread, having to be planned months in advance and taking over a year to complete — that’s a lot of investment just to stay current.

Liberty enables zero migration through the use of versioned “features.” In Liberty, APIs are provided by features; for example, there’s a feature for servlet-3.1. When new capabilities are introduced that would break existing applications, or a new specification version comes out, Liberty provides a new feature. So when Java EE 8 came out and there were breaking changes, a servlet-4.0 feature was created alongside the servlet-3.1 feature, and an application can choose to use one or the other. Migrating your application is therefore a separate decision from updating the level of Liberty. If you want to move up to the latest level of Liberty, but not migrate your application and configuration, you can continue to use the same features (for example, servlet-3.1). This means you can pick up the latest runtime fixes without having to go through a painful migration. When you’re ready to take advantage of the latest APIs (for example, servlet-4.0), you can update your server configuration and application to use it.
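
In practice, adopting a new specification level is a single feature swap in server.xml; until you make it, the old feature keeps working on every new Liberty release. A sketch of the choice:

```xml
<featureManager>
    <!-- Keep running unchanged on the latest Liberty release: -->
    <feature>servlet-3.1</feature>

    <!-- Opt in to Servlet 4.0 only when the application is ready:
    <feature>servlet-4.0</feature>
    -->
</featureManager>
```

Updating the runtime and migrating the application remain two independent decisions.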

Coupling continuous delivery with zero migration means you can remain current with fixes, with minimal investment, and choose to invest in application changes when there’s a business need and benefit to doing so.

Reason 5. Optimized to Kubernetes – auto-tuning, native integration

The majority of large enterprises are now running Kubernetes in production, and containers and Kubernetes are seen as integral to the delivery of cloud-native microservices. Continuous delivery of container-based microservices needs a runtime that supports container and container orchestration (Kubernetes) best practices. It needs to be easy to:

  • Achieve the best performance without knowing where the container will be deployed.
  • Create and maintain secure and supported images.
  • Integrate the runtime and application with the container orchestration environment.

Without these qualities, microservices will be inefficient, difficult to maintain and secure, and difficult to manage. The following sections explain how Liberty helps deliver the ideal Kubernetes experience.

Auto-tuning runtime

When deploying to public or private cloud container environments, it’s important to be able to get the best performance out of the application. This is nothing unique to containers, but containers make tuning more challenging. To address this, Liberty does two things:

  • It provides great defaults that you are highly unlikely to ever have to change.
  • Its thread pool is auto-tuning, optimizing for the environment in which it finds itself running.

The following figure shows Liberty throughput based on different latencies of requests. You can see that the auto-tuning quickly reaches the optimal performance. And if latency were to change over time, Liberty would adjust appropriately.

Line graph showing Liberty throughput quickly reaches peak performance by auto-tuning its thread pool

Production-ready images

For each new release of Liberty, new Universal Base Image (UBI) container images are uploaded to Docker Hub. The images are OCI-compliant (Open Container Initiative) and free to distribute, with support available if you choose. Images for Open Liberty and WebSphere Liberty are uploaded and maintained according to Liberty's support policy. Fixes for critical security vulnerabilities are applied automatically and the images updated, so that you don't have to.

If the images aren’t exactly to your liking, then the Open Liberty and WebSphere Liberty Dockerfiles and scripts used to create them are open source and available for your own customization, and all of the required artifacts are available in public repositories like Maven Central and Docker Hub.
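
Building on the official images is straightforward. A sketch of a typical Dockerfile, assuming a Maven build that produces target/app.war (the tag and file names are illustrative):

```dockerfile
# Start from an official Open Liberty image on Docker Hub (tag illustrative)
FROM open-liberty:full-java11-openj9

# Copy server configuration and the application into the image's
# conventional locations, owned by the non-root runtime user
COPY --chown=1001:0 src/main/liberty/config/server.xml /config/
COPY --chown=1001:0 target/app.war /config/apps/

# Run the image's configure script to apply deferred configuration
RUN configure.sh
```

Because the base image is rebuilt when security fixes ship, rebuilding this image in your CI pipeline is all it takes to stay current.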

Kubernetes integration

Kubernetes has rapidly become the orchestration technology of choice when moving to containers. Kubernetes-based platforms, such as OpenShift, enable you to deploy containers, scale them up or down, and even scale them to zero.

Operators are the Kubernetes management approach of choice. The Open Liberty Operator enables first-class management of Liberty within Kubernetes and OpenShift. It greatly simplifies deployment and configuration of applications, clustering, persistence integration, service binding, problem determination, and much more.
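
With the Operator installed, deploying an application is a small custom resource. A minimal sketch (the image name is illustrative, and the apiVersion depends on the Operator release you have installed):

```yaml
apiVersion: apps.openliberty.io/v1
kind: OpenLibertyApplication
metadata:
  name: inventory-service            # illustrative name
spec:
  applicationImage: registry.example.com/inventory:1.0   # your image
  replicas: 2
  expose: true                       # create a Route on OpenShift
```

The Operator takes care of the underlying Deployment, Service, and (on OpenShift) Route for you.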

Some aspects of container lifecycle and traffic management need help from the runtime inside the container. For example, Kubernetes will restart a container if it believes it is dead and won’t dispatch requests to a container if it believes it is not ready. Kubernetes Liveness and Readiness probes provided by the container are used to indicate the status of the container/application. The MicroProfile Health support in Liberty is designed to make it incredibly simple to provide this level of container orchestration integration.
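
With MicroProfile Health enabled (the mpHealth feature, included in the microProfile convenience features), Liberty serves /health/live and /health/ready endpoints that can back the Kubernetes probes directly. A sketch, with the port and timing values illustrative:

```yaml
livenessProbe:
  httpGet:
    path: /health/live     # served by Liberty's MicroProfile Health support
    port: 9080
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /health/ready    # reports whether the app is ready for traffic
    port: 9080
  initialDelaySeconds: 15
```

Health checks you write with the MicroProfile Health API are aggregated into these endpoints automatically.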

Reason 6. Developer tools that help, not hinder – container-aware, CD-focused

A first-class developer experience is an integral part of efficiently delivering new function and fixes. For many years, Liberty has had a strong focus on helping developers be productive with their tools of choice. Liberty integrates with the most popular build tools — Maven and Gradle — including releasing all the runtime binaries to Maven Central. Liberty's Maven and Gradle support also provides 'dev mode,' which lets developers make code and configuration changes and have them take immediate effect on a locally running server, even a server running in a local container. This removes the need for a full rebuild and redeploy, greatly reducing the time taken to develop and test updates, and the container support means you can develop and test in an environment that's closer to production.
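
Dev mode is provided by the Liberty Maven plugin (the version shown is illustrative). With the plugin declared, `mvn liberty:dev` starts the server and picks up code and configuration changes as you save them:

```xml
<plugin>
    <groupId>io.openliberty.tools</groupId>
    <artifactId>liberty-maven-plugin</artifactId>
    <version>3.3.4</version>
</plugin>
```

The containerized variant, `mvn liberty:devc`, runs the same loop with the server inside a local container; an equivalent plugin exists for Gradle.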

Testing cloud-native applications introduces the need for in-container testing. Doing true-to-production integration testing inside a container greatly reduces the chances of issues reaching production. MicroShed Testing enables exactly that, with the ability to run JUnit integration tests against a Liberty application running in a container, including integration with containers running databases and Kafka.

Dev mode enables hot deploy, hot tests, hot debug, and more in the most popular code editors — IntelliJ, Eclipse, VS Code, even vi. Combining dev mode with the MicroProfile APIs and MicroShed testing gives you a holistic cloud-native developer experience. We are continuously delivering new capabilities to make your developer experience a blast!

The Open Liberty Tools for IntelliJ extension makes it even easier to develop with Open Liberty dev mode in IntelliJ.

Conclusion

The move to finer-grained cloud-native microservice deployments leads to new challenges for development and operations teams: the need for runtimes with high throughput and low resource usage; the need for first-class container and Kubernetes integration; the need to be able to remain secure by upgrading simply and frequently; and more.

This article has highlighted six capabilities of Liberty that address these new challenges:

  • Right-size runtime
  • Low operating cost
  • Continuous delivery
  • Zero migration
  • Kube-optimized
  • Great developer experience

These capabilities make Liberty an ideal choice for microservice development and deployment.

Next steps

If you want to experience Liberty for yourself, check out the hands-on Open Liberty guides.