Deploying MicroProfile and Jakarta EE apps with Open Liberty to any cloud

Open Liberty is an open source Java runtime built specifically for cloud-native applications. It is lightweight and performant, which saves you money; it is optimized for modern DevOps practices through its continuous delivery release cycle; and its zero-migration architecture lets you adopt new releases without changing your applications or configuration. It is also optimized for container and Kubernetes environments, and it supports the latest versions of open cloud-native Java specifications, such as MicroProfile and Jakarta EE.

If you want to give Open Liberty a try, check out the “Getting started with Open Liberty” guide, which gives you a taste of what this runtime has to offer. If you need commercial support for Open Liberty, it is available through the WebSphere Hybrid Edition license, with no need to migrate to WebSphere Liberty.

In this article, we discuss the main ways that teams deploy applications and runtimes in a cloud-native manner:

  • Locally, on bare-metal servers or virtual machines
  • In containers, using technologies such as Docker, Podman, or Rancher
  • With container orchestration platforms, such as Kubernetes or Red Hat OpenShift

Open Liberty is designed to run anywhere, so in reality your options are limitless. These applications can be monoliths, microservices, or anything in-between.

Deploying locally (bare-metal servers or virtual machines)

Traditionally, runtimes were deployed on virtual machines or bare-metal servers, but the rise of the public cloud and microservices has changed this. That does not mean there are no longer use cases for deploying applications on bare metal or virtual machines. For example, if you already use virtual machines, have expertise and assets in that technology, or are not ready to move to containers yet, then this could be the best option for you.
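As an illustration of the local option, here is a minimal sketch of standing up a server from an unpacked Open Liberty release. The application name and paths are assumptions for illustration, not taken from this article:

```shell
# Assumes a Java runtime is installed and an Open Liberty release has
# been downloaded and unpacked into ./wlp (paths are illustrative).

# Create a server instance named defaultServer.
./wlp/bin/server create defaultServer

# Copy your application archive into the server's dropins directory so
# it is deployed automatically on startup (myapp.war is hypothetical).
cp target/myapp.war ./wlp/usr/servers/defaultServer/dropins/

# Run the server in the foreground; logs stream to the console.
./wlp/bin/server run defaultServer
```

The same `server` script also supports `start` and `stop` for running the server as a background process, which is the more common pattern on long-lived virtual machines.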

Deploying using container technologies (Docker or Podman)

You can easily deploy your cloud-native applications and microservices to different environments in a lightweight and portable manner by using containers. From development to production and across your DevOps environments, you can deploy your applications consistently and efficiently with containers. Each container packages everything you need to run your microservice or application, from the code to its dependencies and configuration. Open Liberty provides pre-built container images on Docker Hub and in the IBM Container Registry for multiple architectures.
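As a sketch, a typical application Dockerfile builds on one of those pre-built images; the image tag, paths, and application name here are assumptions rather than details from this article, so check the registries for the current tags:

```dockerfile
# Minimal sketch of a Dockerfile for an Open Liberty application.
# The tag below is illustrative; pick one matching your Java level
# and JVM from Docker Hub or the IBM Container Registry.
FROM icr.io/appcafe/open-liberty:full-java17-openj9-ubi

# Copy the server configuration and the application archive
# (file names are hypothetical).
COPY --chown=1001:0 src/main/liberty/config/server.xml /config/
COPY --chown=1001:0 target/myapp.war /config/apps/

# Process the configuration and warm caches for faster startup.
RUN configure.sh
```

Building and running this with `docker build` and `docker run` (or the Podman equivalents) gives you the same runnable unit across every environment in your pipeline.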

Developing with Open Liberty and containers is straightforward with tools such as the Open Liberty Maven and Gradle plugins, which allow you to develop inside a container in development mode. Development mode (https://openliberty.io/docs/21.0.0.12/development-mode.html) (dev mode) enables developers to make application, test, and configuration changes without restarting the runtime. This allows developers to focus on what matters most: the application code!
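As a sketch, dev mode is enabled by adding the Liberty Maven plugin to your pom.xml; the version number here is illustrative, so check Maven Central for the current release:

```xml
<!-- Sketch of the Liberty Maven plugin in the build/plugins section
     of pom.xml; the version shown is an assumption. -->
<plugin>
  <groupId>io.openliberty.tools</groupId>
  <artifactId>liberty-maven-plugin</artifactId>
  <version>3.5.1</version>
</plugin>
```

With the plugin in place, running `mvn liberty:dev` starts the server in dev mode and picks up code and configuration changes on the fly, while `mvn liberty:devc` runs dev mode inside a container.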

When considering containerization technologies, we are often also planning to deploy those applications to the cloud. So, alongside the runtime, we need to consider the JVM our applications run on and whether it is well suited to the cloud. IBM Semeru Runtimes are builds of OpenJDK with the Eclipse OpenJ9 JVM that are optimized for hybrid cloud workloads.

Deploying with container orchestration (Kubernetes)

Kubernetes is an open source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications. Managing individual containers can be challenging, and Open Liberty, along with MicroProfile and Jakarta EE, works brilliantly with Kubernetes. A small team can easily manage a few containers for development, but managing hundreds of containers can be a headache, even for a large, experienced operations or DevOps team.

Kubernetes is a tool for orchestrating containerized workloads. It handles scheduling, deployment, and mass creation and deletion of containers. It provides update rollout capabilities at a scale that would otherwise be extremely tedious to manage. Imagine that you updated a container image, and the change now needs to propagate to a dozen containers. While you could manually destroy and re-create each individual container, you can instead run a short one-line command and have Kubernetes make all those updates for you.
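That one-line update can be sketched with kubectl, assuming a Deployment and container both named myapp; the names and image reference are illustrative, not from this article:

```shell
# Point the Deployment at the new image; Kubernetes rolls the change
# out across all replicas, replacing Pods gradually so the service
# stays available throughout.
kubectl set image deployment/myapp myapp=registry.example.com/myapp:2.0

# Watch the rollout until every replica is running the new image.
kubectl rollout status deployment/myapp
```

If something goes wrong mid-rollout, `kubectl rollout undo deployment/myapp` reverts to the previous revision, which is exactly the kind of large-scale operation that would be painful to perform container by container.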

Now back to the point that you can deploy Open Liberty everywhere. All major public clouds provide a hosted Kubernetes service, although they can have slightly different methods of deploying containers.

Deploying to IBM Cloud

In this video, I demonstrate how to build a Java-based cloud-native app, package it in a container, and then deploy it to IBM Cloud using Open Liberty.

Read more about deploying to IBM Cloud using Open Liberty in this guide.

Deploying to Azure

In this video, I demonstrate how to build a Java-based cloud-native app, package it in a container, and then deploy it to Azure using Open Liberty.

Read more about deploying to Azure using Open Liberty in this guide.

Deploying to AWS

In this video, I demonstrate how to build a Java-based cloud-native app, package it in a container, and then deploy it to AWS using Open Liberty.

Read more about deploying to AWS using Open Liberty in this guide.

Deploying with container orchestration (OpenShift)

Kubernetes is great, but cloud providers tend to have their own ways of deploying and managing containers. This can be time consuming and error prone for operations or DevOps teams that need to deploy to multiple environments, public clouds, or on-premises. In a hybrid cloud model, where you require multiple services for things such as security, storage, caching, and observability, you must also manage the provider-specific versions of those services. This is where Red Hat OpenShift Container Platform can help: it acts as a layer on top of Kubernetes that standardizes deployments and adds extra functionality and services, such as those previously mentioned. You can then use the same processes and deployment model no matter where you deploy Open Liberty in a container.

There are multiple ways to deploy Open Liberty on OpenShift and Kubernetes, but the most common method is using the Open Liberty Operator. Kubernetes operators provide an easy way to automate the management and updating of applications by abstracting away some of the details of cloud application management.
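As a minimal sketch, the Operator watches for a custom resource like the one below and manages the underlying Deployment, Service, and route for you. The image, name, and replica count are assumptions, and the exact apiVersion depends on the Operator release you have installed:

```yaml
# Sketch of an OpenLibertyApplication custom resource managed by the
# Open Liberty Operator; values are illustrative.
apiVersion: apps.openliberty.io/v1beta2
kind: OpenLibertyApplication
metadata:
  name: myapp
spec:
  applicationImage: registry.example.com/myapp:1.0
  replicas: 2
  expose: true
```

Applying this with `kubectl apply -f myapp.yaml` (or `oc apply` on OpenShift) is typically all that is needed; scaling or updating the application becomes a matter of editing the resource rather than juggling individual Kubernetes objects.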

In this video, I demonstrate how to build a Java-based cloud-native app, package it in a container, and then deploy it to OpenShift using the Open Liberty Operator.

Read more about deploying to OpenShift using the Open Liberty Operator in this guide.

Summary

In this article, we covered a few of the most popular ways to deploy Open Liberty on the cloud. But, generally, anywhere you can run Java, you can run Open Liberty.

Open Liberty has been designed to be as flexible as possible, giving you the choice of where to run your applications. Being generally among the first runtimes to certify against specifications such as MicroProfile and Jakarta EE, combined with the ability to deploy anywhere, gives developers and organizations the choice and flexibility to take full advantage of cloud-native Java.