Delivering cloud-native Java applications can be a complex task, one that can sometimes seem overwhelming. This is where the right cloud-native tools and technologies can simplify the process and help you build applications designed to thrive in the cloud. In this article, we'll explore the open source, cloud-native runtime Open Liberty and the features and tools it offers to make cloud-native developers' lives easier -- from getting started, through coding and testing, to deployment and monitoring in production.
Why does this all matter for developers?
A key aspect of cloud-native adoption is the adjustment in organizational responsibilities. Historically, teams were siloed into separate Dev and Ops groups: Dev would create the application and then pass it to the Ops team to put into production. The move to cloud-native encompasses the shift to DevOps, or even DevSecOps, where multidisciplinary teams are responsible for the continual operation of the application, including ensuring it remains secure, in addition to their development responsibilities. It is important that you have the right -- if not ideal -- framework and tools to build and deliver what you have been tasked with.
What is Open Liberty?
Open Liberty is a lightweight, open source, cloud-native Java runtime that is built by using modular features. It supports a wide array of open source and open standard APIs, including:
MicroProfile -- An open source project that defines new standards and APIs to accelerate and simplify the creation of microservices.
Jakarta EE and Java EE -- Open source and open standard APIs for building enterprise and cloud-native applications.
Spring Framework and Spring Boot -- Open source frameworks, building on Java EE APIs, for developing enterprise and cloud-native applications. See our community blog for the performance benefits you could gain from running Spring applications with Open Liberty.
This makes Liberty an excellent choice for a wide range of application types, enabling you to use one runtime to easily build applications with your choice of APIs. If you're still trying to decide on a cloud-native runtime and aren't sure whether Open Liberty is for you, check out:
So, you've chosen Open Liberty as your cloud-native runtime and want to know how you can most effectively get started using it to develop your cloud-native application. The first step is to set up your environment. You'll probably be using your favorite IDE to develop and code your application, so you'll want to start by installing the Liberty tools for your IDE of choice. These tools offer additional in-editor support for your Open Liberty projects, providing commands to easily start and stop Open Liberty in dev mode (more on this later), as well as to run tests and view test reports. Setup tools:
Now that you've set up your IDE with our Open Liberty IDE tools, the next step is actually getting hands-on and starting to write an application using Liberty. We've tried to make this as easy as possible by providing a starter tool.
The Open Liberty starter gives you a simple, quick way to get the necessary files to start building an application on Open Liberty. Simply visit the starter on the Open Liberty website and select a few options to generate a starter application. You can specify your application and project name, choose Maven or Gradle as your build tool, and pick which versions of Java SE, Java EE or Jakarta EE, and MicroProfile your application will use. Then just click Generate Project, and you are ready for lift-off! A simple RestApplication.java file is generated for you to start creating a REST-based application. A server.xml configuration file is provided with the necessary features for the MicroProfile and Jakarta EE versions you selected, along with the Maven or Gradle build files and a Dockerfile for building the application into a container.
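The generated RestApplication.java is simply a JAX-RS Application subclass that sets the base path for your REST endpoints. Here is a minimal sketch of what that class, plus a first resource you might add alongside it, could look like; the package, path, and class names are illustrative, and if you selected Java EE 8 or earlier the imports use the javax.* namespace instead of jakarta.*:

```java
// RestApplication.java -- registers JAX-RS and sets the base path for all REST resources
package com.example.demo.rest;

import jakarta.ws.rs.ApplicationPath;
import jakarta.ws.rs.core.Application;

@ApplicationPath("/api")
public class RestApplication extends Application {
}
```

```java
// HelloResource.java -- a simple resource you might add next to the generated class
package com.example.demo.rest;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/hello")
public class HelloResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello from Open Liberty!";
    }
}
```

With this in place, the resource would be available at /api/hello once the application is running.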
Build plug-ins, Liberty artifacts, and common repositories
Now that you've got a simple Open Liberty application as the starting point for your Java microservice or cloud-native application, it's time to learn how you can use build tools to actually build it. There are various options for this, but two of the most popular build tools are Maven and Gradle. Open Liberty applications can be built with either of these tools, and we have developer guides to walk you through using both:
Working with Maven or Gradle also lets you use the Liberty plug-ins for these build tools, which pull the dependencies that Liberty needs from the central repository (also known as Maven Central).
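As a hedged sketch, wiring the Liberty Maven plug-in into your pom.xml looks something like the following; the version shown is illustrative, so check Maven Central for the current release:

```xml
<build>
  <plugins>
    <!-- Liberty Maven plug-in: adds goals such as liberty:dev, liberty:run, and liberty:package -->
    <plugin>
      <groupId>io.openliberty.tools</groupId>
      <artifactId>liberty-maven-plugin</artifactId>
      <version>3.10</version>
    </plugin>
  </plugins>
</build>
```

The Gradle equivalent is the liberty-gradle-plugin from the same io.openliberty.tools group, which provides matching tasks.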
Containerization
You may also be looking to make use of containerization for your application, especially if you're considering a microservice-based architecture. Containers enable you to easily deploy microservices in different environments in a lightweight, portable, and consistent manner. You can run a container from a container image. Each container image is a package of what you need to run your microservice or application, from the code to its dependencies and configuration.
If you're looking to build and deploy your applications in containers, you can make use of the pre-built Liberty base images that are available in IBM Container Registry (ICR) and Docker Hub. These base images provide a great foundation that you can easily customize and build on. It is worth noting that Docker Hub has a rate limit, but ICR doesn't. Pre-built images:
To learn more about how you can containerize your application with technologies like Docker or Podman, and to get hands-on with them, check out our two guides:
When containerizing an application, it is common for a single multidisciplinary team to develop, package, and deploy an application (for example, a microservice) as an immutable container that includes the application runtime -- even Spring Boot applications embed a server (for example, Tomcat, Jetty, or Liberty). What is often not appreciated is that, as a result, the development team is now responsible for maintaining the runtime that is part of the container contents, including ensuring those runtimes are kept current and free from vulnerabilities. This can be pretty challenging.
To help with this, Liberty has a continuous-delivery release cycle, shipping a new release every four weeks. Any fixes shipped for the previous release are automatically rolled into the next, so with continuous delivery there's no need to apply service (individual fixes or patches) because you get them automatically. Every release of Liberty is made available in Maven Central, IBM Container Registry, and Docker Hub, making it much simpler to pick up the latest through build automation. Development teams can simply rebuild their containers to pull in the latest release, confident that it contains the fixes from the previous version.
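As a hedged illustration of that workflow, a Dockerfile built on one of the pre-built Liberty images mentioned earlier might look like the following; the image tag and the application and configuration paths are assumptions you would adjust, and simply rebuilding the image picks up whatever fixes the base image has received:

```dockerfile
# Start from a pre-built Open Liberty image (tag is illustrative; choose one matching your Java level)
FROM icr.io/appcafe/open-liberty:full-java17-openj9-ubi

# Copy your server configuration and packaged application into the image
COPY --chown=1001:0 src/main/liberty/config/server.xml /config/
COPY --chown=1001:0 target/demo.war /config/apps/

# Optional: run the configure script shipped in the Liberty images to prepare the server
RUN configure.sh
```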
Zero migration
One of the major challenges for teams that work with cloud-native Java runtimes or frameworks is the need to continually update to the latest release. These updates are often required to resolve security vulnerabilities or bugs that can cause outages. Historically, these kinds of moves filled teams with dread, having to be planned months in advance and taking over a year to complete -- a lot of investment just to stay current.
One of the unique features of Open Liberty is its zero-migration architecture, which supports full compatibility between runtime versions, letting you focus on what's important instead of APIs changing underneath you. With zero-migration architecture, in just a matter of minutes, you can move up to the latest Liberty without having to change your application code or configuration.
How is this achieved? Liberty enables zero migration through the use of versioned "features." In Liberty, APIs are provided by features; for example, there's a feature for servlet-3.1. When new capabilities are introduced that would break existing applications or a new specification version comes out, Liberty provides a new feature. So when Java EE 8 came out and there were breaking changes, a servlet-4.0 feature was created alongside the servlet-3.1 feature, and an application can choose to use one or the other. Migrating your application is therefore a separate decision from updating the level of Liberty. If you want to move up to the latest level of Liberty, but not migrate your application and configuration, you can continue to use the same features (for example, servlet-3.1). This means you can pick up the latest runtime fixes without having to go through a painful migration. When you're ready to take advantage of the latest APIs (for example, servlet-4.0), you can update your server configuration and application to use it.
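To make this concrete, the features an application uses are declared in its server.xml, so staying on an older API level while moving to the latest Liberty is simply a matter of leaving that feature list alone. A minimal, illustrative sketch (other configuration omitted):

```xml
<server description="example server">
  <featureManager>
    <!-- Keep using the Servlet 3.1 API, even on the newest Liberty release... -->
    <feature>servlet-3.1</feature>
    <!-- ...or opt in to the Servlet 4.0 API when you're ready to migrate the application:
    <feature>servlet-4.0</feature>
    -->
  </featureManager>

  <httpEndpoint id="defaultHttpEndpoint" httpPort="9080" httpsPort="9443"/>
</server>
```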
Rapid, iterative development (a fast inner-loop experience)
Dev mode
As agile developers, we often want to be able to view the effects of our code as we develop small parts of our application or microservices. This is where Open Liberty's development mode can help. Dev mode allows you to develop applications with any text editor or IDE by providing hot reload and deployment, on-demand testing, and debugger support.
When using dev mode, your code is automatically compiled and deployed to your running server, making it easy to iterate on your changes. You can run tests on demand, or even automatically, so that you get immediate feedback on your changes. If your destination is containers, dev mode in containers gives you all of this with the server running in a container, providing a near-production developer experience and reducing the risk of discovering container-related problems late in the cycle.
You can also now use dev mode with container support: devc. With container support, you can develop applications on your local environment while your Open Liberty server runs in a container. To get hands-on trying out this feature, check out the guide Using containers to develop microservices.
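Assuming your project uses the Liberty Maven plug-in shown earlier, both modes are started from the command line (the Gradle plug-in provides equivalent tasks):

```sh
# Start dev mode: changes are compiled and deployed on the fly,
# and pressing Enter runs your tests on demand
mvn liberty:dev

# Start dev mode with container support: the same experience,
# but the server runs inside a container
mvn liberty:devc
```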
Standards APIs
To develop your application, you need a rich set of APIs that help you be productive and address the unique needs of cloud-native applications. Choosing APIs that are developed by an open community, in open source, and with multiple implementations helps you build code and skills that are reusable across projects and environments, rather than locked into one vendor's technology stack. MicroProfile and Jakarta EE are the only options that fit these criteria. Developed as open collaborations by the enterprise Java community under the governance of the Eclipse Foundation, and with many open source implementations available, they give you APIs you can rely on for your cloud-native application needs.
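As a small, hedged example of the kind of code these APIs enable (the class, property name, and URL are hypothetical), MicroProfile Config and MicroProfile Fault Tolerance let you externalize configuration and make a remote call resilient with a few annotations:

```java
package com.example.demo.client;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;
import jakarta.ws.rs.client.Client;
import jakarta.ws.rs.client.ClientBuilder;
import jakarta.ws.rs.core.MediaType;

import org.eclipse.microprofile.config.inject.ConfigProperty;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;

@ApplicationScoped
public class InventoryClient {

    // Injected from MicroProfile Config sources (environment variables, system properties,
    // or microprofile-config.properties), with a default for local development
    @Inject
    @ConfigProperty(name = "inventory.url", defaultValue = "http://localhost:9081/inventory")
    String inventoryUrl;

    // Retry transient failures, then fall back to a safe default
    @Retry(maxRetries = 3)
    @Fallback(fallbackMethod = "fallbackCount")
    public int getInventoryCount(String item) {
        Client client = ClientBuilder.newClient();
        try {
            return client.target(inventoryUrl)
                         .path(item)
                         .request(MediaType.APPLICATION_JSON)
                         .get(Integer.class);
        } finally {
            client.close();
        }
    }

    int fallbackCount(String item) {
        return 0;
    }
}
```

This assumes the corresponding features (for example, mpConfig and mpFaultTolerance, or the umbrella microProfile feature) are enabled in your server.xml.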
To find out more about MicroProfile and Jakarta EE, check out these resources:
Open Liberty provides best-in-class support for both of these specifications, along with many other open source frameworks for developers to use. To view our hands-on guides that showcase how to make use of each API, check out the following resources:
A key aspect of agile delivery is continuous integration, and to do this effectively you need good testing from the start. Effective testing ensures that, as our applications evolve, we can trust in the quality of what we are delivering. There are many kinds of tests we can introduce into our application with varying scopes: unit tests, integration tests, system tests, and so on. There are also several open source frameworks that enable more effective or standardized testing, such as JUnit, Testcontainers, and MicroShed Testing.
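As a hedged sketch of a simple endpoint test (it assumes JUnit 5 on the test classpath and the hello endpoint from the earlier starter sketch running on localhost:9080, for example via dev mode), using only the JDK's built-in HTTP client:

```java
package it.com.example.demo;

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

public class HelloEndpointIT {

    @Test
    public void helloEndpointResponds() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9080/api/hello"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // The endpoint should be reachable and return the expected greeting
        assertEquals(200, response.statusCode());
        assertEquals("Hello from Open Liberty!", response.body());
    }
}
```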
To learn more about the different types of testing and how to enable each of them, view our series:
Next, you may be considering deploying your application to a cloud platform. There are a fair few cloud platforms to choose from, and some may suit your needs better than others. However, no matter which cloud provider you choose, you can run your Open Liberty application on any of them. You can deploy your application using virtual machines or, most likely, on a Kubernetes-based container orchestration platform in a public or hybrid cloud -- such as Red Hat OpenShift, Rancher, Azure Kubernetes Service, IBM Cloud Kubernetes Service, AWS Elastic Kubernetes Service, and more.
Deploying to Kubernetes platforms
Utilizing container orchestration tools or platforms can help to automate many tasks involved in deploying, managing, and scaling containerized applications. To try out using Kubernetes or Red Hat OpenShift, refer to the guides:
Alternatively, you could make use of operators. Operators are extensions to Kubernetes that simplify and automate tasks beyond the initial automation that Kubernetes or OpenShift provides. The Open Liberty Operator helps you deploy and manage applications on Kubernetes-based clusters; you can find out more in its documentation. To try out the operator for yourself, visit these guides:
If you've already selected a cloud platform and want to know more about how you can deploy a cloud-native Java application built with Liberty to it, check out the guides and tutorials below:
Source-to-Image (S2I) is an OpenShift toolkit for building reproducible container images from source code. S2I produces ready-to-run images by injecting source code into a container image and letting the container prepare that source code for execution. By creating self-assembling builder images, you can version and control your build environments exactly as you use container images to version your runtime environments. For more information:
Alternatively, you could make use of Paketo buildpacks, which transform application source code into container images and help to easily keep them updated. Paketo buildpacks implement the Cloud Native Computing Foundation buildpack specification to provide toolkits and workflows for building reproducible container images from source code. The Paketo Liberty buildpack provides the Open Liberty runtime to a workflow that produces an Open Container Initiative (OCI) image that can run just about anywhere. You can find out more about this in our blog Introducing the Paketo Liberty Buildpack.
Monitoring your application
Observability/Telemetry
Now that your application has been deployed to your chosen platform, it's time to consider observability and telemetry. Given the remote and distributed nature of cloud-native applications, it's important to build effective observability and telemetry into our applications to gain insights into their health and performance. You can use various APIs and tools to enable this within your Open Liberty application, including MicroProfile's health and metrics APIs, Jaeger, and Zipkin, to name a few.
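For example, here is a hedged sketch of a MicroProfile Health liveness check (the check name and the memory threshold are illustrative), which Liberty exposes under the /health endpoints when the health feature is enabled:

```java
package com.example.demo.health;

import jakarta.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

// Reported under /health/live when the health feature is enabled in server.xml
@Liveness
@ApplicationScoped
public class MemoryLivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        Runtime runtime = Runtime.getRuntime();
        long used = runtime.totalMemory() - runtime.freeMemory();
        long max = runtime.maxMemory();
        boolean healthy = used < max * 0.9; // illustrative threshold

        return HealthCheckResponse.named("memory")
                .status(healthy)
                .build();
    }
}
```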
To see how you can implement these within an Open Liberty application, see the guides:
It can also be useful to use a dashboard platform or tool to visualize these metrics and insights effectively. There is a wide variety of tools available for this, including Grafana, Instana, Splunk, Kibana, and the Elastic (ELK) stack, and Open Liberty works with many of them. For example, there is a Grafana Open Liberty dashboard that you can make use of.
To find out more about how you could make use of Grafana with an Open Liberty application, see the following blog series:
Instana can also be used to monitor Liberty. To learn more about effective application monitoring using Instana, check out our Observability, insights, and automation series. You can install the Instana agent on the same host as your Liberty instance, and it will automatically discover Liberty and begin collecting metrics from it. For instructions, check out Setting up and managing Instana and Monitoring WebSphere Liberty.
Or, alternatively, if you're interested in finding out more about how you could utilize other tools and platforms, take a look at the following resources:
As you can see throughout this article, there are tons of features, tools, and supporting technologies that enable developers using Open Liberty to create their Java applications and microservices effectively for a cloud-native environment -- from starting up and deploying your application for the first time, to using dev mode and containerization for effective agile development, to deployment in the cloud and monitoring in production.
Next steps
If you are eager to get hands-on with many of the features described in this article, we have a workshop for you. This deep-dive workshop lets you try out many of the APIs, tools, and features mentioned throughout this article, with each lab building upon the last: A technical deep dive on our cloud-hosted environment.
Although this article covers a lot of the development process, there are additional factors that need to be taken into consideration and addressed within your application, such as security and configuration injection. There are several guides on the Open Liberty website that show how you can enable these additional behaviors and APIs.