Beyond the 12 factors: 15-factor cloud-native Java applications

The original 12-factor app methodology provides a fantastic set of introductory guidelines for building and deploying applications in the cloud. If you’d like to learn more about this methodology, check out Creating cloud-native applications: 12-factor applications. However, the methodology was created almost a decade ago, and cloud technologies have advanced since then. To enable our applications to take full advantage of modern cloud infrastructure and tooling, and to thrive in the cloud rather than just survive there, Kevin Hoffman revised the original 12 factors and added three more, resulting in a new 15-factor app methodology: Beyond the Twelve-Factor App.

The new and revised 15 factors

  1. One codebase, one application
  2. API first
  3. Dependency management
  4. Design, build, release, and run
  5. Configuration, credentials, and code
  6. Logs
  7. Disposability
  8. Backing services
  9. Environment parity
  10. Administrative processes
  11. Port binding
  12. Stateless processes
  13. Concurrency
  14. Telemetry
  15. Authentication and authorization

The additional factors and why they are important

This new 15-factor app methodology is loosely based on the original 12 factors (each of which has been revised), but the most significant difference is the inclusion of three new factors:

  1. API first
  2. Telemetry
  3. Authentication and authorization

API first was introduced as a factor to place emphasis on the importance of APIs within cloud-native application development. Applications developed for the cloud are usually participants in an ecosystem of distributed services, and if their APIs are not clearly defined, this can lead to a nightmare of integration failures. Hence the importance of this factor in designing applications that thrive in the cloud.

Telemetry is another important addition. You may question why telemetry is needed as its own factor in addition to the logging factor, which was already included in the original 12-factor app methodology. Although logging is an important element of building cloud-native applications, it is generally a tool used during development to diagnose errors and code flows. Logging is typically oriented around the internal structure of your app, rather than reflecting real-world customer usage. Telemetry, on the other hand, is focused on data collection once the app is released into the wild. Telemetry and real-time app monitoring enable developers to monitor their application’s performance, health, and key metrics in this complicated and highly distributed environment.

The Authentication and authorization factor adds an important emphasis on security for cloud-native applications. Deploying applications in a cloud environment means that they can be transported across many data centers worldwide, executed within multiple containers, and accessed by an almost unlimited number of clients. So it’s vital that security is not an afterthought; it is a very important factor to consider when designing cloud-native applications.

Let’s look at these factors in more detail and investigate how we can enable them in our own applications.

API first

Image shows API-first diagram

Note: Some of the factors in these methodologies do not necessarily map to specific physical requirements imposed by the cloud but instead relate more to the habits of people and organizations when building cloud-native apps.

When you develop an enterprise application for the cloud, it often becomes a participant in an ecosystem of services. But if the application’s APIs are not clearly defined, this can lead to integration failures within that ecosystem. This is precisely what this factor is designed to help mitigate.

The addition of this factor formally recognizes APIs as a first-class artifact of the development process. An API-first approach involves developing APIs that are consistent and reusable, giving teams the ability to work against each other’s public contracts without interfering with each other’s internal development processes. By utilizing an API-first approach and clearly planning the various APIs that will be consumed by client applications and services, each API can be designed to be as effective as possible and can be easily mocked up. This enables greater collaboration with stakeholders and lets developers and architects test or vet their direction and plans before investing too much into supporting a given API. The clear design process also enables more effective documentation to be created for each API. Providing documentation that is well-designed, comprehensive, and easy to follow is crucial to ensuring developers have a great experience with the API.

An API description language can be helpful in establishing a contract for how the API is supposed to behave. API description languages are domain-specific languages that are suited to describing APIs. They are intuitive languages that can be easily written, read, and understood by API developers, API designers, and API architects. Compared to programming languages or API implementation languages, API description languages use a higher level of abstraction and a declarative paradigm, which means they express the what rather than the how: they can define the data structure of the possible responses (the what) instead of describing how the response is computed. During the initial design phases of an API, this can be especially helpful for mocking up an API and gathering feedback from stakeholders. Examples of API description languages include the OpenAPI Specification (formerly known as Swagger) and RAML.

To clearly define an API in the source code of an application, standard models from API specifications can be used. An API specification can provide a broad understanding of how an API behaves and how a particular API links with other APIs. It explains how the API functions and the results to expect when using it. One example of an API specification is the OpenAPI Specification. The OpenAPI v3 specification defines a standard language-agnostic interface for describing REST APIs, which allows documentation to be generated from the APIs themselves. The MicroProfile specification builds upon this standard with its OpenAPI 1.0 component, which provides a set of Java interfaces and programming models that allow Java developers to natively produce OpenAPI v3 documents from their JAX-RS applications.
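
As a minimal sketch of how this can look in code (assuming a JAX-RS application with the MicroProfile OpenAPI feature enabled, and using a hypothetical InventoryResource endpoint), MicroProfile OpenAPI annotations such as @Operation and @APIResponse describe a resource method so that it appears in the generated OpenAPI v3 document:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import org.eclipse.microprofile.openapi.annotations.Operation;
import org.eclipse.microprofile.openapi.annotations.responses.APIResponse;

// Hypothetical resource used only to illustrate the MicroProfile OpenAPI annotations.
@Path("/systems")
public class InventoryResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    @Operation(summary = "List systems",
               description = "Returns the systems currently stored in the inventory.")
    @APIResponse(responseCode = "200",
                 description = "The list of systems was returned successfully.")
    public Response listSystems() {
        // A real implementation would return inventory data here.
        return Response.ok("[]").build();
    }
}
```

On Open Liberty, the generated document is then served from the /openapi endpoint, which makes it straightforward to share the contract with stakeholders or to generate client code from it.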

If you’re interested in learning more about how you could make use of MicroProfile OpenAPI within your own application, check out our Open Liberty blog post Introducing MicroProfile OpenAPI 1.0.

Telemetry

Image shows telemetry diagram

The use of telemetry should be an essential part of any cloud-native application. Previously, building applications locally enabled developers to relatively easily inspect the inside of an application, attach a debugger, and perform hundreds of other tasks that give deep visibility into an app and its behavior. However, applications running in the cloud do not offer this kind of direct access, and your app instance might move anywhere in the world with little or no warning. In addition, you may start with only one instance of your app and, a few minutes later, have hundreds of copies of your application running. These are all incredibly powerful, useful features, but they present an unfamiliar pattern for real-time application monitoring and telemetry.

Telemetry can include domain-specific metrics (those needed or required by your specific organization, department, or team), as well as health and system metrics for your application. Health and system metrics include application start, shutdown, scaling, web request tracing, and the results of periodic health checks. MicroProfile Health and MicroProfile Metrics are fantastic open cloud-native Java tools that can be utilized to collect these sorts of metrics.
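
As a rough sketch of how these can be wired into an application (assuming the MicroProfile Health and MicroProfile Metrics features are enabled, and using hypothetical InventoryLivenessCheck and InventoryService classes), a liveness check and a domain-specific metric might look like this:

```java
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.HealthCheckResponseBuilder;
import org.eclipse.microprofile.health.Liveness;
import org.eclipse.microprofile.metrics.annotation.Counted;

// Hypothetical liveness check, reported under the /health/live endpoint.
@Liveness
@ApplicationScoped
public class InventoryLivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // Report "up" while the JVM still has some free-memory headroom.
        boolean healthy = Runtime.getRuntime().freeMemory() > 10 * 1024 * 1024;
        HealthCheckResponseBuilder builder = HealthCheckResponse.named("InventoryLivenessCheck");
        return healthy ? builder.up().build() : builder.down().build();
    }
}

// In a separate source file: a domain-specific metric exposed via the /metrics endpoint.
@ApplicationScoped
class InventoryService {

    @Counted(name = "inventoryAccessCount",
             description = "Number of times the inventory has been requested")
    public String listSystems() {
        return "[]";
    }
}
```

The health endpoints can then back Kubernetes liveness and readiness probes, while the metrics endpoint can be scraped by monitoring tools such as Prometheus.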

If you’re interested in trying out MicroProfile Health or MicroProfile Metrics, check out our developer guides.

The Open Liberty Operator can also be used to help improve telemetry for Kubernetes deployments.

To find out more about how you could make use of and integrate these telemetry tools into your own applications, have a look at the related Open Liberty content.

Authentication and authorization

Image shows authentication and authorization diagram

Security is a vital part of any application and cloud environment. Regardless of whether an app is destined for an enterprise, a mobile device, or the cloud, security should never be an afterthought. Cloud-native applications can secure their endpoints with role-based access control (RBAC). These roles dictate whether the calling client has sufficient permission for the application to honor the request, and they help track who is making each request for audit purposes.
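
For example, role-based access control can be declared directly on a JAX-RS endpoint using the standard security annotations. The following is a minimal sketch, assuming a hypothetical OrderResource and roles named admin and user that are mapped to real users or groups in the server configuration:

```java
import javax.annotation.security.RolesAllowed;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/orders")
public class OrderResource {

    // Any authenticated caller in the "admin" or "user" role may list orders.
    @GET
    @RolesAllowed({ "admin", "user" })
    public Response listOrders() {
        return Response.ok("[]").build();
    }

    // Deleting an order is restricted to administrators.
    @DELETE
    @Path("/{id}")
    @RolesAllowed("admin")
    public Response deleteOrder(@PathParam("id") String id) {
        return Response.noContent().build();
    }
}
```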

MicroProfile JWT (JSON Web Token), a token-based authentication mechanism to authenticate, authorize, and verify users, can be a useful open source Java API for enabling this factor. A JSON Web Token (JWT) is a self-contained token designed to securely transmit information as a JSON object. The information in this JSON object is digitally signed and can be trusted and verified by the recipient. For microservices, a token-based authentication mechanism offers a lightweight way for security controls and security tokens to propagate user identities across different services. JWT is becoming the most common token format because it follows well-defined and known standards. The MicroProfile JWT specification defines the required format of the JWT for authentication and authorization. It also maps JWT claims to various Jakarta EE container APIs and makes the set of claims available through getter methods.
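
As a minimal sketch of how this looks in application code (assuming MicroProfile JWT is configured on the server to verify incoming tokens, and using a hypothetical ProfileResource), the verified token and individual claims can be injected and combined with the role-based access control shown above:

```java
import javax.annotation.security.RolesAllowed;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.jwt.Claim;
import org.eclipse.microprofile.jwt.Claims;
import org.eclipse.microprofile.jwt.JsonWebToken;

// Hypothetical resource; the container verifies the incoming JWT before anything is injected.
@RequestScoped
@Path("/profile")
public class ProfileResource {

    @Inject
    private JsonWebToken jwt; // the caller's verified token

    @Inject
    @Claim(standard = Claims.email)
    private String email; // an individual claim injected directly

    // The token's "groups" claim is mapped to the roles checked by @RolesAllowed.
    @GET
    @RolesAllowed("user")
    public String getProfile() {
        return "Hello " + jwt.getName() + " <" + email + ">, groups: " + jwt.getGroups();
    }
}
```

On the server side, typically all that remains is to enable the MicroProfile JWT feature and point it at the issuer’s public key so that incoming tokens can be verified.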

To find out more and get hands-on with this technology, have a look at the related Open Liberty content.

In addition to this, there are also cloud-specific authentication tools that can be used, such as Azure Active Directory. See our Open Liberty blog post, Securing Open Liberty applications with Azure Active Directory using OpenID Connect, to find out how you can use this with your Open Liberty applications.

Next steps

If you haven’t already seen our 12-factor app methodology article, it’s a great place to look to understand how you can achieve the other factors within the 15-factor app methodology.