Configure an observable microservice with Appsody, OpenShift, and Open Liberty

The Appsody Operator, which works with OpenShift, enables you to quickly and easily deploy various runtime templates. These templates create a simple base project workspace for you to start developing on.

One of these templates is the Open Liberty-powered Java MicroProfile Appsody application stack that leverages Eclipse MicroProfile 3.0 technologies for developing microservices. This is a great foundation for developing an observable microservice.

Why MicroProfile? The Eclipse MicroProfile specification already provides important observability features such as MicroProfile Metrics and MicroProfile Health. The MicroProfile Health feature allows services to report their readiness and liveness statuses through two respective endpoints. The MicroProfile Metrics feature allows the runtime to track and expose metrics for monitoring through an endpoint.
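For example, a MicroProfile Health check is a CDI bean that implements the HealthCheck interface; the class and check names below are hypothetical, shown only to illustrate the API:

```java
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

// Hypothetical liveness check; the runtime aggregates every @Liveness
// bean into the response served on the /health/live endpoint.
@Liveness
@ApplicationScoped
public class AppLivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // Report UP as long as the JVM still has free memory headroom.
        long free = Runtime.getRuntime().freeMemory();
        return HealthCheckResponse.named("app-liveness")
                .state(free > 0)
                .build();
    }
}
```

A readiness check looks the same but uses the @Readiness annotation and is served on /health/ready.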

In this tutorial, we show you how to customize your application deployment and introduce various monitoring tools for consuming and visualizing your health and metrics data. Additionally, we show you how to leverage your Open Liberty runtime’s JSON logging ability to visualize logging data using Kibana.

Prerequisites

To complete the steps in this tutorial, you need to:

  • Install Appsody
  • Log in to a Docker registry
  • Log in to your OpenShift cluster
  • Have the following stacks deployed on an OpenShift cluster:
    • Prometheus and Grafana stack. Read the documentation for instructions on how to deploy this stack to an OpenShift cluster.
    • Elasticsearch, Fluentd, Kibana (EFK) stack. Read the documentation for instructions on how to deploy the EFK stack on an OpenShift cluster.

To use Prometheus to securely scrape metrics data from Open Liberty, your development and operations teams need to work together to configure the authentication credentials. For more information on this topic, see the Configure Open Liberty security section.

This tutorial was tested with Kabanero 0.2.0 and OpenShift Cluster 3.11.

Customize and deploy the Java MicroProfile stack

On your local system, create an empty directory that will serve as your project directory. Appsody will use the name of this directory as the name of your application.

In your empty project directory, initialize the Java MicroProfile stack by calling:

appsody init java-microprofile

The Java MicroProfile template is now deployed into your current directory, and you can start customizing the code. For more information about the MicroProfile stack, see the Appsody stacks GitHub page.

On your OpenShift cluster, you need to create a project namespace for your Appsody application stack to be deployed in. The following code shows you how to create this namespace; in our example, appsody-application is used as the project namespace.

oc new-project appsody-application

Configure Open Liberty security

The Java MicroProfile Appsody stack is already configured with basic authentication, using the <quickStartSecurity> element. The default username is admin and the default password is adminpwd.

Configure your own username and password values by modifying the <quickStartSecurity> attributes in the server.xml found under <appsody_project_directory>/src/main/liberty/config.

This authentication configuration affects how the operations team configures Prometheus to scrape data from the Open Liberty runtime. The operations team will create and apply a secret containing the username and password. The Prometheus Service Monitor will leverage the secret to authenticate itself when scraping the /metrics endpoint of the Open Liberty runtime.

The operations team may have already designated a username and password. If so, substitute that username and password into the server.xml configuration. Alternatively, as a developer you can deploy your own Service Monitor that the Prometheus deployment can pull data from. See the Deployment with Service Monitor section.

The following code listing shows the server.xml with authentication specified:

<server description="Liberty server">
    <featureManager>
        <feature>microProfile-3.0</feature>
    </featureManager>

    <quickStartSecurity userName="admin" userPassword="adminpwd"/>
    <keyStore id="defaultKeyStore" location="key.jks" type="jks" password="mpKeystore"/>
    <httpEndpoint host="*" httpPort="${default.http.port}" httpsPort="${default.https.port}" id="defaultHttpEndpoint"/>

    <webApplication location="starter-app.war" contextRoot="/"/>
</server>

Enable Open Liberty JSON logging

The Open Liberty runtime is capable of emitting logging events into standard-out/console in JSON format. This allows powerful monitoring stacks such as Elasticsearch, Fluentd and Kibana (EFK) to consume, store, and visualize the data more effectively.

To enable Open Liberty’s JSON logging capabilities, modify the pom.xml to generate a bootstrap.properties file with the desired configuration values.

For example, change your code from:

...
    <bootstrapProperties>
        <default.http.port>${http.port}</default.http.port>
        <default.https.port>${https.port}</default.https.port>
        <app.context.root>${app.name}</app.context.root>
    </bootstrapProperties>
...

to:

...
    <bootstrapProperties>
        <default.http.port>${http.port}</default.http.port>
        <default.https.port>${https.port}</default.https.port>
        <app.context.root>${app.name}</app.context.root>
        <com.ibm.ws.logging.console.format>json</com.ibm.ws.logging.console.format>
        <com.ibm.ws.logging.console.source>message,trace,accessLog,ffdc,audit</com.ibm.ws.logging.console.source>
        <com.ibm.ws.logging.console.log.level>info</com.ibm.ws.logging.console.log.level>
        <com.ibm.ws.logging.message.format>json</com.ibm.ws.logging.message.format>
        <com.ibm.ws.logging.message.source></com.ibm.ws.logging.message.source>
        <com.ibm.ws.logging.trace.file.name>stdout</com.ibm.ws.logging.trace.file.name>
    </bootstrapProperties>
...

When your server starts, the Open Liberty runtime interprets these values, and all subsequent logs emitted to the console consist of the sources defined by these properties. Additionally, the settings in this snippet disable output to the messages.log and trace.log files.
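With these settings in place, each console log line is a single JSON object. The line below is an illustrative example, not real server output; the exact set of fields varies by event type:

```json
{"type":"liberty_message","host":"...","ibm_datetime":"...","loglevel":"AUDIT","module":"com.ibm.ws.kernel.feature.internal.FeatureManager","message":"CWWKF0011I: The server is ready."}
```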

See Analyzing Open Liberty logs for next steps.

See the Open Liberty logging documentation for more information regarding the configuration of Open Liberty’s logging capabilities.

Enable Open Liberty metrics

When both the monitor-1.0 and mpMetrics-x.x features are configured, the Open Liberty runtime tracks additional metrics. The microProfile-3.0 feature automatically pulls in the mpMetrics-2.0 feature.

Enable the monitor-1.0 feature by adding it to the featureManager element in <appsody_project_directory>/src/main/liberty/config/server.xml:

<featureManager>
   <feature>microProfile-3.0</feature>
   <feature>monitor-1.0</feature>
</featureManager>

You can first test your Appsody application locally by calling:

appsody run

You can view your metrics on the /metrics endpoint by going to http://localhost:9080/metrics. When prompted for authentication credentials, use the user name and password you configured above.

Deploy your application to OpenShift

Now that your Appsody application is complete, make sure you are logged into your Docker repository and then deploy the application to your OpenShift cluster using the following command:

appsody deploy -t demo-repo/java-microprofile-demo:latest --push --namespace appsody-application

What’s happening in the code? Let’s take a quick look:

  • The -t flag tags our image.
  • The --push flag pushes the image to an external Docker registry.
  • The --namespace flag tells the OpenShift cluster to deploy this Appsody application under the specified namespace.
  • demo-repo is a sample repository name; substitute your own repository name.
  • appsody-application is the project namespace; substitute your own project namespace.

As part of the deployment process, the Appsody CLI checks if an Appsody Operator is already deployed in the namespace and deploys it if necessary. The deployment process then generates a deployment manifest of your Appsody application suited for that operator and applies it. Your application is now deployed onto the OpenShift cluster.

A file named app-deploy.yaml is also generated in your local project directory. This is the yaml file that is deployed onto your OpenShift cluster. You can further modify this file with extra configuration and reapply it by executing:

oc apply -f app-deploy.yaml

A Service Monitor created by the operations team is configured to monitor deployments with specific labels. Check with your operations team to identify the label key-value pair, then apply those labels to your app-deploy.yaml and redeploy it.

For example, if the Service Monitor is watching for the label app with the value demo:

metadata:
  labels: 
    app: demo

Alternatively, you can deploy your own Service Monitor through the Appsody Operator, which handles label matching for you. See the following section.

Deployment with Service Monitor

As an additional step, you can deploy a Service Monitor into your OpenShift cluster by modifying the app-deploy.yaml and redeploying it. For developers, this gives you more direct control over connecting your application deployment with Prometheus. Instead of waiting for a member of the operations team to configure a Service Monitor, you can do it yourself.

Add the following configuration:

  monitoring:
    endpoints:
    - basicAuth:
        password:
          key: password
          name: metrics-liberty
        username:
          key: username
          name: metrics-liberty
      interval: 10s
      tlsConfig:
        insecureSkipVerify: true
    labels:
      k8s-app: ""

The Prometheus deployment may monitor Service Monitors with specific labels. In this example, the Prometheus deployment needs to monitor for Service Monitors with the k8s-app label. Additionally, the Prometheus deployment may only monitor namespaces with certain labels.

You need to communicate with your operations team to see what label is needed so that your Service Monitor and namespace gets picked up.

The basicAuth section defines what username and password you should use for authentication when accessing the /metrics endpoint.

In this example, metrics-liberty is a reference to a secret named metrics-liberty that contains the base64-encoded username and password values. Either the developer or the operations team can create this secret. The secret needs to be created in the same project namespace as the application deployment and Service Monitor. See Configure Open Liberty security to review how to set up authentication security for the underlying Open Liberty runtime.
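For reference, such a secret could be created from a manifest like the following. The values shown match the quickStartSecurity defaults from earlier; substitute your own username, password, and namespace:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: metrics-liberty
  namespace: appsody-application
type: Opaque
# stringData accepts plain values; Kubernetes stores them base64-encoded.
stringData:
  username: admin
  password: adminpwd
```

Apply it with oc apply -f, using whatever filename you saved the manifest under.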

The following code shows the app-deploy.yaml with the monitoring section.

apiVersion: appsody.dev/v1beta1
kind: AppsodyApplication
metadata:
  name: my-appsody-application
spec:
  # Add fields here
  version: 1.0.0
  applicationImage: demo-repo/java-microprofile-demo:latest
  stack: java-microprofile
  service:
    type: NodePort
    port: 9080
    annotations:
      prometheus.io/scrape: 'true'
  readinessProbe:
    failureThreshold: 12
    httpGet:
      path: /health/ready
      port: 9080
    initialDelaySeconds: 5
    periodSeconds: 2
  livenessProbe:
    failureThreshold: 12
    httpGet:
      path: /health/live
      port: 9080
    initialDelaySeconds: 5
    periodSeconds: 2
  monitoring:
    endpoints:
    - basicAuth:
        password:
          key: password
          name: metrics-liberty
        username:
          key: username
          name: metrics-liberty
      interval: 10s
      tlsConfig:
        insecureSkipVerify: true
    labels:
      k8s-app: ""
  expose: true
  createKnativeService: false

Analyze the Open Liberty logs

View logs using Kibana dashboards

Now that the Open Liberty runtime is emitting JSON-formatted logs, we can leverage the EFK stack to help us monitor these logging events. Fluentd collects the JSON data and sends it to Elasticsearch for storage and indexing. Kibana then visualizes the data.

Kibana dashboards are provided for visualizing events from the Open Liberty runtime. Retrieve available Kibana dashboards built for analyzing Liberty logging events here.

Note: To use these dashboards, logging events must be emitted in JSON format to the standard output. If you have not already configured the Open Liberty runtime to do so, see the Enable Open Liberty JSON logging section.

View logs from the command line

To view logs from the command line, use the oc logs command as follows:

oc logs -f pod_name -n namespace

where pod_name is the name of your Open Liberty pod and namespace is the namespace your pod is running in.

You can use command-line JSON parsers, like JSON Query tool (jq), to create human-readable views of JSON-formatted logs. In the following example, the logs are piped through grep to ensure that the message field is there before jq parses the line:

oc logs -f pod_name -n namespace | \
  grep --line-buffered message | \
  jq .message -r
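To see how this pipeline behaves, you can feed it a sample JSON log line. The line below is illustrative, not real server output:

```shell
echo '{"type":"liberty_message","loglevel":"INFO","message":"CWWKF0011I: The server is ready"}' | \
  grep --line-buffered message | \
  jq .message -r
# prints: CWWKF0011I: The server is ready
```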

Monitor the health of your Java MicroProfile Appsody stack

MicroProfile Health allows services to report their readiness and liveness status, and it publishes the overall health status to defined endpoints. If a service reports UP, then it’s available. If the service reports DOWN, then it’s unavailable. MicroProfile Health reports an individual service status at the endpoint and indicates the overall status as UP if all the services are UP. A service orchestrator can then use the health statuses to make decisions.

Health data is available on the /health/live and /health/ready endpoints for the liveness checks and for the readiness checks, respectively.

Kubernetes provides liveness and readiness probes that check the health of your containers. These probes can check files in your containers, check a TCP socket, or make HTTP requests. MicroProfile Health exposes readiness and liveness endpoints on your microservices, as described above, and Kubernetes polls these endpoints as specified by the probes to react to any change in the microservice’s status.

These Kubernetes liveness and readiness probes are already pre-configured to point at the respective MicroProfile Health endpoints in the Appsody Operator and the MicroProfile Appsody stack configuration files, as shown in the readinessProbe and livenessProbe sections of the app-deploy.yaml above.

You can read more information about Kubernetes liveness and readiness configuration here.

Monitor the metrics of your MicroProfile Appsody stack

A MicroProfile Metrics-enabled Open Liberty runtime is capable of tracking and observing metrics from the JVM and Open Liberty server, as well as tracking metrics instrumented within the deployed application. Metrics data is available on the /metrics endpoint. The tracked metrics data can then be scraped by Prometheus and visualized with Grafana.
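Beyond the built-in base and vendor metrics, you can instrument your own application code with MicroProfile Metrics annotations. The resource path and metric names below are hypothetical:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;

@Path("/greeting")
public class GreetingResource {

    // Hypothetical application metrics; the runtime exposes them on the
    // /metrics endpoint alongside the built-in base and vendor metrics.
    @GET
    @Counted(name = "greetingCount", absolute = true,
             description = "Number of times the greeting resource is requested")
    @Timed(name = "greetingTime", absolute = true,
           description = "Time taken to return the greeting")
    public String greeting() {
        return "Hello from the MicroProfile stack";
    }
}
```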

There are IBM-provided Grafana dashboards that leverage the metrics tracked from the JVM as well as the Open Liberty runtime. Find the appropriate dashboards here.

Summary

Using the Open Liberty-powered Java MicroProfile Appsody stack, we’ve configured a microservice that leverages MicroProfile Health, MicroProfile Metrics, and Liberty’s JSON logging for greater observability. We integrated Elasticsearch, Fluentd, and Kibana to retrieve, store, and visualize logging data, and used Prometheus and Grafana to do the same for metrics data.

Next steps

David Chan
Prashanth Gunapalasingam