Configure an observable microservice with Appsody, OpenShift, and Open Liberty

Editor’s note: This tutorial was updated on April 9, 2020 to work with Kabanero 0.6.0 and Red Hat OpenShift Container Platform 4.3

The Appsody Operator, which works with OpenShift, enables you to quickly and easily deploy various runtime templates. These templates create a simple base project workspace for you to start developing on.

One of these templates is the Java Open Liberty Appsody application stack, powered by Open Liberty, which leverages Eclipse MicroProfile 3.2 technologies for developing microservices. This is a great foundation for developing an observable microservice.

Why MicroProfile? The Eclipse MicroProfile specification already provides important observability features such as MicroProfile Metrics and MicroProfile Health. The MicroProfile Health feature allows services to report their readiness and liveness statuses through two respective endpoints. The MicroProfile Metrics feature allows the runtime to track and expose metrics for monitoring through an endpoint.

In this tutorial, we show you how to customize your application deployment and introduce various monitoring tools for consuming and visualizing your health and metrics data. Additionally, we show you how to leverage your Open Liberty runtime’s JSON logging ability to visualize logging data using Kibana.

Prerequisites

To complete the steps in this tutorial, you need to:

  • Install Appsody
  • Log in to a Docker registry
  • Log in to your OpenShift cluster
  • Have the following stacks deployed on an OpenShift cluster:
    • Prometheus and Grafana stack. Read the documentation for instructions on how to deploy this stack to an OpenShift cluster.
    • Elasticsearch, Fluentd, Kibana (EFK) stack. Read the documentation for instructions on how to deploy the EFK stack on an OpenShift cluster.

To use Prometheus to securely scrape metrics data from Open Liberty, your development and operations teams need to work together to configure the authentication credentials. For more information, see the Configure Open Liberty security section below.

Customize and deploy the Java Open Liberty stack

On your local system, create an empty directory that will serve as your project directory. Appsody will use the name of this directory as the name of your application.
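
For example, a minimal setup from the command line (java-openliberty-demo is just a sample directory name, chosen to match the image name used later in this tutorial):

mkdir java-openliberty-demo
cd java-openliberty-demo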

In your empty project directory, initialize the Java Open Liberty stack by calling:

appsody init java-openliberty

The Java Open Liberty template is now initialized in your current directory, and you can start customizing the code.

For more information regarding the java-openliberty stack, see the Appsody stack GitHub page.

On your OpenShift cluster, you need to create a project namespace where you will deploy your Appsody application stack. The following command creates this namespace; in our example, appsody-application is used as the project namespace.

oc new-project appsody-application

Configure Open Liberty security

The Java Open Liberty Appsody stack is configured with basic authentication for local development, using the <quickStartSecurity> element in the quick-start-security.xml configuration found under <appsody_project_directory>/src/main/liberty/config/configDropins/defaults. The default username is admin and the password is generated at startup.

The quick-start-security.xml configuration is only for prototyping in local development, so for deployment to OpenShift, add the <quickStartSecurity> element to the server.xml configuration found under <appsody_project_directory>/src/main/liberty/config.

The operations team may have already designated a username and password. If so, substitute that username and password by modifying the <quickStartSecurity> attributes in the server.xml configuration. You can also configure your own username and password values.

The following code shows the server.xml with authentication specified:

<server description="Liberty server">
    <featureManager>
        <feature>microProfile-3.2</feature>
    </featureManager>

    <quickStartSecurity userName="admin" userPassword="adminpwd"/>

    <httpEndpoint host="*" httpPort="${default.http.port}" httpsPort="${default.https.port}" id="defaultHttpEndpoint"/>

    <webApplication location="starter-app.war" contextRoot="/"/>
</server>

Enable Open Liberty JSON logging

The Open Liberty runtime can emit logging events to standard output (the console) in JSON format. This allows powerful monitoring stacks such as Elasticsearch, Fluentd, and Kibana (EFK) to consume, store, and visualize the data more effectively.

To enable Open Liberty’s JSON logging capabilities, modify the pom.xml to generate a bootstrap.properties file with the desired configuration values.

For example, change your code from:

... 
    <bootstrapProperties>
       <default.http.port>${http.port}</default.http.port>
       <default.https.port>${https.port}</default.https.port>
       <app.context.root>${app.name}</app.context.root>
    </bootstrapProperties>
...

to:

... 
    <bootstrapProperties>
       <default.http.port>${http.port}</default.http.port>
       <default.https.port>${https.port}</default.https.port>
       <app.context.root>${app.name}</app.context.root>
       <com.ibm.ws.logging.console.format>json</com.ibm.ws.logging.console.format>
       <com.ibm.ws.logging.console.source>message,trace,accessLog,ffdc,audit</com.ibm.ws.logging.console.source>
       <com.ibm.ws.logging.console.log.level>info</com.ibm.ws.logging.console.log.level>
       <com.ibm.ws.logging.message.format>json</com.ibm.ws.logging.message.format>
       <com.ibm.ws.logging.message.source></com.ibm.ws.logging.message.source>
       <com.ibm.ws.logging.trace.file.name>stdout</com.ibm.ws.logging.trace.file.name>
    </bootstrapProperties>
...

When your server starts, the Open Liberty runtime interprets these values, and all subsequent logs emitted to the console consist of the sources defined by these properties. Additionally, the settings defined in this snippet disable output to messages.log and trace.log.

One of the Kibana dashboards used in this guide also requires access logging to be enabled in the Open Liberty server configuration in order to visualize inbound client requests handled by HTTP endpoints. To enable it, add the following to the server.xml configuration found under <appsody_project_directory>/src/main/liberty/config:

... 
    <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="${default.http.port}" httpsPort="${default.https.port}">
        <accessLogging logFormat='%{R}W %h %u %t "%r" %s %b %D %{User-agent}i'/>
    </httpEndpoint>
...

See Analyze the Open Liberty logs for next steps.

See the Open Liberty logging documentation for more information regarding the configuration of Open Liberty’s logging capabilities.

Enable Open Liberty metrics

When both the monitor-1.0 and mpMetrics-x.x features are configured, the Open Liberty runtime tracks additional metrics. The microProfile-3.2 feature already starts up the mpMetrics-2.2 feature, so you only need to add monitor-1.0.

Add the monitor-1.0 feature to <appsody_project_directory>/src/main/liberty/config/server.xml, as shown in the following server.xml snippet:

<featureManager>
   <feature>microProfile-3.2</feature>
   <feature>monitor-1.0</feature>
</featureManager>

You can first test your Appsody application locally by calling:

appsody run

You can view your metrics on the /metrics endpoint by going to http://localhost:9080/metrics. When prompted for authentication credentials, use the user name and password you configured above.
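
For example, you can query the endpoint with curl. This sketch assumes the stack’s default HTTPS port of 9443 and the credentials configured in the server.xml above; the -k flag skips certificate verification for the locally generated development certificate:

curl -k -u admin:adminpwd https://localhost:9443/metrics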

Deploy your application to OpenShift

Now that your Appsody application is complete, make sure you are logged into your Docker repository and then deploy the application to your OpenShift cluster using the following command:

appsody deploy -t demo-repo/java-openliberty-demo:latest --push --namespace appsody-application

What’s happening in this command? Let’s take a quick look:

  • The -t tags our image.
  • The --push pushes the image to an external Docker registry.
  • The --namespace tells the OpenShift cluster that we want to deploy this Appsody application under the specified namespace.
  • demo-repo is the sample repository name. Substitute your own repository name.
  • appsody-application is the project namespace. Substitute your own project namespace.

As part of the deployment process, the Appsody CLI checks if an Appsody Operator is already deployed in the namespace and deploys it if necessary. The deployment process then generates a deployment manifest of your Appsody application suited for that operator and applies it. Your application is now deployed onto the OpenShift cluster.
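
If you want to confirm the deployment, one option is to list the pods in your project namespace (assuming the appsody-application namespace created earlier):

oc get pods -n appsody-application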

A file named app-deploy.yaml is also generated in your local project directory. This is the yaml file that is deployed onto your OpenShift cluster. You can further modify this file with extra configuration and reapply it by executing:

oc apply -f app-deploy.yaml

Deploy your own Service Monitor through the Appsody Operator. See the following section, Deployment with Service Monitor, for how to do this; in that case, label matching is handled for you.

Alternatively, the operations team may have already created a Service Monitor that watches for deployments with specific labels. Communicate with your operations team to identify the label key-value pair, apply those labels to your app-deploy.yaml, and redeploy it.

For example, if the Service Monitor watches for the label app with the value demo:

metadata:
  labels: 
    app: demo

Deployment with Service Monitor

You can deploy a Service Monitor into your OpenShift cluster by modifying the app-deploy.yaml and redeploying it. For developers, this gives you more direct control over connecting your application deployment with Prometheus.

Add the following monitoring configuration under spec in your app-deploy.yaml:

  monitoring:
    endpoints:
    - basicAuth:
        password:
          key: password
          name: metrics-liberty
        username:
          key: username
          name: metrics-liberty
      interval: 10s
      tlsConfig:
        insecureSkipVerify: true
    labels:
      k8s-app: ''

The Prometheus deployment may be configured to select only Service Monitors with specific labels. In this example, the Prometheus deployment looks for Service Monitors with the k8s-app label. Additionally, the Prometheus deployment may only monitor namespaces with certain labels.

You need to communicate with your operations team to find out which labels are needed so that your Service Monitor and namespace get picked up.

The basicAuth section defines the username and password that Prometheus uses to authenticate when it accesses the /metrics endpoint.

In this example, metrics-liberty refers to a secret named metrics-liberty that contains the username and password values. Either the developer or the operations team can create this secret, but it must be created in the same project namespace as the application deployment and Service Monitor. See Configure Open Liberty security to review how to set up authentication security for the underlying Open Liberty runtime.
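
As a sketch, assuming the quickStartSecurity credentials from the server.xml above and the appsody-application namespace created earlier, the secret could be created with:

oc create secret generic metrics-liberty \
  --from-literal=username=admin \
  --from-literal=password=adminpwd \
  -n appsody-application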

The following code shows the app-deploy.yaml with the monitoring section.

apiVersion: appsody.dev/v1beta1
kind: AppsodyApplication
metadata:
  name: java-openliberty-demo
spec:
  # Add fields here
  version: 1.0.0
  applicationImage: demo-repo/java-openliberty-demo:latest
  stack: java-openliberty
  service:
    type: NodePort
    port: 9080
    annotations:
      prometheus.io/scrape: 'true'
  readinessProbe:
    failureThreshold: 12
    httpGet:
      path: /health/ready
      port: 9080
    initialDelaySeconds: 5
    periodSeconds: 2
  livenessProbe:
    failureThreshold: 12
    httpGet:
      path: /health/live
      port: 9080
    initialDelaySeconds: 5
    periodSeconds: 2
  monitoring:
    endpoints:
    - basicAuth:
        password:
          key: password
          name: metrics-liberty
        username:
          key: username
          name: metrics-liberty
      interval: 10s
      tlsConfig:
        insecureSkipVerify: true
    labels:
      k8s-app: ''
  expose: true
  createKnativeService: false

Analyze the Open Liberty logs

View logs using Kibana dashboards

Now that the Open Liberty runtime is emitting JSON-formatted logs, we can leverage the EFK stack to help us monitor these logging events. Fluentd collects the JSON data and sends it to Elasticsearch for storage and indexing. Kibana then visualizes the data.

Kibana dashboards are provided for visualizing events from the Open Liberty runtime. Retrieve available Kibana dashboards built for analyzing Liberty logging events here.

Note: To use these dashboards, logging events must be emitted in JSON format to standard output. If you have not already configured the Open Liberty runtime to do so, see Enable Open Liberty JSON logging.

Import Kibana dashboards

To import the Kibana dashboards, complete the following steps:

  1. In your OpenShift Container Platform web console, navigate to Monitoring > Logging to access Kibana.

  2. In Kibana, under Management > Saved Objects, click Import to browse through your filesystem for your desired dashboard.

  3. You can view your imported dashboards under the Dashboards tab in the sidebar.

  4. Click on your imported dashboard to see your log data visualized.

View logs from the command line

To view logs from the command line, use the oc logs command as follows:

oc logs -f pod_name -n namespace

where pod_name is the name of your Open Liberty pod and namespace is the namespace your pod is running in.

You can use command-line JSON parsers, like JSON Query tool (jq), to create human-readable views of JSON-formatted logs. In the following example, the logs are piped through grep to ensure that the message field is there before jq parses the line:

oc logs -f pod_name -n namespace | \
  grep --line-buffered message | \
  jq .message -r

Monitor the health of your Java Open Liberty Appsody stack

MicroProfile Health allows services to report their readiness and liveness status, and it publishes the overall health status to defined endpoints. If a service reports UP, it is available; if it reports DOWN, it is unavailable. MicroProfile Health reports each individual check’s status at the endpoint and indicates the overall status as UP only if all checks are UP. A service orchestrator can then use these health statuses to make decisions.

Health data is available on the /health/live and /health/ready endpoints for the liveness checks and for the readiness checks, respectively.
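
For example, here is a minimal sketch of a custom readiness check you could add to the application code. The package and class names are illustrative, not part of the generated starter app:

// Illustrative readiness check; adjust the package to match your project
package dev.appsody.starter.health;

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Readiness;

// Discovered as a CDI bean and reported under the /health/ready endpoint
@Readiness
@ApplicationScoped
public class StarterReadinessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // Replace with a real readiness test, such as checking a required connection
        boolean ready = true;
        return HealthCheckResponse.named("StarterReadinessCheck").state(ready).build();
    }
}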

Kubernetes provides liveness and readiness probes that are used to check the health of your containers. These probes can check certain files in your containers, check a TCP socket, or make HTTP requests. MicroProfile Health exposes readiness and liveness endpoints on your microservices, as described above. Kubernetes will poll these endpoints, as specified by the probes, and react appropriately to any change in the microservice’s status.

These Kubernetes liveness and readiness probes are already pre-configured to the respective MicroProfile Health endpoints in the Appsody Operator and the Open Liberty Appsody stack configuration files, as shown in the readinessProbe and livenessProbe sections of the app-deploy.yaml above.

You can read more information about Kubernetes liveness and readiness configuration here.

Monitor the metrics of your Open Liberty Appsody stack

A MicroProfile Metrics-enabled Open Liberty runtime can track and expose metrics from the JVM and the Open Liberty server, as well as metrics instrumented within the deployed application. Metrics data is available on the /metrics endpoint. The tracked metrics data can then be scraped by Prometheus and visualized with Grafana.
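
For example, here is a minimal sketch of instrumenting application code with MicroProfile Metrics annotations. The resource class and metric names are illustrative; application-defined metrics like these appear under the application scope of the /metrics endpoint:

// Illustrative JAX-RS resource with MicroProfile Metrics annotations; adjust the package to match your project
package dev.appsody.starter;

import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;

@ApplicationScoped
@Path("/example")
public class ExampleResource {

    @GET
    @Counted(name = "exampleRequestCount", absolute = true,
             description = "Number of requests to the /example endpoint")
    @Timed(name = "exampleRequestTime", absolute = true,
           description = "Time spent handling requests to the /example endpoint")
    public String getExample() {
        // Each call increments the counter and records the handling time
        return "hello";
    }
}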

There are IBM-provided Grafana dashboards that leverage the metrics tracked from the JVM as well as the Open Liberty runtime. Find the appropriate dashboards here.

Import Grafana dashboards using the Grafana Operator

In your OpenShift Container Platform web console, go to your project that has the Prometheus/Grafana operator stack installed and navigate to Installed Operators.

Under the installed Grafana Operator, click on Grafana Dashboard.

Here, you can see your existing dashboards. To start a new Grafana dashboard, click “Create Grafana Dashboard”.

Under the JSON definition in the YAML file, remove the pre-existing content and copy in your desired dashboard. Click Create to finish.

To see your dashboards visualized in Grafana, navigate to Networking > Routes and access the route exposed for Grafana.

Summary

Using the Java Open Liberty Appsody stack, we’ve now configured a microservice that uses both MicroProfile Health and MicroProfile Metrics, along with Liberty’s JSON logging, for greater observability in combination with a variety of monitoring tools. We’ve integrated with powerful monitoring tools such as Elasticsearch, Fluentd, and Kibana to retrieve, store, and visualize logging data. We have also used Prometheus and Grafana to help retrieve, store, and visualize metrics data.

Next steps