Configure an observable microservice with Appsody, OpenShift, and Open Liberty

Editor’s note: This tutorial was updated on 12 March 2020 to work with Kabanero 0.3.0 and Red Hat OpenShift Container Platform 4.2

The Appsody Operator, which works with OpenShift, enables you to quickly and easily deploy various runtime templates. These templates create a simple base project workspace for you to start developing on.

One of these templates is the Open Liberty-powered Java MicroProfile Appsody application stack that leverages Eclipse MicroProfile 3.0 technologies for developing microservices. This is a great foundation for developing an observable microservice.

Why MicroProfile? The Eclipse MicroProfile specification already provides important observability features such as MicroProfile Metrics and MicroProfile Health. The MicroProfile Health feature allows services to report their readiness and liveness statuses through dedicated endpoints, and the MicroProfile Metrics feature allows the runtime to track and expose metrics for monitoring through an endpoint.

In this tutorial, we show you how to customize your application deployment and introduce various monitoring tools for consuming and visualizing your health and metrics data. Additionally, we show you how to leverage your Open Liberty runtime’s JSON logging ability to visualize logging data using Kibana.

Prerequisites

To complete the steps in this tutorial, you need to:

  • Install Appsody
  • Log in to a Docker registry
  • Log in to your OpenShift cluster
  • Have the following stacks deployed on an OpenShift cluster:
    • Prometheus and Grafana stack. Read the documentation for instructions on how to deploy this stack to an OpenShift cluster.
    • Elasticsearch, Fluentd, Kibana (EFK) stack. Read the documentation for instructions on how to deploy the EFK stack on an OpenShift cluster.

To use Prometheus to securely scrape metrics data from Open Liberty, your development and operations teams need to work together to configure the authentication credentials. For more information, see the Configure Open Liberty security section below.

Customize and deploy the Java MicroProfile stack

On your local system, create an empty directory that will serve as your project directory. Appsody will use the name of this directory as the name of your application.

In your empty project directory, initialize the Java MicroProfile stack by calling:

appsody init java-microprofile

The Java MicroProfile template is now initialized in your current directory, and you can start customizing the code.

For more information regarding the MicroProfile stack, see the Appsody stack GitHub page.

On your OpenShift cluster, you need to create a project namespace in which to deploy your Appsody application stack. The following command creates this namespace; in our example, appsody-application is used as the project namespace.

oc new-project appsody-application

Configure Open Liberty security

The Java MicroProfile Appsody stack is configured with basic authentication for local development, using the <quickStartSecurity> element in the quick-start-security.xml configuration found under <appsody_project_directory>/src/main/liberty/config/configDropins/defaults. The default username is admin and the default password is adminpwd.

The quick-start-security.xml configuration is only for local development, so for production, add the <quickStartSecurity> element to the server.xml configuration found under <appsody_project_directory>/src/main/liberty/config.

The operations team may have already designated a username and password. If so, substitute that username and password by modifying the <quickStartSecurity> attributes in the server.xml configuration. You can also configure your own username and password values.

The following code shows the server.xml with authentication specified:

<server description="Liberty server">
    <featureManager>
        <feature>microProfile-3.0</feature>
    </featureManager>

    <quickStartSecurity userName="admin" userPassword="adminpwd"/>
    <keyStore id="defaultKeyStore" location="key.jks" type="jks" password="mpKeystore"/>
    <httpEndpoint host="*" httpPort="${default.http.port}" httpsPort="${default.https.port}" id="defaultHttpEndpoint"/>

    <webApplication location="starter-app.war" contextRoot="/"/>
</server>

Enable Open Liberty JSON logging

The Open Liberty runtime can emit logging events to standard out (the console) in JSON format. This allows powerful monitoring stacks such as Elasticsearch, Fluentd, and Kibana (EFK) to consume, store, and visualize the data more effectively.

To enable Open Liberty’s JSON logging capabilities, modify the pom.xml to generate a bootstrap.properties file with the desired configuration values.

For example, change your code from:

...
    <bootstrapProperties>
        <default.http.port>${http.port}</default.http.port>
        <default.https.port>${https.port}</default.https.port>
        <app.context.root>${app.name}</app.context.root>
    </bootstrapProperties>
...

to:

...
    <bootstrapProperties>
        <default.http.port>${http.port}</default.http.port>
        <default.https.port>${https.port}</default.https.port>
        <app.context.root>${app.name}</app.context.root>
        <com.ibm.ws.logging.console.format>json</com.ibm.ws.logging.console.format>
        <com.ibm.ws.logging.console.source>message,trace,accessLog,ffdc,audit</com.ibm.ws.logging.console.source>
        <com.ibm.ws.logging.console.log.level>info</com.ibm.ws.logging.console.log.level>
        <com.ibm.ws.logging.message.format>json</com.ibm.ws.logging.message.format>
        <com.ibm.ws.logging.message.source></com.ibm.ws.logging.message.source>
        <com.ibm.ws.logging.trace.file.name>stdout</com.ibm.ws.logging.trace.file.name>
    </bootstrapProperties>
...

When your server starts, the Open Liberty runtime interprets these values, and all subsequent logs emitted to the console consist of the sources defined by these properties. Additionally, the settings in this snippet disable output to the messages.log and trace.log files.
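
For illustration, a single message event emitted in JSON format looks roughly like the following (the field list is abbreviated here and varies by event type):

{"type":"liberty_message","host":"appsody-app","ibm_datetime":"2020-03-12T16:40:05.272+0000","module":"com.ibm.ws.kernel.feature.internal.FeatureManager","loglevel":"AUDIT","message":"CWWKF0011I: The server defaultServer is ready to run a smarter planet."}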

See Analyze the Open Liberty logs for next steps.

See the Open Liberty logging documentation for more information regarding the configuration of Open Liberty’s logging capabilities.

Enable Open Liberty metrics

When both the monitor-1.0 and mpMetrics-x.x features are configured, the Open Liberty runtime tracks additional metrics. The microProfile-3.0 feature already starts the mpMetrics-2.0 feature, so you only need to add monitor-1.0.

Add the monitor-1.0 feature to the server.xml found under <appsody_project_directory>/src/main/liberty/config:

<featureManager>
   <feature>microProfile-3.0</feature>
   <feature>monitor-1.0</feature>
</featureManager>

You can first test your Appsody application locally by calling:

appsody run

You can view your metrics on the /metrics endpoint by going to http://localhost:9080/metrics. When prompted for authentication credentials, use the username and password that you configured earlier.
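
For example, you can also fetch the metrics with curl, assuming the credentials configured earlier and the stack's default HTTPS port of 9443 (an assumption; verify the https.port property in your pom.xml). The -k flag accepts the development keystore's self-signed certificate:

curl -k -u admin:adminpwd https://localhost:9443/metrics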

Deploy your application to OpenShift

Now that your Appsody application is complete, make sure you are logged in to your Docker registry, and then deploy the application to your OpenShift cluster using the following command:

appsody deploy -t demo-repo/java-microprofile-demo:latest --push --namespace appsody-application

What’s happening in the code? Let’s take a quick look:

  • The -t flag tags our image.
  • The --push flag pushes the image to an external Docker registry.
  • The --namespace flag tells the OpenShift cluster that we want to deploy this Appsody application under the specified namespace.
  • demo-repo is the sample repository name. Substitute your own repository name.
  • appsody-application is the project namespace. Substitute your own project namespace.

As part of the deployment process, the Appsody CLI checks if an Appsody Operator is already deployed in the namespace and deploys it if necessary. The deployment process then generates a deployment manifest of your Appsody application suited for that operator and applies it. Your application is now deployed onto the OpenShift cluster.

A file named app-deploy.yaml is also generated in your local project directory. This is the yaml file that is deployed onto your OpenShift cluster. You can further modify this file with extra configuration and reapply it by executing:

oc apply -f app-deploy.yaml

A Service Monitor created by the operations team will be configured to monitor a deployment with specific labels. Communicate with your operations team to identify this label key-value pair. You will need to apply these labels to your app-deploy.yaml and redeploy it.

For example, if the Service Monitor is watching for the label app with the value demo:

metadata:
  labels: 
    app: demo

Alternatively, you can deploy your own Service Monitor through the Appsody Operator, which handles label matching for you. See Deployment with Service Monitor below.

Configure monitoring for service monitors in other namespaces

By default, the Prometheus Operator installed via Operator Lifecycle Manager (OLM) only watches the monitoring namespace. For the Prometheus Operator to detect Service Monitors created in other namespaces, your operations team must apply the following configuration changes.

  1. In your monitoring namespace, edit the OperatorGroup to add your appsody-application namespace to the list of targeted namespaces to be watched. This changes the olm.targetNamespaces variable that the Prometheus Operator uses to detect namespaces so that it includes the appsody-application namespace.

     oc edit operatorgroup
    
       targetNamespaces:
       - prometheus-operator
       - appsody-application
    
  2. Since you changed the monitoring namespace’s OperatorGroup to monitor more than one namespace, the operators in this namespace must have the MultiNamespace installMode set to true. The Prometheus Operator installed via OLM has the MultiNamespace installMode set to false, which disables monitoring of more than one namespace. Make sure you change it to true:

     oc edit csv prometheusoperator.0.32.0
    
      spec:
       installModes:
       - supported: true
         type: OwnNamespace
       - supported: true
         type: SingleNamespace
       - supported: true
         type: MultiNamespace
       - supported: false
         type: AllNamespaces
    
  3. The same goes for the Grafana Operator. Edit the operator using:

     oc edit csv grafana-operator.v2.0.0
    
  4. In your monitoring namespace’s Prometheus instance, add the following so that Prometheus discovers Service Monitors in all namespaces:

      spec:
        serviceMonitorNamespaceSelector: {}
    
  5. Restart the Prometheus Operator and Grafana Operator pods to see the changes.
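
     One simple way to restart them is to delete the operator pods so that their deployments recreate them. Pod names depend on your installation, so list them first (replace prometheus-operator with your monitoring namespace if it differs):

     oc get pods -n prometheus-operator
     oc delete pod <prometheus_operator_pod> <grafana_operator_pod> -n prometheus-operator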

Deployment with Service Monitor

As an additional step, you can deploy a Service Monitor into your OpenShift cluster by modifying the app-deploy.yaml and redeploying it. For developers, this gives you more direct control over connecting your application deployment with Prometheus. Instead of waiting for a member of the operations team to configure a Service Monitor, you can do it yourself.

Add the following configuration:

  monitoring:
    endpoints:
    - basicAuth:
        password:
          key: password
          name: metrics-liberty
        username:
          key: username
          name: metrics-liberty
      interval: 10s
      tlsConfig:
        insecureSkipVerify: true
    labels:
      k8s-app: ""

The Prometheus deployment may monitor Service Monitors with specific labels. In this example, the Prometheus deployment needs to monitor for Service Monitors with the k8s-app label. Additionally, the Prometheus deployment may only monitor namespaces with certain labels.

You need to communicate with your operations team to see what label is needed so that your Service Monitor and namespace get picked up.

The basicAuth section defines the username and password that Prometheus uses to authenticate when accessing the /metrics endpoint.

In this example, metrics-liberty is a reference to a secret named metrics-liberty that contains the username and password values. Either the developer or the operations team can create this secret, as shown below. The secret needs to be created in the same project namespace as the application deployment and Service Monitor. See Configure Open Liberty security to review how to set up authentication security for the underlying Open Liberty runtime.
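
If the secret does not exist yet, either team can create it with a command along these lines, using the credentials configured in your server.xml (the values shown are this tutorial's defaults):

oc create secret generic metrics-liberty \
  --from-literal=username=admin \
  --from-literal=password=adminpwd \
  -n appsody-application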

The following code shows the app-deploy.yaml with the monitoring section.

apiVersion: appsody.dev/v1beta1
kind: AppsodyApplication
metadata:
  name: my-appsody-application
spec:
  # Add fields here
  version: 1.0.0
  applicationImage: demo-repo/java-microprofile-demo:latest
  stack: java-microprofile
  service:
    type: NodePort
    port: 9080
    annotations:
      prometheus.io/scrape: 'true'
  readinessProbe:
    failureThreshold: 12
    httpGet:
      path: /health/ready
      port: 9080
    initialDelaySeconds: 5
    periodSeconds: 2
  livenessProbe:
    failureThreshold: 12
    httpGet:
      path: /health/live
      port: 9080
    initialDelaySeconds: 5
    periodSeconds: 2
  monitoring:
    endpoints:
    - basicAuth:
        password:
          key: password
          name: metrics-liberty
        username:
          key: username
          name: metrics-liberty
      interval: 10s
      tlsConfig:
        insecureSkipVerify: true
    labels:
      k8s-app: ""
  expose: true
  createKnativeService: false

Analyze the Open Liberty logs

View logs using Kibana dashboards

Now that the Open Liberty runtime is emitting JSON-formatted logs, we can leverage the EFK stack to help us monitor these logging events. Fluentd collects the JSON data and sends it to Elasticsearch for storage and indexing. Kibana then visualizes the data.

Kibana dashboards are provided for visualizing events from the Open Liberty runtime. Retrieve available Kibana dashboards built for analyzing Liberty logging events here.

Note: To use these dashboards, logging events must be emitted in JSON format to standard output. If you have not already configured the Open Liberty runtime to do so, see Enable Open Liberty JSON logging above.

Importing Kibana dashboards

To import the Kibana dashboards, complete the following steps:

  1. In your OpenShift Container Platform web console, go to your project that has the EFK stack installed and navigate to Networking > Routes to access the route exposed for Kibana.

    Access the route exposed for Kibana

  2. In Kibana, under Management > Saved Objects, click Import to browse through your filesystem for your desired dashboard.

    Import your dashboard

  3. You can view your imported dashboards under the Dashboards tab in the sidebar.

    View your imported dashboards

  4. Click on your imported dashboard to see your log data visualized.

    See your log data visualized

View logs from the command line

To view logs from the command line, use the oc logs command as follows:

oc logs -f pod_name -n namespace

where pod_name is the name of your Open Liberty pod and namespace is the namespace your pod is running in.

You can use command-line JSON parsers, such as the JSON Query tool (jq), to create human-readable views of JSON-formatted logs. In the following example, the logs are piped through grep to ensure that the message field is present before jq parses the line:

oc logs -f pod_name -n namespace | \
  grep --line-buffered message | \
  jq .message -r

Monitor the health of your Java MicroProfile Appsody stack

MicroProfile Health allows services to report their readiness and liveness status, and it publishes the overall health status to defined endpoints. If a service reports UP, it’s available; if it reports DOWN, it’s unavailable. MicroProfile Health reports each individual check’s status at the endpoint and indicates the overall status as UP only if all the checks are UP. A service orchestrator can then use these health statuses to make decisions.

Health data is available on the /health/live and /health/ready endpoints for liveness and readiness checks, respectively.
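
Under the covers, these statuses come from health checks implemented with the MicroProfile Health API. As a rough sketch only (the class and check names here are illustrative, not part of the stack's starter code), a minimal liveness check looks like this:

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

// Discovered automatically as a CDI bean and reported on /health/live.
@Liveness
@ApplicationScoped
public class AppLivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        // A real check might inspect memory usage, thread health, or
        // connections to downstream services before reporting UP.
        return HealthCheckResponse.named("app-liveness").up().build();
    }
}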

Kubernetes provides liveness and readiness probes that check the health of your containers. These probes can check certain files in your containers, check a TCP socket, or make HTTP requests. MicroProfile Health exposes readiness and liveness endpoints on your microservices, as described above; Kubernetes polls these endpoints as specified by the probes and reacts appropriately to any change in the microservice’s status.

These Kubernetes liveness and readiness probes are already pre-configured to point at the respective MicroProfile Health endpoints in the Appsody Operator and the MicroProfile Appsody stack configuration files, as shown in the app-deploy.yaml example above.

You can read more information about Kubernetes liveness and readiness configuration here.

Monitor the metrics of your MicroProfile Appsody stack

A MicroProfile Metrics-enabled Open Liberty runtime is capable of tracking and observing metrics from the JVM and Open Liberty server, as well as tracking metrics instrumented within the deployed application. Metrics data is available on the /metrics endpoint. The tracked metrics data can then be scraped by Prometheus and visualized with Grafana.
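
Beyond the built-in JVM and server metrics, you can instrument your own application code with MicroProfile Metrics annotations. The following sketch is illustrative only (the resource class, path, and metric names are hypothetical):

import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.metrics.annotation.Counted;
import org.eclipse.microprofile.metrics.annotation.Timed;

@ApplicationScoped
@Path("/starter")
public class StarterResource {

    // Both metrics appear on the /metrics endpoint under the application scope.
    @GET
    @Counted(name = "starterRequests", description = "Number of requests to the starter endpoint")
    @Timed(name = "starterRequestTimer", description = "Time spent handling starter requests")
    public String getStarter() {
        return "Hello from the starter endpoint";
    }
}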

There are IBM-provided Grafana dashboards that leverage the metrics tracked from the JVM as well as the Open Liberty runtime. Find the appropriate dashboards here.

Importing Grafana dashboards using Grafana Operator

In your OpenShift Container Platform web console, go to your project that has the Prometheus/Grafana operator stack installed and navigate to Installed Operators.

Under the installed Grafana Operator, click on Grafana dashboard.

View the Grafana dashboard

Here, you can see your existing dashboards. To create a new Grafana dashboard, click Create Grafana Dashboard.

Create the Grafana dashboard

Under the JSON definition in the yaml file, remove the pre-existing content and copy in your desired dashboard. Click Create to finish.

Copy in your desired dashboard

To see your dashboards visualized in Grafana, navigate to Networking > Routes and access the route exposed for Grafana.

Access the route exposed for Grafana

Summary

Using the Open Liberty-powered Java MicroProfile Appsody stack, we’ve now configured a microservice that uses MicroProfile Health and MicroProfile Metrics, along with Liberty’s JSON logging, for greater observability in combination with a variety of monitoring tools. We’ve integrated with powerful monitoring tools such as Elasticsearch, Fluentd, and Kibana to retrieve, store, and visualize logging data, and we’ve used Prometheus and Grafana to retrieve, store, and visualize metrics data.

Next steps

David Chan
Prashanth Gunapalasingam
Ellen Lau