Tutorial

Monitoring your apps in Kubernetes with Prometheus and Spring Boot

Use Prometheus to expose the application metrics you need

In DevOps, creating and deploying an application is only one part of the CI/CD workflow. Application monitoring is critically important to ensure that your application is always available and working correctly. Effective application monitoring continuously assesses the state of the application and the underlying infrastructure by gathering granular application data and operating-system metrics, such as CPU use, memory use, and storage consumption.

Prometheus is an open source application monitoring system that offers a simple, text-based metrics format and an efficient way to handle large amounts of metrics data. With its powerful query language, you can visualize data and manage alerts; for example, a query such as rate(http_server_requests_seconds_count[5m]) computes the per-second request rate over a five-minute window. Prometheus supports various integrations, including Grafana for visual dashboards and PagerDuty and Slack for alert notifications. Prometheus also supports numerous products, including databases, server applications, Kubernetes, and Java Virtual Machines.

This tutorial shows how to address application monitoring for a Spring Boot application, using Docker and Helm in an IBM Cloud environment. IBM Cloud Kubernetes Service includes a Prometheus installation, so you can monitor your applications from the start.

Prerequisites

To complete the steps in this tutorial, you need to set up the following environment:

  • An IBM Cloud account with an IBM Cloud Kubernetes Service cluster
  • The kubectl and Helm command-line tools, configured for your cluster
  • Docker
  • A Java development environment with Maven for the Spring Boot application

Estimated time

Completing this tutorial should take about 30 minutes.

Configure Prometheus for a Spring Boot application

The Prometheus installation that comes with IBM Cloud carries the following requirements and assumptions:

  • Only services or pods with the annotation prometheus.io/scrape: 'true' are scraped.

  • The default path for the metrics is /metrics, but you can change it with the prometheus.io/path annotation.

  • The default port for pods is 9102, but you can adjust it with the prometheus.io/port annotation.

See the following Prometheus configuration from the ConfigMap:

$ kubectl describe cm monitoring-prometheus
...
  # Scrape config for service endpoints.
  #
  # The relabeling allows the actual service scrape endpoint to be configured
  # via the following annotations:
  #
  # * `prometheus.io/scrape`: Only scrape services that have a value of `true`
  # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need
  # to set this to `https` & most likely set the `tls_config` of the scrape config.
  # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
  # * `prometheus.io/port`: If the metrics are exposed on a different port to the
  # service then set this appropriately.
  - job_name: 'kubernetes-service-endpoints'
...
  # Example scrape config for pods
  #
  # The relabeling allows the actual pod scrape endpoint to be configured via the
  # following annotations:
  #
  # * `prometheus.io/scrape`: Only scrape pods that have a value of `true`
  # * `prometheus.io/path`: If the metrics path is not `/metrics` override this.
  # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the default of `9102`.
  - job_name: 'kubernetes-pods'
...

These default assumptions and configurations do not fit the defaults of a Spring Boot application. However, you can adjust them with a few simple annotations, as the following steps show.

  1. Enable Prometheus in the Spring Boot app.

    Adding further dependencies makes the Spring Boot application expose Prometheus metrics through a new endpoint: /actuator/prometheus. In Spring Boot 2.x, only the health and info endpoints are exposed over HTTP by default, so you might also need to add prometheus to the management.endpoints.web.exposure.include property in your application configuration.

    The following example shows the Spring Boot 2.x pom.xml file with Prometheus dependencies:

     <dependency>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-starter-actuator</artifactId>
     </dependency>
    
     <!-- Prometheus Support with Micrometer -->
     <dependency>
       <groupId>io.micrometer</groupId>
       <artifactId>micrometer-core</artifactId>
     </dependency>
    
     <dependency>
       <groupId>io.micrometer</groupId>
       <artifactId>micrometer-registry-prometheus</artifactId>
     </dependency>
    

    After the application starts, you can reach the new endpoint at http://localhost:8080/actuator/prometheus.

    See the following example output from the Prometheus endpoint:

     # HELP tomcat_global_received_bytes_total
     # TYPE tomcat_global_received_bytes_total counter
     tomcat_global_received_bytes_total{name="http-nio-8080",} 0.0
     # HELP tomcat_sessions_rejected_sessions_total
     # TYPE tomcat_sessions_rejected_sessions_total counter
     tomcat_sessions_rejected_sessions_total 0.0
     # HELP jvm_threads_states_threads The current number of threads having NEW state
     # TYPE jvm_threads_states_threads gauge
     jvm_threads_states_threads{state="runnable",} 7.0
     jvm_threads_states_threads{state="blocked",} 0.0
     jvm_threads_states_threads{state="waiting",} 12.0
     jvm_threads_states_threads{state="timed-waiting",} 4.0
     jvm_threads_states_threads{state="new",} 0.0
     jvm_threads_states_threads{state="terminated",} 0.0
     # HELP logback_events_total Number of error level events that made it to the logs
     # TYPE logback_events_total counter
     logback_events_total{level="warn",} 0.0
     logback_events_total{level="debug",} 0.0
     logback_events_total{level="error",} 0.0
     logback_events_total{level="trace",} 0.0
     logback_events_total{level="info",} 11.0
     # HELP jvm_gc_pause_seconds Time spent in GC pause
     # TYPE jvm_gc_pause_seconds summary
     jvm_gc_pause_seconds_count{action="end of major GC",cause="Metadata GC Threshold",} 1.0
     jvm_gc_pause_seconds_sum{action="end of major GC",cause="Metadata GC Threshold",} 0.046
     jvm_gc_pause_seconds_count{action="end of minor GC",cause="Metadata GC Threshold",} 1.0
     ...
    
  2. Adjust the Helm template for Prometheus recognition.

    In Spring Boot 2.x, the monitoring endpoints are located under the context path /actuator and are served on the application port, neither of which matches the Prometheus defaults described earlier. To adjust, set the described annotations on the Service resource.

    Adjust the Helm service template to add the annotations that register the application to be scraped by Prometheus:

     {{- with .Values.service.annotations }}
       annotations:
     {{ toYaml . | indent 4 }}
     {{- end }}
    

    The corresponding values.yaml file looks like the following example:

     service:
       type: ClusterIP
       port: 80
       # Monitoring: Adjust Prometheus configuration
       annotations:
         prometheus.io/scrape: 'true'
         prometheus.io/path: '/actuator/prometheus'
         prometheus.io/port: '8080'
     ...
    

    An alternative to the port definition is to use the filter.by.port.name: 'true' annotation and to give the port a name that starts with metrics as a prefix. Prometheus then collects the metrics from the correctly named port.

     apiVersion: v1
     kind: Service
     metadata:
       annotations:
         prometheus.io/scrape: 'true'
         filter.by.port.name: 'true'
       name: service-playground-service
     spec:
       ports:
       - name: metrics-prometheus
         targetPort: 8099
         port: 8099
         protocol: TCP
       - name: general
         targetPort: 8443
         port: 8443
         protocol: TCP
    

    To verify the rendered Helm templates, run the install command with the --dry-run --debug flags. The server renders the Helm templates and returns the resulting manifest files:

     $ helm install --dry-run --debug ./service-playground
    

    Deploying the application with the modified Service resource registers the application with Prometheus, which immediately begins gathering metrics.

Create custom metrics

Integrating the Prometheus libraries in Spring Boot provides a base set of metrics. If that set does not cover your needs, you can create custom metrics.

Metrics are uniquely identified by name and tags. The tags allow multiple views per dimension on the same metric. The following basic metrics are commonly supported:

  • Counter: A single monotonically increasing value, the count.

  • Timer: A metric for short-duration latency and the frequency of an event (at minimum, including the total time and count).

  • Gauge: A metric that represents a current value, such as a collection size.

The following code listing shows the counter integration for a Spring Boot REST endpoint. It is a Java snippet for Spring Boot with Micrometer and Prometheus support and two counters:

@RestController
@RequestMapping("/data/v1")
public class DataRest {

    // Metric counter to collect the number of Echo calls
    private Counter reqEchoCounter;

    // Metric counter to collect the number of Timestamp calls
    private Counter reqTimestampCounter;

    public DataRest(final MeterRegistry registry) {

        // Register the counters with the same metric name and different tags
        reqEchoCounter = registry.counter("data_rest", "usecase", "echo");
        reqTimestampCounter = registry.counter("data_rest", "usecase", "timestamp");
    }

    @ApiOperation(value = "Delivers the given string back; like an Echo service.", response = String.class)
    @GetMapping("/echo/{val}")
    public String simpleEcho(@PathVariable(value = "val") String val) {

        reqEchoCounter.increment();
        return String.format("Data: {%s}", val);
    }

    @ApiOperation(value = "Delivers the given string with the current timestamp (long) back; like an Echo service.", response = String.class)
    @GetMapping("/timestamp/{val}")
    public String simpleEchoWithTimestamp(@PathVariable(value = "val") String val) {

        reqTimestampCounter.increment();
        return String.format("Data: %d - {%s}", System.currentTimeMillis(), val);
    }
}

The following code listing shows the result from the Prometheus endpoint with the two new counters:

# HELP data_rest_total
# TYPE data_rest_total counter
data_rest_total{usecase="echo",} 10.0
data_rest_total{usecase="timestamp",} 0.0
...
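
The Timer and Gauge types follow the same registration pattern as the counters. The following listing is a minimal sketch, not part of the example application: the ProcessingService class and its metric names are hypothetical, while registry.timer, registry.gauge, and Timer.record are standard Micrometer API calls.

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.springframework.stereotype.Service;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

// Hypothetical service that illustrates Timer and Gauge registration
@Service
public class ProcessingService {

    private final Timer processingTimer;
    private final Queue<String> workQueue = new ConcurrentLinkedQueue<>();

    public ProcessingService(final MeterRegistry registry) {

        // Timer: records how long each processing call takes and how often it occurs
        processingTimer = registry.timer("data_processing", "usecase", "batch");

        // Gauge: reports the current size of the work queue on every scrape
        registry.gauge("work_queue_size", workQueue, Queue::size);
    }

    public void process(final String item) {

        workQueue.add(item);

        // record() measures the duration of the supplied lambda
        processingTimer.record(() -> {
            workQueue.poll();
            // ... actual processing work would happen here ...
        });
    }
}

In the Prometheus output, the timer appears as data_processing_seconds_count, data_processing_seconds_sum, and data_processing_seconds_max, and the gauge appears as work_queue_size.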

Verify the collected data

To verify the collected data, use the Grafana dashboard or work directly with the Prometheus user interface:

  • Grafana Dashboard: https://<your cloud installation>:8443/grafana/
  • Prometheus: https://<your cloud installation>:8443/prometheus/

In your cloud installation, a preconfigured Grafana dashboard displays an overview of every namespace in the cluster, as shown in the following screen capture:

[Screen capture of the Grafana dashboard]

In the Prometheus Graph view (as shown in the following screen capture), the new metric data_rest (exposed as data_rest_total) is added automatically.

[Screen capture of the Prometheus Graph view]

The visualization of the metric (as shown in the following screen capture) helps you better understand the progress and current state of the metric:

[Screen capture of the metric visualization]

The log files are also collected in Kibana by default, as shown in the following screen capture:

[Screen capture of log files in Kibana]

Integrate with IBM Cloud

IBM Cloud includes a variety of services and integrations, such as the logging and monitoring integrations that use LogDNA and Sysdig. This tutorial takes a short excursion into IBM Cloud Monitoring, which includes capabilities to monitor any kind of workload in Kubernetes clusters and on external Linux machines, and to define alerts and dashboards for them.

A Sysdig agent collects metrics from each Kubernetes node and sends them to the IBM Cloud Monitoring with Sysdig instance. With the monitoring data centralized, you can better visualize and investigate it.

  1. Install the Sysdig agent.

    You need to install the Sysdig agent in your cluster; it runs on every cluster node. You configure the Sysdig access key after you create a Sysdig instance in the IBM Cloud dashboard. Collect the information you need:

    • Click Observability.
    • Under Monitoring, click the IBM Cloud Monitoring instance.
    • Right-click and select Display key. Note the value to use when you install the Sysdig agent.

      Install the agent in the Kubernetes cluster by running the following commands:

      $ export SYSDIG_ACCESS_KEY=c23c1ee6-....
      $ export COLLECTOR_ENDPOINT=ingest.eu-de.monitoring.cloud.ibm.com
      $ export TAG_DATA=region:eu-de,env:test
      
      $ curl -sL https://raw.githubusercontent.com/draios/sysdig-cloud-scripts/master/agent_deploy/IBMCloud-Kubernetes-Service/install-agent-k8s.sh | bash -s -- -a $SYSDIG_ACCESS_KEY -c $COLLECTOR_ENDPOINT -t $TAG_DATA -ac 'sysdig_capture_enabled: false'
      
      Detecting operating system
      Downloading Sysdig cluster role yaml
      Downloading Sysdig config map yaml
      Downloading Sysdig daemonset v2 yaml
      Creating namespace: ibm-observe
      Creating sysdig-agent serviceaccount in namespace: ibm-observe
      Creating sysdig-agent clusterrole and binding clusterrole.rbac.authorization.k8s.io/sysdig-agent created
      Creating sysdig-agent secret using the ACCESS_KEY provided
      Retrieving the IKS Cluster ID and Cluster Name
      Setting cluster name as mycluster
      Setting ibm.containers-kubernetes.cluster.id bkqb1e4f0nrponecah6g
      Updating agent configmap and applying to cluster
      Setting tags
      Setting collector endpoint
      Adding additional configuration to dragent.yaml
      Enabling Prometheus
      configmap/sysdig-agent created
      Deploying the sysdig agent
      daemonset.extensions/sysdig-agent created
      
      $ kubectl get pods -n ibm-observe
      NAME                 READY   STATUS    RESTARTS   AGE
      sysdig-agent-k6f44   1/1     Running   0          98s
      
  2. Verify the collected data with Sysdig.

    Metrics are displayed in the Sysdig dashboard at cloud.ibm.com/observe/monitoring:

    Screen capture of IBM Cloud Monitoring dashboard

    The Prometheus metrics are incorporated in the dashboard, as shown in the following screen capture:

    Screen capture of Prometheus metrics in the IBM Cloud Monitoring dashboard

    Also, you see the application metrics that you created, as shown in the following screen capture:

    Screen capture of customized app metrics IBM Cloud Monitoring dashboard

  3. Troubleshoot any issues.

    If the Prometheus metrics are not visible, verify that the Prometheus integration is enabled in the Sysdig dashboard. Also check that the annotations are set correctly.

    See the following excerpt of a Kubernetes Deployment resource with Prometheus annotations:

     spec:
       selector:
         matchLabels:
           app: service-playground
       replicas: 1
       template:
         metadata:
           annotations:
             prometheus.io/scrape: 'true'
             prometheus.io/path: '/actuator/prometheus'
             prometheus.io/port: '8080'
    

    For more information, see the IBM Cloud Monitoring documentation.

Summary

This tutorial described how to configure an application to provide metrics that Prometheus automatically collects. With minimal effort, you gain more transparency and insight into your own application; these adjustments should be an integral part of your applications. In the Spring Boot ecosystem, Prometheus is only one of several supported monitoring systems. If you use IBM Cloud, you can integrate with Prometheus, Alertmanager, Grafana, and Kibana by default and take the first steps toward a better understanding of your application. Integrating with Sysdig further simplifies these insights and is a key factor in centralized application monitoring.

An example of a Spring Boot application that demonstrates this configuration is available in GitHub at Hafid-Haddouti/service-playground.