Introduction
The built-in Grafana dashboards provided by Red Hat OpenShift Container Platform clusters use a fixed template with many metric details. With a customized dashboard, a system administrator can focus on only the required monitoring parameters. However, this is not easy to accomplish because of the constraints on writing custom queries. This step-by-step tutorial explains how to deploy the community edition of the Grafana Operator and use the existing Prometheus as a Grafana data source to create customizable Grafana dashboards.
A bit of background
OpenShift Container Platform includes a Prometheus-based monitoring stack by default. However, this built-in monitoring capability provides read-only cluster monitoring and does not allow monitoring of any additional targets. The built-in monitoring feature monitors cluster components such as pods, namespaces, and nodes, and provides a set of Grafana dashboards that are not customizable.
Prerequisites
- Red Hat OpenShift on IBM Cloud is a managed service that simplifies deployment and configuration of the OpenShift Container Platform.
- Red Hat CodeReady Containers provides a preconfigured OpenShift 4.1 (or newer) cluster. It offers a minimal OpenShift cluster environment for test and development purposes on a local computer.
Deployment
Install Grafana Operator community edition
- Log in to the OpenShift Container Platform cluster console with the cluster-admin role.
- Create a new project named grafana:
$ oc adm new-project grafana
Created project grafana
- On the web console, click Operators, and then click OperatorHub.
- Search for Grafana Operator and install the community edition of the Grafana Operator.
- On Create Operator Subscription, under Installation Mode, click A specific namespace on the cluster; under Update Channel, click alpha; and under Approval Strategy, click Automatic. Then click Subscribe.
- Check the pod status to confirm the installation is complete:
$ oc get pods -n grafana -o name
pod/grafana-operator-655f76684-7jsz5
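For reference, the console clicks above can also be expressed as manifests and applied with `oc apply -f`. This is only a sketch: the OperatorGroup corresponds to the "A specific namespace on the cluster" option, and the resource names and the `community-operators` catalog source are assumptions to verify against what OperatorHub generates on your cluster.

```yaml
# Assumed CLI equivalent of the OperatorHub subscription created above.
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: grafana-operator-group         # assumed name
  namespace: grafana
spec:
  targetNamespaces:
    - grafana                          # "A specific namespace on the cluster"
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: grafana
spec:
  channel: alpha                       # Update Channel
  installPlanApproval: Automatic       # Approval Strategy
  name: grafana-operator
  source: community-operators          # assumed catalog source name; verify on your cluster
  sourceNamespace: openshift-marketplace
```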
Create a Prometheus user
Before creating Grafana and Grafana data source instances, you need to create a special user in the existing Prometheus in the openshift-monitoring project.
- Navigate to the openshift-monitoring project:
$ oc project openshift-monitoring
Now using project "openshift-monitoring" on server
- Load the prometheus-k8s-htpasswd data into a temporary file:
$ oc get secret prometheus-k8s-htpasswd -o jsonpath='{.data.auth}' | base64 -d > /tmp/htpasswd-tmp
- Add a special user to the existing Prometheus htpasswd data:
$ htpasswd -s -b /tmp/htpasswd-tmp grafana-user mysupersecretpasswd
Adding password for user grafana-user
- Check the content of /tmp/htpasswd-tmp for grafana-user:
$ tail -1 /tmp/htpasswd-tmp
grafana-user:{SHA}xxxxxxSwuJxNmjPI6vdZEyyyyy=
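For context, the `{SHA}` entry shown above is simply the SHA-1 digest of the password, base64-encoded. The following is a minimal sketch of how that value is derived, using the tutorial's example values and assuming `openssl` and `base64` are available; in practice, keep using `htpasswd` as shown above.

```shell
# Reconstruct the htpasswd -s ({SHA}) entry for the example user.
# grafana-user / mysupersecretpasswd are the tutorial's example values.
USER=grafana-user
PASS=mysupersecretpasswd

# SHA-1 digest of the password, base64-encoded -- the part after {SHA}.
HASH=$(printf '%s' "$PASS" | openssl dgst -sha1 -binary | base64)

ENTRY="${USER}:{SHA}${HASH}"
echo "$ENTRY"
```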
- Replace the prometheus-k8s-htpasswd secret data with the contents of /tmp/htpasswd-tmp:
$ oc patch secret prometheus-k8s-htpasswd -p "{\"data\":{\"auth\":\"$(base64 -w0 /tmp/htpasswd-tmp)\"}}"
secret/prometheus-k8s-htpasswd patched
- Delete the Prometheus pods so they restart with the new data:
$ oc delete pods -l app=prometheus
pod "prometheus-k8s-0" deleted
pod "prometheus-k8s-1" deleted
$ oc get pods -l app=prometheus -o name
pod/prometheus-k8s-0
pod/prometheus-k8s-1
Create Grafana instances
In the grafana namespace, click Installed Operators > Grafana Operator, and then click Create Instance on the Grafana card.
On the Create Grafana page, edit metadata.name, spec.config.security.admin_user, and spec.config.security.admin_password in the YAML as required, and then click Create.
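A minimal Grafana custom resource sketch follows. The `integreatly.org/v1alpha1` API group matches the community operator at the time of writing, but verify it against the CRD installed on your cluster; the name, credentials, and dashboard label selector below are placeholder assumptions.

```yaml
apiVersion: integreatly.org/v1alpha1
kind: Grafana
metadata:
  name: example-grafana                  # edit as required
  namespace: grafana
spec:
  config:
    security:
      admin_user: admin                  # edit as required
      admin_password: mysecretpassword   # edit as required
    auth:
      disable_signout_menu: true
  ingress:
    enabled: true                        # exposes the Grafana route used later
  dashboardLabelSelector:                # assumed selector for dashboard instances
    - matchExpressions:
        - key: app
          operator: In
          values:
            - grafana
```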
Make sure the Grafana pods are created and running:
$ oc get pods -n grafana -o name
pod/grafana-deployment-689d864797-n4lpl
pod/grafana-operator-655f76684-7jsz5
Create Grafana data source instances
- Click Installed Operators > Grafana Operator > Create Instance on the Grafana Data Source card.
- In the Create GrafanaDataSource YAML, modify metadata.name, spec.name, basicAuthUser, and basicAuthPassword.
Note: Make sure spec.datasources.basicAuth and spec.datasources.jsonData.tlsSkipVerify are set to true. Also, verify the preconfigured Prometheus URL and set the correct one under spec.datasources.url.
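Putting those notes together, a GrafanaDataSource sketch might look like the following. The in-cluster Prometheus URL and port are assumptions to verify against the preconfigured value on your cluster; the credentials are the example user created earlier.

```yaml
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: prometheus-datasource            # edit metadata.name as required
  namespace: grafana
spec:
  name: prometheus-datasource.yaml       # edit spec.name as required
  datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      url: https://prometheus-k8s.openshift-monitoring.svc:9091   # assumed; verify this URL
      basicAuth: true                    # required for the htpasswd user created earlier
      basicAuthUser: grafana-user
      basicAuthPassword: mysupersecretpasswd
      jsonData:
        tlsSkipVerify: true              # required; the in-cluster certificate is not trusted by default
      isDefault: true
      editable: true
      version: 1
```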
The Operator automatically replaces the grafana-deployment-xxx-xxx pods to reflect the new configuration.
Now, you are ready to access the Grafana route:
$ oc get route
NAME            HOST/PORT
grafana-route   grafana-route-grafana.xxxx.appdomain.cloud
Import dashboards
Now, export an existing dashboard from the built-in Grafana and import it into the new Grafana instance created by the operator, to verify that the Prometheus data source is integrated.
On the openshift-monitoring stack:
- Log in to the openshift-monitoring stack Grafana.
- Select any of the dashboards (for example, Kubernetes / Compute Resources / Workload).
- Click the Share dashboard icon, open the Export tab, and copy the JSON.
Now open the newly created Grafana instance route:
- Go to Dashboard > Manage > Import, paste the JSON, and click Load.
- Modify the Name as required and click Import to import the dashboard.
- Review the dashboard.
- Once you import the dashboard into the Grafana UI, create a Grafana dashboard instance to preserve the dashboards when the Grafana pods are restarted.
Create Grafana dashboard instances
- In the grafana namespace, create an instance of the Grafana dashboard (Installed Operators > Grafana Operator > Grafana Dashboard).
- Copy the JSON of the dashboard you imported into Grafana (as described under Import dashboards).
- Paste the copied .json file under spec.json in the Create Grafana Dashboard YAML space.
- Modify the metadata.name as required and click Save.
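As a sketch, the resulting GrafanaDashboard resource has the following shape. The label shown is an assumption that must match the dashboardLabelSelector of your Grafana instance, if one is set, and the JSON body is a trivial placeholder for the exported dashboard JSON.

```yaml
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDashboard
metadata:
  name: my-custom-dashboard    # edit metadata.name as required
  namespace: grafana
  labels:
    app: grafana               # assumed; must match the Grafana dashboardLabelSelector
spec:
  json: |
    {
      "title": "My custom dashboard",
      "panels": []
    }
```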
With Grafana dashboard instances, you can create custom dashboards as required for individual pods, namespaces, nodes, and more.
Summary
These custom Grafana dashboards with a preconfigured data source save computing resources on the cluster and let you create your own dashboard views, giving infrastructure administrators a high-level picture of compute resource usage for each application and making monitoring easier.