As cloud deployments become more complex, tooling to manage application performance and infrastructure resources becomes more critical. While application performance monitoring (APM) tools like IBM Observability with Instana (Instana) are a well-known category for observing and troubleshooting microservice deployments, container platforms and other cloud deployments have created a need for more sophisticated tooling to manage infrastructure. Turbonomic created an application resource management (ARM) tool that helps operators understand and manage cloud infrastructure using policies centered around service level objectives (SLOs), reducing operating expenses and improving cloud performance.
In this tutorial, we show how to deploy the Turbonomic Platform Operator in OpenShift, connect it to your Instana deployment, and demonstrate how the two services work in concert to help manage your OpenShift deployments.
Before starting this tutorial, you must first complete the Instana code pattern, which walks you through the following steps:
- Integrate Instana
- Deploy to OpenShift
- Generate Traffic and Analyze with Instana
Note: Turbonomic requires a history of application traffic to make performance or efficiency decisions. Before starting the Turbonomic platform setup, start the Puppeteer load generator as described in the Instana code pattern.
You will also need:
- An IBM Cloud account.
- A Red Hat OpenShift cluster on IBM Cloud.
- A license for the Turbonomic Platform. You can obtain Turbonomic through Red Hat Marketplace.
Completing this tutorial should take about 1 hour.
- Set up Turbonomic in your OpenShift cluster
- Manage the performance of your OpenShift deployments with Turbonomic
Set up Turbonomic in your OpenShift cluster
We’ll start by installing the Turbonomic platform operator in your OpenShift cluster. After installing the operator, you will need a license key to activate the instance and begin using it.
Create the turbonomic project if it does not already exist, switch to it, and run the following commands to create the roles and permissions that the platform operator needs.
$ oc project turbonomic
$ oc create -f https://raw.githubusercontent.com/turbonomic/t8c-install/master/operator/deploy/cluster_role.yaml
clusterrole.rbac.authorization.k8s.io/t8c-operator created
$ oc create -f https://raw.githubusercontent.com/turbonomic/t8c-install/master/operator/deploy/cluster_role_binding.yaml
clusterrolebinding.rbac.authorization.k8s.io/t8c-operator created
$ oc adm policy add-scc-to-group anyuid system:serviceaccounts:turbonomic
securitycontextconstraints.security.openshift.io/anyuid added to groups: ["system:serviceaccounts:turbonomic"]
In the Red Hat OpenShift Container Platform OperatorHub, install the Turbonomic Platform Operator.
Find and install the Turbonomic Platform operator. Change the Installed Namespace to turbonomic (the project you created above) and click Subscribe.
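If you prefer the CLI to the OperatorHub form, an OLM Subscription along the lines of the following can install the operator. This is a sketch only: the channel, package name, and catalog source shown here are assumptions, so verify them against the values OperatorHub displays for your cluster before applying.

```yaml
# Hypothetical Subscription for the Turbonomic Platform Operator.
# The channel, name, and source values are assumptions -- check what
# OperatorHub shows on your cluster before applying this resource.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: t8c-certified          # assumed package name
  namespace: turbonomic
spec:
  channel: stable              # assumed channel
  name: t8c-certified          # assumed package name
  source: certified-operators  # assumed catalog source
  sourceNamespace: openshift-marketplace
```

Note that a namespace-scoped install also requires an OperatorGroup in the turbonomic namespace; the OperatorHub form creates one for you.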
The Turbonomic Platform operator is displayed in the Installed Operators.
Select the Turbonomic Platform Operator to open it, and click Create Xl to create an instance of the XL custom resource.
Click the Edit Form link. Then, enable these options:
- Kubeturbo, to enable management of the local cluster
- Instana, to enable Instana as a source of data
- Ingress, to create a route through which you can access the Turbonomic dashboard
Scroll to the bottom of the list of options, and click the Create button.
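The form fields above map onto the XL custom resource. A minimal sketch of the resulting resource might look like the following; the field names are based on the t8c-install operator examples and are assumptions here, so confirm them against the YAML view of the Create Xl form in your cluster:

```yaml
# Sketch of an XL custom resource with the three form options enabled.
# Field names follow the t8c-install operator examples; verify them in
# the YAML view of the Create Xl form before applying.
apiVersion: charts.helm.k8s.io/v1alpha1
kind: Xl
metadata:
  name: xl-release
  namespace: turbonomic
spec:
  kubeturbo:
    enabled: true    # manage the local cluster
  instana:
    enabled: true    # use Instana as a data source
  nginxingress:
    enabled: true    # assumed key for the Ingress option; confirm in the form's YAML view
```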
Connect to the Turbonomic console. In the Red Hat OpenShift Container Platform console, under Networking > Routes, click the link starting with “api-turbonomic” to open the dashboard.
Set up your admin username and password, and then upload your Turbonomic license.
Next, connect Turbonomic to Instana so that Turbonomic can collect metrics from it.
In the Instana console, on the Team Settings page, click Create API Token to create an API token for Turbonomic to use.
In the Turbonomic console, go to Settings > Target Configurations > New Target > Applications and Databases > Instana, and then enter the Instana hostname and the API token you just created.
Manage the performance of your OpenShift deployments with Turbonomic
First, let’s set up a policy:
Go to the Policies tile in Settings:
Create a new Business Application Policy for Bee Travels with the configuration shown in this screenshot:
With the policy configured, Turbonomic will analyze performance data over time to provide placement and resource sizing recommendations. Use the Puppeteer tool described in the Instana code pattern to generate traffic to the application. It can take 10 to 15 minutes of data collection and analysis before Turbonomic builds a list of recommended actions.
Now, let’s see how the application is performing.
Open your Instana dashboard. Navigate to Kubernetes > Your Cluster > bee-travels namespace. Select Pods and choose the Map view. Analyze the CPU Limits, CPU Requests, Memory Limits, and Memory Requests. For example, here’s what the CPU Limits look like for me before any improvements:
Now that enough traffic has been analyzed, open the Turbonomic dashboard and navigate to the Bee Travels application. Notice the red, yellow, and green segments of the circle in the Container Cluster section of the Business Application Tree on the left side of the screen. Click that circle to open the screen below:
You will notice that there are pending actions that Turbonomic suggests will improve the performance and efficiency of the application. Click the Workload Controller element, then click Show All and check all pending actions for the Bee Travels services. Once they are checked, click the Apply Selected button to apply the actions.
At this point, pods will be resized and restarted, so there will be a few seconds of application downtime. You can view progress in your OpenShift console.
After a few minutes, once the resized pods have restarted with the performance improvements and traffic to the application has resumed, return to the Instana dashboard to view improvements to the CPU Limits, CPU Requests, Memory Limits, and Memory Requests. For example, here’s what the CPU Limits look like for me after my improvements:
You should see that despite changes in resource utilization by the Bee Travels services, response time will either improve or remain the same.
Summary and next steps
In this tutorial, we showed how to set up the Turbonomic platform on your OpenShift cluster, connect it to Instana to monitor a running application, and set up a policy that allows Turbonomic to assess the resource allocation and placement of pods in a cluster. See the Turbonomic documentation for more information on configuration and usage.
For more details on how to use Instana and Turbonomic together, review this Instana + Turbonomic Solution Overview.