
Archived | Troubleshoot microservices deployments with MicroProfile distributed tracing and Istio

Archived content

Archive date: 2021-02-25

This content is no longer being updated or maintained. The content is provided “as is.” Given the rapid evolution of technology, some content, steps, or illustrations may have changed.


In this code pattern, we’ll look at how Open Liberty, MicroProfile, and OpenTracing work alongside Istio to create an end-to-end view of requests flowing through a simulated manufacturing facility, based on the original instrument craft shop. You’ll come away with an understanding of distributed tracing and a way to capture, visualize, and tell the story of what happens to an individual request.


The shift toward distributed, container-based microservice architectures brings a number of benefits, but also drawbacks. While Kubernetes makes it easier than ever to split up monoliths into multiple, smaller services that use multiple network requests to complete every transaction, engineers and operations teams face new challenges in observability, problem determination, and root cause analysis.

Istio, a joint effort between IBM, Google, and Lyft, creates a service mesh that can be integrated into a container orchestration platform like Kubernetes. While the set of technologies provided by Istio promises to enhance system observability, developers should be aware of new requirements to take advantage of these capabilities.
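On Open Liberty, for example, JAX-RS requests are traced automatically once the MicroProfile OpenTracing feature is enabled in `server.xml`. A minimal sketch of that configuration (the feature versions shown are illustrative and should match your MicroProfile release):

```xml
<server description="maker-bot">
  <featureManager>
    <!-- Enables automatic span creation for inbound and outbound JAX-RS calls -->
    <feature>mpOpenTracing-1.3</feature>
    <feature>jaxrs-2.1</feature>
  </featureManager>
</server>
```

With this in place, each service in the mesh contributes spans to the same distributed trace, provided the trace context headers are propagated on outgoing requests.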


When deployed in a Kubernetes/Istio cluster by using the provided scripts, the sample application consists of six microservices, each of which can fail in various ways to demonstrate problem determination with distributed tracing. Requests into the ingress gateway move through the application in the following sequence.

  1. The Istio ingress gateway forwards the request to the service registered under the instrument-craft-shop name.
  2. The instrument-craft-shop service calls to the maker-bot service, which kicks off the “processing pipeline.”
  3. The “processing pipeline” consists of four steps, where each step runs in a separate pod.
  4. The maker-bot service waits for the entire pipeline to complete.
  5. If the pipeline completes, the final step in the sequence is a call from the maker-bot to the dbwrapper service. (A real service might persist the object to a database; in our case, it sleeps for a short period before returning a response.)
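The call sequence above can be sketched in plain Java. This is a simulation only, with hypothetical names and no HTTP involved: each pipeline step stands in for a separate pod, the maker-bot waits for the whole pipeline, and the dbwrapper sleeps instead of writing to a database:

```java
import java.util.List;
import java.util.function.Supplier;

public class PipelineSketch {
    // Each pipeline step would run in its own pod; here each is a Supplier<Boolean>
    // that reports whether the step succeeded.
    static boolean runPipeline(List<Supplier<Boolean>> steps) {
        for (Supplier<Boolean> step : steps) {
            if (!step.get()) {
                return false; // a failing step aborts the pipeline
            }
        }
        return true; // maker-bot sees the entire pipeline complete
    }

    // Stands in for the dbwrapper service: sleeps briefly instead of persisting.
    static String dbwrapper() throws InterruptedException {
        Thread.sleep(50);
        return "persisted";
    }

    public static void main(String[] args) throws InterruptedException {
        // The four pipeline steps, all succeeding in this run.
        List<Supplier<Boolean>> steps =
            List.<Supplier<Boolean>>of(() -> true, () -> true, () -> true, () -> true);
        if (runPipeline(steps)) {            // maker-bot waits for the pipeline
            System.out.println(dbwrapper()); // then makes the final dbwrapper call
        }
    }
}
```

In the real application, each of these hops is a network request that carries trace context, which is exactly what makes the end-to-end trace possible.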



Ready to put this pattern to use? Complete details on how to get started running and using this application are in the README.