Manage microservices traffic using Istio  

Enable your microservices with advanced traffic management and request tracing capabilities using Istio

Last updated | By Anthony Amanse, Animesh Singh


Developers are moving away from large monolithic apps in favor of small, focused microservices that speed up implementation and improve resiliency. To meet the requirements of this new ecosystem, developers need to create a network of deployed microservices with load balancing, advanced traffic management, request tracing, and connectivity capabilities.


If you’ve spent any time developing apps recently, you know one thing: monolithic applications are becoming a thing of the past. Apps today are all about service discovery, registration, routing, and connectivity, and they present a whole new set of challenges to developers and operators of microservice architectures.

If your service mesh is growing in size and complexity, you already know how challenging it can be to understand and manage. We came up against the same questions: How do we enable this growing number of microservices to connect, load balance, and provide role-based routing? How do we enable outgoing traffic on these microservices, and test canary deployments? It’s not enough to create a self-contained application anymore, so how do we manage the complexity of a microservice universe?

Istio, a joint collaboration between IBM, Google and Lyft, is designed to help you meet these challenges. Istio is an open technology that provides a way for developers to seamlessly connect, manage and secure networks of different microservices, regardless of platform, source or vendor. In this developer journey, you’ll learn how Istio provides sophisticated traffic management controls, both for intercommunication between microservices and for incoming and outgoing traffic, through a container-based sidecar architecture. You’ll also learn how you can monitor and collect request traces to get better insights into your application traffic flow. This developer journey is ideal for anyone working with the new breed of microservice-oriented apps.


  1. The user deploys their configured app on Kubernetes. The application, “BookInfo,” is composed of four microservices, each written in a different language: Python, Java, Ruby, and Node.js. The Reviews microservice, written in Java, has three different versions.
  2. To enable the application to use Istio features, the user injects Istio envoys. Envoys are deployed as sidecars on each microservice. Injecting an Envoy into the microservice means that the Envoy sidecar manages the incoming and outgoing calls for the service. The user then accesses the application running on Istio.
  3. With the application now deployed, the user configures advanced Istio features for the sample application. To enable traffic flow management, the user modifies the service routes of the application based on weights and HTTP headers. At this stage, versions 1 and 3 of the Reviews microservice each get 50% of the traffic; version 2 is enabled only for a specific user.
  4. The user configures access control for services. To deny the Reviews v3 microservice access to the Ratings microservice, the user creates a Mixer rule.
  5. After completing deployment and configuration of the application, the user enables telemetry and log collection. To collect metrics and logs, the user configures the Istio Mixer and installs the required Istio add-ons, Prometheus and Grafana. To collect trace spans, the user installs and configures the Zipkin add-on.
  6. The user creates an external data source for Bookinfo; for example, the Compose for MySQL database in IBM Cloud.
  7. Three microservices in the original sample BookInfo application — Details, Ratings, and Reviews — are modified to use the MySQL database. To connect to the MySQL database, a MySQL Ruby gem is added in the Details microservice; a MySQL module is added in the Ratings Node microservice. A mysql-connector-java dependency is added to versions 1, 2, and 3 of the Reviews microservice.
  8. The user deploys the application and enables the Envoy proxies with egress traffic. Envoy proxies are deployed as sidecars alongside each microservice, which means that each Envoy sidecar manages the incoming and outgoing calls for its service. In this case, because Envoy supports only the HTTP/HTTPS protocols, the proxies are configured not to intercept traffic for outgoing MySQL connections by excluding the IP range of the MySQL deployment. When the application is up and running, the user can access it using the node’s IP address and NodePort.
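To make the traffic-splitting step concrete, the routing described in step 3 can be sketched as Istio route rules. This is a minimal sketch using the `config.istio.io/v1alpha2` `RouteRule` resource from the Istio releases current at the time of this journey; the rule names and the `user=jason` cookie value are illustrative, following the BookInfo sample's conventions:

```yaml
# Split traffic for the "reviews" service 50/50 between versions v1 and v3.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-v1-v3
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v1
    weight: 50
  - labels:
      version: v3
    weight: 50
---
# Route a specific logged-in user to v2 by matching on an HTTP header.
# Higher precedence means this rule is evaluated before the weighted rule.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-test-v2
spec:
  destination:
    name: reviews
  precedence: 2
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=jason)(;.*)?$"
  route:
  - labels:
      version: v2
```

Rules like these would be applied with `istioctl create -f <file>`; the Envoy sidecars then enforce the split without any change to the application code.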
