Manage microservices traffic using Istio  

Enable your microservices with advanced traffic management and request tracing capabilities using Istio

Last updated | By Anthony Amanse, Animesh Singh

Description

Developers are moving away from large monolithic apps in favor of small, focused microservices that speed up implementation and improve resiliency. To meet the requirements of this new ecosystem, developers need to create a network of deployed microservices with load balancing, advanced traffic management, request tracing, and connectivity capabilities.

Overview

If you’ve spent any time developing apps recently, you know one thing: monolithic applications are becoming a thing of the past. Apps today are all about service discovery, registration, routing, and connectivity. They present a whole new set of challenges to developers and operators of microservice architectures.

If your service mesh is growing in size and complexity, you already know how challenging it can be to understand and manage. We came up against the same questions: How do we enable this growing number of microservices to connect, load balance, and provide role-based routing? How do we enable outgoing traffic on these microservices, and test canary deployments? It’s not enough to create a self-contained application anymore, so how do we manage the complexity of a microservice universe?

Istio, a joint collaboration between IBM, Google and Lyft, is designed to help you meet these challenges. Istio is an open technology that provides a way for developers to seamlessly connect, manage and secure networks of different microservices, regardless of platform, source or vendor. In this developer journey, you’ll learn how Istio provides sophisticated traffic management controls, both for intercommunication between microservices and for incoming and outgoing traffic, through a container-based sidecar architecture. You’ll also learn how you can monitor and collect request traces to get better insights into your application traffic flow. This developer journey is ideal for anyone working with the new breed of microservice-oriented apps.

Flow

  1. The user deploys their configured app on Kubernetes. The application, “BookInfo,” is composed of four microservices, each written in a different language: Python, Java, Ruby, and Node.js. The Reviews microservice, written in Java, has three different versions.
  2. To enable the application to use Istio features, the user injects Istio envoys. Envoys are deployed as sidecars on each microservice. Injecting an Envoy into the microservice means that the Envoy sidecar manages the incoming and outgoing calls for the service. The user then accesses the application running on Istio.
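The injection step above can be sketched with `istioctl kube-inject`, which rewrites a deployment manifest to add the Envoy sidecar before it is applied (the sample file path is an assumption and depends on the Istio release layout):

```shell
# Inject the Envoy sidecar into each BookInfo deployment at deploy time.
# The manifest path below is illustrative; it varies across Istio releases.
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)

# Each pod should now report two containers: the app and its istio-proxy sidecar.
kubectl get pods
```

Because injection happens at the manifest level, the application code itself needs no changes to participate in the mesh.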
  3. With the application now deployed, the user configures advanced Istio features for the sample application. To enable traffic flow management, the user modifies the service routes of the application based on weights and HTTP headers. At this stage, versions 1 and 3 of the Reviews microservice each get 50% of the traffic; version 2 is enabled only for a specific user.
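The weighted and header-based routing described above can be expressed as route rules. The sketch below uses the pre-1.0 Istio routing schema; the rule names, user value, and cookie regex are illustrative assumptions:

```yaml
# Split reviews traffic 50/50 between v1 and v3.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-default
spec:
  destination:
    name: reviews
  precedence: 1
  route:
  - labels:
      version: v1
    weight: 50
  - labels:
      version: v3
    weight: 50
---
# Route one specific logged-in user (matched on a request header) to v2.
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-test-v2
spec:
  destination:
    name: reviews
  precedence: 2
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=jason)(;.*)?$"
  route:
  - labels:
      version: v2
```

The higher `precedence` on the header rule means it is evaluated first, so the matched user always sees v2 while everyone else is split between v1 and v3.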
  4. The user configures access control for services. To deny Reviews v3 any access to the Ratings microservice, the user creates a Mixer rule.
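A Mixer rule of this shape pairs a denier handler with a match expression over source and destination labels. This is a sketch against the pre-1.0 Mixer configuration model; the handler and instance names are illustrative:

```yaml
# Handler: respond with a deny status.
apiVersion: config.istio.io/v1alpha2
kind: denier
metadata:
  name: denyreviewsv3handler
spec:
  status:
    code: 7
    message: Not allowed
---
# Instance: a check with no payload.
apiVersion: config.istio.io/v1alpha2
kind: checknothing
metadata:
  name: denyreviewsv3request
spec:
---
# Rule: deny calls from Reviews v3 to Ratings.
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: denyreviewsv3
spec:
  match: destination.labels["app"] == "ratings" && source.labels["app"] == "reviews" && source.labels["version"] == "v3"
  actions:
  - handler: denyreviewsv3handler.denier
    instances: [ denyreviewsv3request.checknothing ]
```

Because the policy lives in Mixer rather than in application code, the deny decision applies uniformly no matter how Reviews v3 reaches Ratings.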
  5. After completing deployment and configuration of the application, the user enables telemetry and log collection. To collect metrics and logs, the user configures the Istio Mixer and installs the required Istio add-ons, Prometheus and Grafana. To collect trace spans, the user installs and configures the Zipkin add-on.
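Installing the add-ons amounts to applying the manifests shipped with the Istio release. The paths below assume a pre-1.0 release archive and may differ in later versions:

```shell
# Install the metrics and tracing add-ons bundled with the Istio release.
kubectl apply -f install/kubernetes/addons/prometheus.yaml
kubectl apply -f install/kubernetes/addons/grafana.yaml
kubectl apply -f install/kubernetes/addons/zipkin.yaml
```

Once these pods are running, Prometheus scrapes Mixer-generated metrics, Grafana visualizes them, and Zipkin assembles trace spans reported by the Envoy sidecars.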
  6. The user creates an external data source for BookInfo; for example, the Compose for MySQL database in Bluemix.
  7. Three microservices in the original BookInfo sample application (Details, Ratings, and Reviews) are modified to use the MySQL database. To connect to the MySQL database, a MySQL Ruby gem is added to the Details microservice and a MySQL module is added to the Ratings Node.js microservice. A mysql-connector-java dependency is added to versions 1, 2, and 3 of the Reviews microservice.
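For the Java-based Reviews versions, the dependency change is a small build-file addition. A sketch for a Maven build follows; the version number is an assumption, and a Gradle build would declare the same coordinates instead:

```xml
<!-- Added to each Reviews version's pom.xml so the service can reach the
     external MySQL database (version number is illustrative). -->
<dependency>
  <groupId>mysql</groupId>
  <artifactId>mysql-connector-java</artifactId>
  <version>5.1.44</version>
</dependency>
```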
  8. The user deploys the application and enables the Envoy proxies for egress traffic. Envoy proxies are deployed as sidecars alongside each microservice, which means that each Envoy sidecar manages the incoming and outgoing calls for its service. In this case, because Envoy supports only the HTTP/HTTPS protocols, the proxies are configured not to intercept traffic for outgoing MySQL connections by providing the IP range of the MySQL deployment. When the application is up and running, the user can access it using the node IP and node ports.
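Excluding the MySQL traffic from interception can be sketched with the `--includeIPRanges` option to `istioctl kube-inject`, which limits the sidecar to intercepting only traffic destined for the given CIDRs. The manifest name and CIDR below are assumptions; use your cluster's internal service range:

```shell
# Inject sidecars but intercept only in-cluster traffic, so the outgoing
# MySQL connection bypasses Envoy entirely. The CIDR is illustrative.
kubectl apply -f <(istioctl kube-inject -f bookinfo-mysql.yaml --includeIPRanges=172.30.0.0/16)
```

Any destination outside the listed ranges, such as the external MySQL endpoint, is reached directly rather than through the proxy.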
