
Istio 1.3 is here. Find out what changes we made to make the service mesh platform easier to use.


Companies with large, monolithic applications are increasingly breaking these unwieldy apps into smaller, containerized microservices. Microservices are popular because they offer agility, speed, and flexibility, but they can be complex, which can be a hurdle for adoption. And having multiple microservices, rather than a single monolith, can increase the attack surface of the app.

Istio gives control back to app developers and mesh operators. Specifically, Istio is an open source service mesh platform that ensures that microservices are connecting to each other in a prescribed way while handling failures. With Istio, it’s easier to observe what is happening across an entire network of microservices, secure communication between services, and ensure that policies are enforced.

A new release of Istio, version 1.3, makes using the service mesh platform even easier.

Istio 1.3 improves usability

Because Istio has so many features, it has been more complex than other open source service mesh projects I’ve tried. If we were going to accomplish our goal of making Istio the preferred service mesh implementation, we had to make it easier for developers to use.

Specifically, we had to simplify the process for developers to move microservices to the Istio service mesh, regardless of whether they wanted to leverage security, traffic management, or telemetry first. We created a User Experience Work Group that engaged with the community to improve Istio’s user experience. Through collaboration across many work groups and with the Envoy community, I’m excited to see these changes in Istio 1.3:

  • All inbound traffic is captured by default. You no longer need to declare containerPort in your Kubernetes deployment YAML to tell Istio which inbound ports your Envoy sidecar should capture.
  • A single add-to-mesh command in the CLI adds an existing service to the Istio mesh, regardless of whether the service runs in Kubernetes or on a virtual machine.
  • A new describe command lets developers check whether a pod and its associated services meet Istio’s requirements, and shows any Istio configuration associated with them.
  • Automatic protocol detection is implemented and enabled by default for outbound traffic, but disabled for inbound traffic to allow us to stabilize this feature. For v1.3, you still need to modify your Kubernetes service YAML to name the service port with the protocol (or a protocol prefix), as in the sketch after this list, but I expect this requirement to be eliminated in a future release.
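
For example, until inbound protocol detection is enabled by default, a Kubernetes service port carrying HTTP traffic is named for its protocol, along these lines (a minimal sketch; the service name, selector, and port numbers are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
    name: http        # or a prefixed name such as http-web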

Refer to Istio 1.3’s release blog and release notes for more details about the release.

Istio 1.3 in action

A little over a year ago, I tried to move the popular Kubernetes guestbook example into the Istio mesh. It took a few days because I didn’t follow the documentation closely and discovered the proper documentation only after I finished. Injecting a sidecar didn’t cause me any problems; I was tripped up by the list of requirements for pods and services.

Using this same example, let’s see how Istio 1.3 simplifies the process.

I had already deployed the upstream guestbook sample in my Kubernetes cluster in IBM Cloud (1.14.6) by using the guestbook-all-in-one.yaml file. I uncommented line 108 of that file to use type: LoadBalancer for the front-end service.
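
For reference, deploying the sample comes down to a single command (a sketch, assuming the YAML file has been downloaded locally):

$ kubectl apply -f guestbook-all-in-one.yaml

Once applied, all of the pods were running: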

$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
frontend-69859f6796-4nj7p      1/1     Running   0          49s
frontend-69859f6796-772sw      1/1     Running   0          49s
frontend-69859f6796-n67w7      1/1     Running   0          49s
redis-master-596696dd4-ckcj4   1/1     Running   0          49s
redis-slave-96685cfdb-8cfm2    1/1     Running   0          49s
redis-slave-96685cfdb-hwpxq    1/1     Running   0          49s

I used the quick start guide to install Istio 1.3. The community is making progress on reducing the number of custom resource definitions (CRDs), and we are down to 23. We will continue to reduce the number of CRDs based on which Istio features users install.
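
You can verify the CRD count after installation with standard tools; on a default 1.3 install, this should print 23:

$ kubectl get crds | grep 'istio.io' | wc -l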

Once installed, I used the add-to-mesh command to add the frontend service to the Istio mesh:

$ istioctl x add-to-mesh service frontend
deployment frontend.default updated successfully with Istio sidecar injected.
Next Step: Add related labels to the deployment to align with Istio's requirement: https://istio.io/docs/setup/kubernetes/additional-setup/requirements/

As a result, my frontend pods had the sidecar injected and running:

$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
frontend-69577cb555-9th62      2/2     Running   0          25m
frontend-69577cb555-pctrg      2/2     Running   0          25m
frontend-69577cb555-rvjk4      2/2     Running   0          25m
redis-master-596696dd4-dzf29   1/1     Running   0          26m
redis-slave-96685cfdb-2h789    1/1     Running   0          26m
redis-slave-96685cfdb-7sp6p    1/1     Running   0          26m

I described one of the frontend pods to see if the pod and associated frontend service met the requirements for Istio.

$ istioctl x describe pod frontend-69577cb555-9th62
Pod: frontend-69577cb555-9th62
   Pod Ports: 80 (php-redis), 15090 (istio-proxy)
Suggestion: add 'version' label to pod for Istio telemetry.
--------------------
Service: frontend
   Port:  80/UnsupportedProtocol
   80 is named "" which does not follow Istio conventions
Pilot reports that pod is PERMISSIVE (enforces HTTP/mTLS) and clients speak HTTP

The output clearly told me what was missing! I needed to name the port for its protocol, which is HTTP in this case. The version label for telemetry was optional for me, as the frontend service only has one version.
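
If you do want the telemetry suggestion applied, adding the label to the pod template is a one-line patch (a sketch; the version value v1 is a placeholder):

$ kubectl patch deployment frontend -p '{"spec":{"template":{"metadata":{"labels":{"version":"v1"}}}}}'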

I edited the frontend service using kubectl.

$ kubectl edit service frontend

I added a line to name the service port, using name: http, and saved the change:

…
  ports:
  - nodePort: 30167
    port: 80
    protocol: TCP
    targetPort: 80
    name: http
…
service/frontend edited

I repeated the same add-to-mesh step to add redis-master and redis-slave to the mesh:
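
$ istioctl x add-to-mesh service redis-master
$ istioctl x add-to-mesh service redis-slave

For redis, I just used the default tcp protocol, so there was no need to name the ports. With that, every pod in the app carried a sidecar: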

$ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
frontend-b8595c9f-gp7vx        2/2     Running   0          51m
frontend-b8595c9f-pr5nb        2/2     Running   0          51m
frontend-b8595c9f-q7fx9        2/2     Running   0          50m
redis-master-5589dc575-bqmtb   2/2     Running   0          51m
redis-slave-546f8d974c-gq4sn   2/2     Running   0          50m
redis-slave-546f8d974c-qjjwl   2/2     Running   0          50m

With that, the frontend, redis-master, and redis-slave services were all in the mesh! I visited the guestbook app using the load balancer IP:

$ export GUESTBOOK_IP=$(kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl $GUESTBOOK_IP

I used the istioctl dashboard grafana command to launch Grafana, which served up the guestbook service metrics.

[Image: Grafana dashboard showing the guestbook service metrics]

From there, I used istioctl dashboard jaeger to launch Jaeger, which gave me the guestbook traces for each request. Note that they are individual trace spans, not correlated ones. I expected this, because the app would need to propagate the trace headers before multiple spans could be tied to each individual request.

[Image: Jaeger dashboard showing individual guestbook trace spans]
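
To correlate spans end-to-end, the application itself must forward the trace context headers from each incoming request to its outgoing requests. With Istio’s default Zipkin-style tracing, these are the B3 headers listed in Istio’s distributed tracing documentation:

  x-request-id
  x-b3-traceid
  x-b3-spanid
  x-b3-parentspanid
  x-b3-sampled
  x-b3-flags
  x-ot-span-context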

After that, I used istioctl dashboard kiali to launch Kiali, which allowed me to visualize the guestbook app.

[Image: Kiali dashboard visualizing the guestbook app]

I was able to observe the microservices at the app layer with Grafana, Jaeger, or Kiali, without needing to modify the service code or rebuild the guestbook container images! Depending on your business needs, you can secure your microservices, build resilience into them, or shift traffic between versions as you develop newer ones.

I believe the community is committed to making Istio easier to use as developers move their microservices into or out of the mesh. Live editing of services will no longer be necessary once the intelligent protocol detection feature is enabled by default for inbound traffic, which should happen soon.

Istio’s open source strength

Istio’s open architecture and ecosystem combine to make the technology effective. There is no vendor lock-in limiting the types of services you can use with it. Plus, Istio can run in any cloud model — public, private, on-premises, or a hybrid cloud model.

Istio was founded by IBM, Google, and Lyft in 2017, and a thriving community has grown around it to include other partners, such as Cisco, Red Hat, Pivotal, Tigera, Tetrate, and more. With over 400 contributors from over 300 companies at the time of this writing, the Istio project benefits from an active, diverse community and skill set.

Istio and IBM

IBM has been involved with Istio since before it was released to the public, donating our Amalgam8 project into Istio. As the second-largest contributor to the open project, IBM has members on the Istio Steering Committee and the Technical Oversight Committee, and IBMers co-lead the Environment, Build/Test Release, Performance and Scalability, User Experience, and Docs work groups.

Why all the interest in Istio? Development teams looking to scale quickly need tools that free them up to innovate and that simplify how they build and deploy apps across environments, and that’s why IBM continues to invest in Istio.

Multiple cloud providers offer a managed Istio experience to simplify the install and maintenance of the Istio control plane. For example, IBM Cloud has a managed Istio offering that enables you to install Istio with a single action.

We know that our clients need to modernize their apps and app infrastructures, and we believe that Istio is a critical technology to help them do this safely, securely, and — with the new changes in the 1.3 release — easily.

Get involved with Istio

Because Istio is open source, we rely on an active community of developers to help improve, secure, and scale the tech. Here are a few ways for you to get involved:

Lin Sun is a Senior Technical Staff Member and Master Inventor at IBM. She is a maintainer on the Istio project and also serves on the Istio Steering Committee and Technical Oversight Committee.
