
Using Istio for advanced microservices deployments

Kubernetes and container technologies provide a wide range of options and flexibility to deploy applications in a fast-paced environment. Just as our applications change, our deployments and our methods for controlling access to those applications must adapt as well. One tool for this is a service mesh. Put simply, a service mesh adds a malleable layer of network addressability on top of our ever-changing network of services. When we need 24/7 availability yet still want to publish changes rapidly at scale, a mesh is a valuable tool for abstracting the deployment of services from their presentation to end users.

Starting from the beginning

To explain, let’s start by building a sample application (based on the sample app in the github.com/IBM/nodejs-starter repo). This is a simple Node.js application with a static page that we’ll update and deploy in the following examples.
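
If you want to follow along, you can start by cloning the starter repo (a minimal sketch; the repository URL comes from above, and the checkout directory name is simply whatever git creates):

git clone https://github.com/IBM/nodejs-starter.git
cd nodejs-starter

With the source in place, we build the v1 image: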

docker build -t istiodemo:v1 .
Sending build context to Docker daemon  421.4kB
Step 1/10 : FROM node:8-stretch
8-stretch: Pulling from library/node
146bd6a88618: Pull complete
9935d0c62ace: Pull complete
db0efb86e806: Pull complete
e705a4c4fd31: Pull complete
c877b722db6f: Pull complete
645c20ec8214: Pull complete
db8fbd9db2fe: Pull complete
1c151cd1b3ea: Pull complete
fbd993995f40: Pull complete
Digest: sha256:a681bf74805b80d03eb21a6c0ef168a976108a287a74167ab593fc953aac34df
Status: Downloaded newer image for node:8-stretch
 ---> 8eeadf3757f4
Step 2/10 : WORKDIR "/app"
 ---> Running in b116c13b8bc6
Removing intermediate container b116c13b8bc6
 ---> 79df7ce5d6f2
Step 3/10 : RUN apt-get update  && apt-get dist-upgrade -y  && apt-get clean  && echo 'Finished installing dependencies'
 ---> Running in 9221a8a30aa8
Ign:1 http://deb.debian.org/debian stretch InRelease
Get:2 http://security.debian.org/debian-security stretch/updates InRelease [53.0 kB]
Get:3 http://deb.debian.org/debian stretch-updates InRelease [93.6 kB]
Get:4 http://deb.debian.org/debian stretch Release [118 kB]
….

docker images
REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
istiodemo    v1        4df2c7c697f8   5 minutes ago   1.43GB

Now, we deploy and run the container. (Because -p 3000 specifies only the container port, Docker publishes it on a random high host port.)

docker run -d --name localdemo -p 3000 istiodemo:v1
66dab01e6d11f6356fcdcdcc6268a80a5f8e31cbfbd9bc4c938d5f4ea6b9d326

We see the deployed container running as follows:

docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS                     NAMES
66dab01e6d11   istiodemo:v1   "docker-entrypoint.s…"   6 seconds ago   Up 5 seconds   0.0.0.0:55000->3000/tcp   localdemo

Then, we browse to the published host port to view our default page at localhost:55000. Note the version number in the following screen capture image.

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V1.0

We create one more version of this app. In the public/ directory, we edit index.html and find this block of code:

<h1>Congratulations!</h1>
    <h2>You are currently running a Node.js app built for the IBM Cloud.</h2>
    <h2> V1.0</h2>

And we modify it to be:

<h1>Congratulations!</h1>
    <h2>You are currently running a Node.js app built for the IBM Cloud.</h2>
    <h2> V2.0</h2>
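
If you prefer, the same change can be made with a one-liner (GNU sed syntax is assumed here; on macOS, use sed -i '' instead):

sed -i 's/V1.0/V2.0/' public/index.html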

Then, we build a container with the v2 tag:

docker build -t istiodemo:v2 .
Sending build context to Docker daemon  421.4kB
Step 1/10 : FROM node:8-stretch
8-stretch: Pulling from library/node
Digest: sha256:a681bf74805b80d03eb21a6c0ef168a976108a287a74167ab593fc953aac34df
Status: Downloaded newer image for node:8-stretch

We run the new image and validate that both containers are up:

docker run -d --name localdemo2 -p 3000 istiodemo:v2
a9e783e9d861f2d2eb2ff515588e61714b65c8ac3054a285686e2448bf314d7f
docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS                     NAMES
a9e783e9d861   istiodemo:v2   "docker-entrypoint.s…"   5 seconds ago   Up 3 seconds   0.0.0.0:55001->3000/tcp   localdemo2
66dab01e6d11   istiodemo:v1   "docker-entrypoint.s…"   22 hours ago    Up 22 hours    0.0.0.0:55000->3000/tcp   localdemo

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V2.0
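
From the command line, both running versions can be checked side by side (assuming curl is installed; the host ports come from the docker ps output above):

curl -s http://localhost:55000 | grep -o 'V[0-9]\.[0-9]'   # prints V1.0
curl -s http://localhost:55001 | grep -o 'V[0-9]\.[0-9]'   # prints V2.0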

Now, we push the Docker images to Docker Hub:

docker tag istiodemo:v1 mvelasc/istiodemo:v1
docker push mvelasc/istiodemo:v1
The push refers to repository [docker.io/mvelasc/istiodemo]
f011d6692442: Pushed
18ac6d8c624e: Pushed
f06a258f26ad: Pushed
63143c8b3400: Pushed
a6308506d558: Pushed
423451ed44f2: Pushed
b2aaf85d6633: Pushed
88601a85ce11: Pushed
42f9c2f9c08e: Pushed
99e8bd3efaaf: Pushed
bee1e39d7c3a: Pushed
1f59a4b2e206: Pushed
0ca7f54856c0: Pushed
ebb9ae013834: Pushed
v1: digest: sha256:0cf2bab8f3275abad959f616cee0cd9736ba6d6f27273e3cf16538a897cc46c0 size: 3264


docker tag istiodemo:v2 mvelasc/istiodemo:v2
docker push mvelasc/istiodemo:v2
The push refers to repository [docker.io/mvelasc/istiodemo]
dfb263d908b7: Pushed
18ac6d8c624e: Layer already exists
f06a258f26ad: Layer already exists
63143c8b3400: Layer already exists
a6308506d558: Layer already exists
423451ed44f2: Layer already exists
b2aaf85d6633: Layer already exists
88601a85ce11: Layer already exists
42f9c2f9c08e: Layer already exists
99e8bd3efaaf: Layer already exists
bee1e39d7c3a: Layer already exists
1f59a4b2e206: Layer already exists
0ca7f54856c0: Layer already exists
ebb9ae013834: Layer already exists
v2: digest: sha256:72b25fd458aaff0ab33a2e16a5eb3d9bce414927cc3dfcec2f31c68f29ae141c size: 3264

Deploying to the cloud

Now, let’s deploy it to Kubernetes. First, we create an IBM Cloud Kubernetes Service cluster. After the cluster is created, we deploy the Istio add-on.
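
A minimal sketch of those steps from the command line, assuming the IBM Cloud CLI with the Kubernetes Service plug-in; the cluster name mycluster is a placeholder:

# Enable the managed Istio add-on on an existing cluster
ibmcloud ks cluster addon enable istio --cluster mycluster

# Label the target namespace so Istio automatically injects its Envoy
# sidecar proxy into pods deployed there
kubectl label namespace default istio-injection=enabled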

Now, we deploy to the cluster. The following manifest creates a service backed by two different versions (v1 and v2) of the microservice:

apiVersion: v1
kind: Service
metadata:
  name: istio-node
  labels:
    app: istio-node
spec:
  ports:
  - port: 3000
    name: http
  selector:
    app: istio-node
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-node-v1
  labels:
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: istio-node
      version: v1
  template:
    metadata:
      labels:
        app: istio-node
        version: v1
    spec:
      containers:
      - name: istio-node
        image: docker.io/mvelasc/istiodemo:v1
        ports:
        - containerPort: 3000
        imagePullPolicy: Always
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-node-v2
  labels:
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: istio-node
      version: v2
  template:
    metadata:
      labels:
        app: istio-node
        version: v2
    spec:
      containers:
      - name: istio-node
        image: docker.io/mvelasc/istiodemo:v2
        ports:
        - containerPort: 3000
        imagePullPolicy: Always

Apply the configuration:

kubectl apply -f ./istio_deploy.yaml
service/istio-node created
deployment.apps/istio-node-v1 created
deployment.apps/istio-node-v2 created

kubectl get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
istio-node-v1   1/1     1            1           8m32s
istio-node-v2   1/1     1            1           8m32s
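
If sidecar injection is enabled on the namespace, each pod should run two containers: the Node.js app plus the injected Envoy proxy. A quick check (the label selector matches the manifest above):

kubectl get pods -l app=istio-node
# READY should read 2/2 for each pod when the sidecar is injected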

Deploying the service mesh components

The service mesh consists of three components: a gateway that accepts traffic into the mesh, a destination rule that defines the versions of a service that are available, and a virtual service that controls how we route requests and traffic.

We first create a gateway for Istio with the following manifest:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: istio-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

Apply the configuration:

kubectl apply -f istio_gateway.yaml
gateway.networking.istio.io/istio-gateway created

Next, we create a destination rule. This step is important because it tells Istio about the different versions (subsets) of a service in our deployment and the labels by which they can be referenced. In this case, the istio-node host has two versions (v1 and v2):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istio-destination
spec:
  host: istio-node
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

Apply the configuration:

kubectl apply -f istio_destinationrule.yaml
destinationrule.networking.istio.io/istio-destination created

Now let’s create a virtual service. The important part here is that its destination maps to the istio-node service we created earlier. Note that because we did not specify a subset in the destination section, user requests will cycle between the two deployed versions.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istiodemoservice
spec:
  hosts:
  - "*"
  gateways:
  - istio-gateway
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: istio-node
        port:
          number: 3000

Apply the configuration:

kubectl apply -f istio_virtualservice.yaml
virtualservice.networking.istio.io/istiodemoservice created
kubectl get virtualservices
NAME               GATEWAYS          HOSTS   AGE
istiodemoservice   [istio-gateway]   [*]     8m31s

Now we can test. Let’s get the public IP of the Istio ingress gateway:

kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                                      AGE
istio-egressgateway    ClusterIP      172.21.109.19   <none>           80/TCP,443/TCP,15443/TCP                                                     122m
istio-ingressgateway   LoadBalancer   172.21.48.122   150.238.41.117   15021:31601/TCP,80:30028/TCP,443:30392/TCP,15012:30983/TCP,15443:30206/TCP   122m
istiod                 ClusterIP      172.21.60.2     <none>           15010/TCP,15012/TCP,443/TCP,15014/TCP

We can verify by browsing to the app at that IP. As we refresh, we get routed to v1 and v2 in a round-robin manner, as demonstrated in the following screen capture images:

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V1.0

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V2.0
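
The same round-robin behavior is visible from the command line (assuming curl and the INGRESS_IP variable captured above):

# Issue a few requests and extract the version string from each response
for i in 1 2 3 4; do
  curl -s http://$INGRESS_IP/ | grep -o 'V[0-9]\.[0-9]'
done
# The output should alternate between V1.0 and V2.0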

Isolating specific versions of a service

There are cases where you want to isolate requests to only one version of a microservice. This is where the subset configuration comes in handy, because it can direct requests to only certain versions of a service. For example, if you are not gradually shifting traffic and simply want one version of the service live at a time, you can use a subset to pin the route to a specific version and later update the virtual service to point at the new version.

We can update the virtual service to route only to our v1 app as follows:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istiodemoservice
spec:
  hosts:
  - "*"
  gateways:
  - istio-gateway
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: istio-node
        subset: v1
        port:
          number: 3000

Apply the configuration:

kubectl apply -f istio_virtualservice.yaml
virtualservice.networking.istio.io/istiodemoservice configured

We verify, as demonstrated in the following screen capture:

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V1.0
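
A quick command-line check (same curl and INGRESS_IP assumptions as before) confirms that every response now comes from v1:

for i in 1 2 3; do curl -s http://$INGRESS_IP/ | grep -o 'V[0-9]\.[0-9]'; done
# Every line should read V1.0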

We can then update the virtual service to route only to the v2 service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istiodemoservice
spec:
  hosts:
  - "*"
  gateways:
  - istio-gateway
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: istio-node
        subset: v2
        port:
          number: 3000

Apply the configuration:

kubectl apply -f istio_virtualservice.yaml
virtualservice.networking.istio.io/istiodemoservice configured

We verify, as demonstrated in the following screen capture:

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V2.0

Siphoning and splitting traffic

Suppose you want to siphon or shift traffic from one version to another. You can apply weights to the subsets, and increase or decrease the weight given to the new version of a microservice as needed.

This is accomplished by modifying our virtual service configuration: we add a second destination, then add subset and weight settings to each destination. In this case, traffic is split 50/50 between the two versions of the microservice:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: istiodemoservice
spec:
  hosts:
  - "*"
  gateways:
  - istio-gateway
  http:
  - match:
    - uri:
        exact: /
    route:
    - destination:
        host: istio-node
        subset: v1
        port:
          number: 3000
      weight: 50
    - destination:
        host: istio-node
        subset: v2
        port:
          number: 3000
      weight: 50

Apply the configuration:

kubectl apply -f istio_virtualservice.yaml
virtualservice.networking.istio.io/istiodemoservice configured

To verify, browse to the app URL repeatedly: about 50% of the traffic is sent to v1 and 50% to v2, as demonstrated in the following screen capture images:

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V1.0

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V1.0

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V2.0

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V2.0
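
You can quantify the split with a quick loop (100 requests, tallied per version; same curl and INGRESS_IP assumptions as before):

# Count how many responses come from each version
for i in $(seq 1 100); do
  curl -s http://$INGRESS_IP/ | grep -o 'V[0-9]\.[0-9]'
done | sort | uniq -c
# Expect roughly 50 V1.0 and 50 V2.0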

You could modify the weights to siphon off more traffic from v1 (perhaps v1 is an older version that we are cycling out). In this case, we adjust the weights so that only 30% of traffic is handled by v1 and the remaining 70% by v2:

    - destination:
        host: istio-node
        subset: v1
        port:
          number: 3000
      weight: 30
    - destination:
        host: istio-node
        subset: v2
        port:
          number: 3000
      weight: 70

Apply the configuration:

kubectl apply -f istio_virtualservice.yaml
virtualservice.networking.istio.io/istiodemoservice configured

Now, we see more responses from our v2 app than v1, as demonstrated in the following screen capture images:

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V2.0

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V2.0

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V1.0

Page view that says Congratulations! You are currently running a Node.js app built for the IBM Cloud. V2.0

Conclusion

In this article, we spent a lot of time experimenting with different types of routing and traffic distribution, but we never modified the deployed microservices. We deployed them once and then used the service mesh configuration to modify how users can gain access to the microservices. This is the power that a service mesh such as Istio can offer. It allows us to deploy versions of microservices independent of how we make them available to end users. Thus, we can make the services available instantly or gradually.

These concepts are the underpinnings of deployment strategies such as canary releases, A/B testing, or region-based rollouts, which deploy new versions of services in a controlled manner. None of this requires changing the deployed services themselves. For example, if a new microservice build fails, you can cut off user access to the new version and redirect traffic back to the old one more quickly than performing a rollback of containers in Kubernetes.

Service meshes provide a layer of abstraction for the advanced networking capabilities and flexibility needed for today’s complex layers of microservices.