Microservices, meet DevOps

In the first post of this Microservices series, Jesus Almaraz introduced the What’s For Dinner application and explained how we’re building a reference application on the Netflix Open Source Software microservices framework, deployable to IBM Bluemix (…or anywhere!). That post covered the BUILD phase of our work, and all of that work will remain available, untouched, in the BUILD branches of the linked GitHub repositories (aside from any necessary bug fixes).

Now we’re moving to the DEVOPS phase and looking at how it impacts a microservices-based application. There are several tenets of microservices (see Microservices in Action for a better overview), but one of the most important requirements for successful microservices is independence. That means each microservice can operate in its own context, deploy new versions, respond to dependent service outages gracefully, and otherwise do whatever it wants, as long as it still provides the expected user experience. That independence is the main reason we did all of the work in the BUILD phase to integrate Netflix’s Eureka and Zuul components for dynamic service registration and service proxying.
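For context on what that Eureka registration looks like from inside a service, here is a minimal sketch of a Spring Boot microservice using Spring Cloud Netflix. The class and service names are illustrative, not the exact code from our BUILD repositories:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

    // On startup, this service registers itself with Eureka under its configured
    // application name (e.g. "appetizer-service" -- illustrative), so Zuul and the
    // other services can route to it without hard-coded hostnames.
    @SpringBootApplication
    @EnableEurekaClient
    public class AppetizerServiceApplication {
        public static void main(String[] args) {
            SpringApplication.run(AppetizerServiceApplication.class, args);
        }
    }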

The bulk of our DEVOPS-phase work is building Open Toolchains that automatically build and deploy all of the necessary components of our What’s For Dinner menu application. The key difference from what we could do in the past is that we now have independent build and deploy pipelines, each functioning separately, yet still aggregated through a single toolchain. This gives us control of and oversight into what our application is doing, while allowing each individual microservice to evolve and operate on its own.

Automated delivery pipelines are the key to DevOps

What’s For Dinner Open Toolchain, integrating GitHub and Bluemix Delivery Pipeline

Our reference application can be deployed via Cloud Foundry or Docker containers, and we wanted to show how to build and operate pipelines with each type of technology. As with our larger end-to-end reference application, we could deploy each component on a different runtime or technology and the application would remain fully functional, thanks to the independence between microservices. However, since What’s For Dinner is a smaller, microservices-focused reference application, each pipeline targets one technology or the other, not both.

Cloud Foundry pipelines

What’s For Dinner Cloud Foundry deployment pipeline for the Appetizer Service.

Keeping with the notion of microservice independence, our individual toolchains and delivery pipelines live in separate GitHub repositories. To access the toolchain that deploys What’s For Dinner on Bluemix using Cloud Foundry, go to https://github.com/ibm-cloud-architecture/refarch-cloudnative-wfd-devops-cf. The README of that repository has a Create Toolchain easy button; click it and follow the directions to get What’s For Dinner running in a space in your Bluemix organization. This toolchain creates GitHub integrations for all eight of the components required to deploy What’s For Dinner in its current state (as of the DEVOPS milestone), as well as Delivery Pipeline instances with all of the necessary code to perform Java builds and Cloud Foundry deployments. The beauty of the Open Toolchain integrations is that you can set up your entire application deployment in a single click, which can speed up development, testing, and deployment cycles by treating everything as code.

NOTE: Each application deploys with a requirement of 512MB of RAM for its Cloud Foundry runtime, so you need to deploy into a space and organization that can support a minimum of 4GB of Cloud Foundry runtime memory (see the IBM Bluemix Cloud Foundry Apps Planning Docs).

Container pipelines

What’s For Dinner Container deployment pipeline for the Appetizer Service

If you would rather deploy What’s For Dinner with Docker containers, check out https://github.com/ibm-cloud-architecture/refarch-cloudnative-wfd-devops-containers for the exact same user experience, targeting the IBM Bluemix Container Service. Everything we covered for the Cloud Foundry pipeline holds true here, with one important exception. The basic Cloud Foundry networking model exposes a component to public traffic as soon as a route is mapped to the application. There are pros and cons to this practice (the details of which are outside the scope of this post), but the IBM Bluemix Container Service’s networking model offers more flexibility than the IBM Cloud Foundry Instant Runtimes’ model: it creates a private overlay network inside each of your Bluemix spaces. This allows you to deploy containers to Bluemix and have them communicate with each other, all without exposing any of them to public internet traffic.

In our container-based pipeline, we only assign public routes to the components that need to receive external traffic and that we can configure to route it appropriately. Namely, the Zuul Proxy and the Menu UI receive public routes, and all subsequent inter-service communication happens on the Container Service’s private overlay network, without ever leaving Bluemix.
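To make that concrete, here is a rough sketch (assuming Spring Cloud’s Ribbon-backed RestTemplate, with illustrative service names and paths) of how one service can call another by its Eureka-registered name over the private network, so neither side needs a public route:

    import org.springframework.cloud.client.loadbalancer.LoadBalanced;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.client.RestTemplate;

    @Configuration
    public class PrivateRoutingConfig {

        // Ribbon resolves the logical service name through Eureka to a private
        // overlay-network address, so the HTTP call never crosses a public route.
        @Bean
        @LoadBalanced
        public RestTemplate restTemplate() {
            return new RestTemplate();
        }
    }

    // Example usage elsewhere (illustrative service name and path):
    //   String appetizers = restTemplate.getForObject(
    //           "http://appetizer-service/appetizers", String.class);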

NOTE: Each application deploys with a requirement of 512MB of RAM per Container instance, so you need to deploy into a space and organization with a minimum 4GB RAM quota for Container runtimes (see the IBM Bluemix Container Service Planning Docs).

Ensuring zero-downtime, even while deploying new versions

We’ve separated our application components into two categories: application services and infrastructure services. Updating these two types of services often requires a different approach or degree of care. In this post, I will refer to our infrastructure services as stable services and our application services as volatile services. The application components are not volatile in the sense that we expect them to explode or erupt any time soon; they are simply built on codebases that will actively evolve, because they are the core of our application’s business logic. The majority of our infrastructure services are quite stable, since they already do what we need them to do (which is why we are using them).

To ensure zero downtime while rolling out new versions of both sets of services, we’ve integrated our pipelines with the IBM Active Deploy service, available natively on Bluemix, to control how we roll out, scale up, and manage new versions of our application components. For more information on Active Deploy, see the official Bluemix Docs.

Volatile services

What’s For Dinner Application – Volatile Services

Let’s talk about updating our application services, the volatile ones, first. This is where the bulk of our effort goes and where the core business value comes from. Because What’s For Dinner is built on top of Netflix Eureka for service discovery, anything we spin up automatically registers with Eureka and becomes available for service routing. What Active Deploy ensures is that at least one instance of a given version stays available in Eureka while we roll out new versions.
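As a small illustration of that behavior, here is a sketch using Spring Cloud’s DiscoveryClient (the service name is hypothetical): a consumer always sees whichever instances are currently registered, whether they belong to the outgoing version or the incoming one.

    import java.util.List;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.cloud.client.ServiceInstance;
    import org.springframework.cloud.client.discovery.DiscoveryClient;
    import org.springframework.stereotype.Component;

    @Component
    public class MenuInstanceLogger {

        @Autowired
        private DiscoveryClient discoveryClient;

        // During an Active Deploy rollout, both the outgoing and incoming instance
        // groups appear here; consumers keep routing to whatever is registered,
        // which is why the application never goes dark mid-deploy.
        public void logMenuInstances() {
            List<ServiceInstance> instances = discoveryClient.getInstances("menu-service");
            instances.forEach(instance -> System.out.println(instance.getUri()));
        }
    }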

Let’s take the UI microservice as an example. We push some changes to the UI GitHub repository to theme the UI for the upcoming seasonal holidays. This kicks off the pipeline, which builds a new version of our application and pushes it to Bluemix (either Cloud Foundry or Containers works for this example). Active Deploy initially deploys the new version of the service without a public route, allowing it to scale up to enough instances before a public route is added so we can test (manually or automatically) that the new version behaves as expected. Once we are satisfied, Active Deploy removes the old version’s instances and cleans up the previous routes and metadata. This makes it really simple to “think, build, deploy” and gets your core business logic to market faster.
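One simple way to support that test step, sketched here with hypothetical names and properties (this is not part of the reference application), is a small endpoint that reports the build version, so a manual tester or an automated pipeline stage can confirm which build Active Deploy just brought online before the rollover completes:

    import java.util.Collections;
    import java.util.Map;

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class VersionController {

        // Assumed property injected at build time (e.g. by the pipeline);
        // defaults to "unknown" if it was never set.
        @Value("${wfd.build.version:unknown}")
        private String buildVersion;

        // Hitting /version on the not-yet-public route tells us which build
        // Active Deploy has just deployed.
        @GetMapping("/version")
        public Map<String, String> version() {
            return Collections.singletonMap("version", buildVersion);
        }
    }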

Stable services

What’s For Dinner Application – Stable Services

As we’ve shown, the speed with which we can deploy our volatile services is great. Updating our stable services, however, requires a bit more care, because all of our volatile services depend on these stable infrastructure services. Most of the time, you will want to deploy your stable services at a “well-known location”. That means DNS-based routing, Cloud Foundry-based service models, or simply a consistent way of looking up where these required services are, without needing to know the hard-coded specifics of the individual instances providing them. If you look at what we’ve done in the Eureka Active Deploy integration, you will see some additional logic to ensure that when new versions of Eureka come online, the services that depend on Eureka can still find them. This is not something that usually occurs, as your stable services will evolve and version much less often than your volatile services, but we wanted to implement it here to show that it is still possible with all the same pieces (infrastructure, platform, code, and more).
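To make “well-known location” concrete, here is a minimal sketch of how a volatile service might resolve its Eureka endpoint from a stable address rather than from any individual instance. The environment variable name and default route are assumptions for illustration, not values from our toolchains:

    // Illustrative only: the registry is reached via a stable, well-known route,
    // not via the address of any particular Eureka instance, so new versions can
    // come online behind the same route without clients noticing.
    public final class StableServiceLocator {

        // Hypothetical stable route for the Eureka registry in this space.
        private static final String DEFAULT_EUREKA_URL =
                "https://wfd-eureka.mybluemix.net/eureka/";

        private StableServiceLocator() {
        }

        public static String eurekaUrl() {
            // EUREKA_URL is an assumed environment variable set by the deploy pipeline.
            String fromEnv = System.getenv("EUREKA_URL");
            return (fromEnv == null || fromEnv.isEmpty()) ? DEFAULT_EUREKA_URL : fromEnv;
        }
    }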

Looking forward…

We’ve made it to the second blog post in our What’s For Dinner series, showing you how to build and automatically deploy your microservices-based applications while ensuring zero downtime across your application stack. Looking back at the metaphor we started with, we should be “walking” pretty easily now, with every code change pushed to your GitHub repositories triggering new builds and new versions of your applications being deployed.

Where do we go from here? I called out that each microservice can do whatever it wants, as long as it provides the expected user experience. So far, we’ve handled everything that is within our control. But what happens when the Dessert microservice stops responding to requests? What happens if there’s a network outage and our awesome UI can’t find the Menu Aggregator service? We answer those questions by building RESILIENCY into the application, through the use of more open source software. Our next post in this series will show how we’ve implemented Netflix’s Hystrix circuit breaker technology, along with OpenTracing / Zipkin for distributed tracing.
