Reactive in practice, Unit 11: Deploy to Kubernetes

This unit focuses exclusively on Kubernetes as the deployment target for Reactive Stock Trader. This unit is not an exhaustive set of instructions on how to get Reactive Stock Trader ready for a real, secure production environment, but rather a source of inspiration as you work to package and deploy a proof of concept that you’ve built yourself.

We hope that by capturing our advice and demonstrating a few best practices through Reactive Stock Trader, this unit will serve as an accelerator for teams looking to move event-sourced microservices from development to production.

Kubernetes introduction

Kubernetes is a system for managing and deploying container-based apps. Think of Kubernetes as an operating system for distributed systems. Kubernetes helps to abstract away cloud hardware, letting developers focus on the way applications should behave with an effectively unlimited pool of resources.

In addition to providing an abstraction layer for cloud hardware, Kubernetes also acts as an enhanced platform for your entire DevOps team, further blurring the line between development and operations. While ‘serverless’ gets a lot of press, in my opinion the real change under way is the erosion of development and operations as distinct disciplines. Over the coming years, developers will be expected to understand the runtime characteristics of the systems they build, and traditional operations teams will perform more and more ops automation through programming, using tools such as Knative.

Knative is a platform to help build serverless applications for Kubernetes (not to be confused with stateless applications or functions as a service). Knative “extends Kubernetes to provide a set of middleware components that… focus on solving mundane but difficult tasks such as deploying a container, routing and managing traffic with blue/green deployment, scaling automatically and sizing workloads based on demand, and binding running services to eventing ecosystems” (Knative documentation). Essentially, Knative is the programmatic glue between your existing systems and Kubernetes. If you require custom routing and traffic management, custom auto-scaling strategies, or custom production launch choreography such as tailored ‘blue-green’ or ‘canary’ deployments, you can build those strategies with Knative. We won’t cover Knative directly in this unit, but want to bring it to the attention of developers as a powerful tool in a team’s DevOps arsenal.

Kubernetes has a fairly steep learning curve, but we expect that learning curve to smooth out over the coming years as container orchestration platforms become mainstream. The learning curve with Kubernetes is less steep than learning how to deploy a reactive system without a tool like Kubernetes.

While working through this unit, remember that Kubernetes is relatively new, and almost all vendors of development tools and frameworks are working to add Kubernetes support. The space is fast-moving, so enter this chapter with the frame of mind that this is the start of your journey with Kubernetes and reactive systems at runtime, rather than the conclusion.

The good news is that Lagom includes a ton of extras to make deploying to Kubernetes a relatively smooth process. Let’s begin by discussing how production Lagom differs from Lagom at development time.

Why is container-management required for Lagom?

Lagom is as easy as it gets to work with during development: on a local workstation, developers launch Lagom with sbt runAll. This launches all microservices together as a complete system for rapid development and debugging, and ensures that dependencies such as Cassandra and Kafka are made available.
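
For reference, the entire development-time workflow is a single command, run from the root of the reactive-stock-trader project:

sbt runAll    # starts every microservice, plus embedded Cassandra and Kafka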

Let’s explore why this is so important, and what’s really going on under the hood.

Lagom is a platform for delivering reactive microservices. At development time, it’s easy to think of Reactive Stock Trader as a single monolithic system. That’s the superpower of Lagom. While microservices are heavily hyped, they have productivity drawbacks during the development phases of a project. To increase productivity, Lagom lets you treat a system conceptually as a monolith at development time while treating it as a microservices architecture in production. Let’s explain this in more detail.

Microservices have many benefits, but almost all of them are realized at runtime. The main benefit of microservices is the ability to optimize your system at runtime in very specific ways. For instance, one bounded context of your system may receive 100x the traffic of another bounded context, and perhaps the traffic comes in bursts rather than a sustained peak. Being able to package each bounded context as its own service and optimize each at runtime will pay huge dividends over time, preventing you from either overpaying for unused resources, or underpaying and experiencing outages. With a microservices architecture, we can closely monitor each service and tune individually, increasing the replication factor of each service as demand ebbs and flows. Having the flexibility to configure individual components at runtime is incredibly efficient.

While microservices may help to increase the scalability and resilience of individual components at runtime, they come at a cost during development. Building out a system of dozens (or even hundreds) of microservices introduces a significant amount of cognitive complexity when building a system from scratch. Imagine working on the ‘wire transfer’ bounded context in complete isolation from the rest of the services in Reactive Stock Trader. That’s how many microservices architectures are developed today, with each service in its own repo and minimal coordination between teams. This works well for established systems, but for teams getting to know a new domain, it is an incredibly inefficient way to develop. During initial development, using a monorepo with a shared build makes a fast-moving project much easier to work with and reason about. Once the system is more mature, there’s no magic involved in physically separating each service all the way down to the repo. After all, all we need is service discovery to connect components together at runtime.

Working with a monolithic system can be easier to reason about because everything is in one place. For this reason, some well-known companies prefer monorepos over a single repo per microservice. Lagom leans towards the monorepo style; however, any external system you integrate with can be fully hosted in its own source control repository. There’s also nothing preventing you from refactoring a Lagom system into individual repos.

Essentially, Lagom enables the benefits of microservices at runtime with the convenience of monorepos and monoliths at development time.

Before we deploy Reactive Stock Trader as a collection of microservices, we need to install Minikube as a local Kubernetes environment, and then there are three dependencies that are (mostly) hidden away from developers during local development of a Lagom system: Akka Cluster, Cassandra, and Kafka. Let’s explore how to ensure we can support all three dependencies before we deploy Reactive Stock Trader to a local Kubernetes cluster.

Akka Cluster

The PubSub API requires Akka Cluster to function properly, which means Akka must be able to form a cluster. This happens automatically at development time when launching with sbt runAll, but in a production-like environment the main complexity of forming a cluster lies in specifying seed nodes, also called ‘seeding the cluster’.

A platform like Kubernetes, along with the Akka Management library, makes this process mostly hands-off once a proper configuration is specified.

For more details about how Akka Cluster works under the hood and how to manually form a cluster, read “Akka clustering” (IBM Developer, June 2019).

Cassandra

Persistent entities require a high-availability journal in order to function. Reactive Stock Trader uses Cassandra as the default journal under the hood, but any high-availability database can be substituted, such as PostgreSQL. We’ll only cover Cassandra usage as part of Reactive Stock Trader.

Kafka

The Message Broker API requires Kafka under the hood for high availability. As you may remember from previous units, in production Kafka is deployed across a number of nodes in order to guarantee high availability and reduce the odds of Kafka becoming unavailable or corrupted, for example by losing messages that have already been published.

The good news is that it’s possible to get all three working on a local Kubernetes cluster! Now that we understand the dependencies of Reactive Stock Trader at runtime, let’s cover how to set up Kubernetes on your local development machine and deploy these dependencies.

Setting up Minikube

Minikube is a lightweight version of Kubernetes that we can install on our development box to test out a simulated production deployment of Reactive Stock Trader.

There are a few key differences between Minikube and a full Kubernetes cluster, the main one being that we’ll be running on a single node rather than a multi-node cluster. However, as you’ll see, moving to a multi-node cluster will eventually require only a few configuration changes. Most of the hard work involved in a production deployment of Lagom is ensuring that the values in the Lagom, sbt, and Kubernetes configurations are correct. Once the configuration is correct and verified in Minikube, moving to a cloud-hosted version of Kubernetes is fairly straightforward.

For instructions on how to set up Minikube on MacOS, follow the guide that we’ve put together.

If you’re a Windows or Linux user, follow the official guide, instead.

Once you have Minikube successfully installed, come back to this unit and proceed.

Kubernetes 101

Let’s discuss some specific Kubernetes components and terminology, and how they relate to Lagom and reactive systems development. Some key terms include:

  • kubectl
  • Helm and Tiller
  • Operators
  • Controllers (pods, sets, and deployments)
  • Services

This unit is not meant to be an exhaustive set of instructions on Kubernetes, but will provide just enough information to get you started with a production-like deployment. The goal is to create an accelerated path to learning about reactive systems at runtime. In the final unit, Unit 12, we’ll provide recommendations on how to take your understanding to the next level.

kubectl

kubectl is the command tool used throughout this unit to interact with Kubernetes. Moving forward, this will be the main way that you’ll issue commands to your Kubernetes cluster.
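
A few commands we’ll rely on repeatedly in this unit:

kubectl get pods --all-namespaces     # list every pod in the cluster
kubectl logs <pod-name>               # fetch the logs of a specific pod
kubectl apply -f deploy/kubernetes    # apply a folder of configuration files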

For more information, check out the official documentation.

Helm and Tiller

Helm is a package manager for Kubernetes, similar to npm in the Node.js world. Helm actually comes in two parts: client and server.

Helm is the name of the client-side component, which you will see when installing Kafka and Cassandra. Helm can run on the command line, in your CI/CD pipeline, or as part of another automation script.

Tiller is the name of the server-side component of Helm, which we install in our Kubernetes cluster. It manages the release of Helm Charts.

Helm Charts are instructions that are bundled in a YAML file format.

The combination of Helm, Tiller, and Helm Charts provides a certain level of convenience for managing packages. For example, Kafka and Cassandra will be straightforward to install to Kubernetes because they are packaged with Helm Charts. But there are some drawbacks. Helm Charts lack key considerations, such as resource limits and network policies. This means that you can’t simply package a resource with Helm and treat it like a binary, similar to Homebrew, apt, or npm.

We’ll use Helm and Tiller in this unit, but as you get more comfortable with Kubernetes and perhaps start to package your own applications, we recommend using Operators instead.
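
To give a feel for the workflow, here is a rough sketch of what installing a chart with Helm 2 looks like. The exact repository, chart, and release names for Reactive Stock Trader are in the deployment instructions linked later in this unit, so treat these as illustrative:

helm init                                            # install Tiller into the cluster
helm repo add strimzi https://strimzi.io/charts/     # register a chart repository
helm install strimzi/strimzi-kafka-operator \
     --name reactivestock-strimzi --namespace kafka  # release the chart into the kafka namespace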

Operators

Like Helm and Tiller, an Operator is a way of packaging, deploying, and managing a Kubernetes application. Operators are a platform for managing your Kubernetes applications.

We won’t spend much time covering operators in this unit, but we want to mention them for context. For instance, we could use the Operator framework to partially automate the deployment and management of Reactive Stock Trader. Operators will slowly replace Helm as the standard way of packaging applications, frameworks, and tools for Kubernetes.

Learn more about Operators.

Pods

A Pod is the fundamental execution unit of Kubernetes. A pod has a unique IP in the cluster, can provide storage, and has options for how its application contents (or containers) should run. As we progress further, we will package Reactive Stock Trader into several Docker containers. Each container will then be configured into its own pod. Then the pod (and its encapsulated container) will be deployed to the Kubernetes cluster. A pod can be replicated — multiple exact replicas of the same pod — and deployed to many nodes in our Kubernetes cluster. In a real Kubernetes cluster, we may have dozens of nodes and each pod may be replicated to take advantage of a large pool of those nodes.

Also note that if we have multiple containers that need to run on the same machine for low-latency purposes, we can place them together in a single pod so that they are always co-scheduled. Kubernetes also supports affinity rules for influencing where pods are scheduled relative to one another.

To demonstrate running pods, after we install Kafka using the Strimzi Helm chart (introduced later), we should be able to see all of the pods that Strimzi has deployed by executing the following command:

kubectl get pods --namespace=kafka

This will return all of the pods in the Kafka namespace:

NAME                                                     READY   STATUS    RESTARTS   AGE
reactivestock-strimzi-entity-operator-75497487dd-gxml8   3/3     Running   22         2d14h
reactivestock-strimzi-kafka-0                            2/2     Running   10         2d14h
reactivestock-strimzi-zookeeper-0                        2/2     Running   10         2d14h
strimzi-cluster-operator-5658bb5c6-l6pbn                 1/1     Running   9          2d14h

Namespaces provide a scope for naming of resources. Resource names must be unique within a namespace but not across namespaces. In the example above, all of the pods required for the successful operation of Kafka are organized into the kafka namespace.

Also, note the entry for the reactivestock-strimzi-entity-operator-75497487dd-gxml8 pod. It has a Ready value of 3/3. For now, let’s focus on the denominator: it tells us that this pod has three containers running in its execution unit.
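
To see which three containers those are, we can describe the pod:

kubectl describe pod reactivestock-strimzi-entity-operator-75497487dd-gxml8 --namespace=kafka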

Controllers

A Controller in Kubernetes comes in different flavors, each one helping to manage one or more resources. Each controller is spread across nodes in the Kubernetes cluster as part of the control plane, which monitors a set of resources in a control loop as described in the official documentation.

In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the API server and makes changes attempting to move the current state towards the desired state. This is conceptually similar to robotics and automation, in which a control loop is a non-terminating loop that regulates the state of the system.

We will cover three controller types specifically: StatefulSet, ReplicaSet, and Deployment.

ReplicaSet

ReplicaSet is a controller configuration to ensure that a certain number of stateless pods are running at any given time. Rather than work directly with ReplicaSets, we’ll favor Deployments, which are a higher level concept.

Read the documentation for more details.

StatefulSet

StatefulSet is a more advanced type of controller that is used to control pods that require sticky identifiers, usually used for some type of ordered scheduling or persistent storage requirements. Cassandra uses StatefulSets to maintain persistence of the journal. Kafka uses StatefulSets to maintain persistence of topics, partitions, and offsets.

Read the documentation for more details.

Deployments

Deployments are the core declarative model of configuring pod requirements that we’ll work with in Reactive Stock Trader. Essentially, Deployments control declarative updates of Pods and ReplicaSets. This is how we’ll configure our Lagom microservices for deployment to Kubernetes.

Read the documentation for more details.

Now that we understand the basic tools that we will use in this unit, let’s put them to use by getting Cassandra and Kafka running on Minikube.

Deploy Cassandra and Kafka to Minikube

Reactive Stock Trader requires Cassandra to ensure that persistent entities stay persistent. That is, if one of our entities crashes or needs to be recovered from the journal for any reason, the state will be replayed from the events stored in Cassandra. Cassandra is also required for read-side processors to work correctly. These concepts were already covered earlier in the series, so we won’t go into great detail here.

We’ve put together a full set of instructions to get Cassandra and Kafka deployed to Minikube. Visit and execute the instructions before continuing.

Cassandra

The Cassandra module in Lagom comes out of the box ready to locate your Cassandra cluster using the service locator. In other words, Cassandra is like any other external service that Lagom may need to locate. Each microservice that contains a persistent entity will need to have a production configuration for Cassandra.

Let’s take a look at the configuration of wire-transfer-impl for inspiration.

Opening the src/main/resources/application.conf file, we can see a fairly basic development-time configuration for Cassandra:

application.conf

wiretransfer.cassandra.keyspace = wiretransfer

cassandra-journal.keyspace = ${wiretransfer.cassandra.keyspace}
cassandra-snapshot-store.keyspace = ${wiretransfer.cassandra.keyspace}
lagom.persistence.read-side.cassandra.keyspace = ${wiretransfer.cassandra.keyspace}

cassandra-query-journal {
  eventual-consistency-delay = 200ms
  delayed-event-timeout = 30s
}

What’s missing here is any additional connectivity information, such as IP addresses or ports for Cassandra. This is because Lagom handles Cassandra connectivity during development by default. However, when we move our microservices to Kubernetes, Lagom will need to know how to connect to a Cassandra cluster.

We’ll need to create a new file, application.prod.conf, to include instructions to use Cassandra contact points.

application.prod.conf

cassandra.default {
  contact-points = [${?CASSANDRA_CONTACT_POINT}]
  session-provider = akka.persistence.cassandra.ConfigSessionProvider
}
cassandra-journal {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}
cassandra-snapshot-store {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}
lagom.persistence.read-side.cassandra {
  contact-points = ${cassandra.default.contact-points}
  session-provider = ${cassandra.default.session-provider}
}

You can learn more about contact points.

The application.prod.conf file will be picked up by Kubernetes when executing the Pod at runtime through Kubernetes configuration, which we will explore later.
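
Typically this selection is done by passing a JVM system property through the deployment’s environment. Here is a minimal sketch, assuming the container’s start script honours JAVA_OPTS; check bff-deployment.yaml and its siblings in the repo for the exact mechanism:

env:
  - name: JAVA_OPTS
    value: "-Dconfig.resource=application.prod.conf"   # tell Play/Lagom to load the production config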

Kafka

By default, Lagom uses the service locator to look up bootstrap servers for the Kafka client. This can be overridden to specify a list of Kafka brokers in the service configuration, which is required before moving to production.

We used Strimzi for deploying Kafka to Kubernetes. Strimzi provides a series of operators for deploying and managing a Kafka cluster.

Let’s explore a small configuration we need to capture in a production configuration file:

application.prod.conf

lagom.broker.kafka {
  service-name = ""
  brokers = ${?KAFKA_BROKERS_SERVICE_URL}
}

We need to provide the URLs of the Kafka brokers. Rather than hardcode this, we’ll use an environment variable that Kubernetes can provide. To make sure the environment variable is picked up, we also need to set the service-name configuration to an empty string. If service-name is an empty string, Lagom skips the service locator lookup that it performs during development and uses the brokers configuration instead, which is what we want in production.

This type of ‘magic string’ configuration is not ideal. Explicit flags are much easier to reason about. When configuring Lagom for production, be on the lookout for these types of magic strings!

Let’s now cover how to package and deploy our reactive microservices, starting with the BFF. Then we will expand on Kafka, Cassandra, and Akka Cluster configuration.

Package and deploy Reactive Stock Trader

The Lagom development experience is a snap thanks to the automatic spin-up of dependencies, such as Cassandra and Kafka, completely integrated with sbt. When deploying Reactive Stock Trader to a more production-like environment, we need to think about how to configure and connect with a number of dependencies that were made transparent during development.

We have already provided a few basic configuration changes required for a real deployment. Now we will cover the remainder of the steps involved to get our services up and running on Minikube.

Before continuing, open up and follow these instructions, which will cover deploying the BFF to Minikube.

When you are done, you should be able to open up a browser, visit http://reactivestocktrader.com/healthz, and see the health check response of OK.

You should also be able to see the BFF and other pods running successfully after executing:

kubectl get pods --all-namespaces

The output should look similar to:

NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
cassandra     cassandra-0                                              1/1     Running   4          2d17h
default       reactivestock-bff-fc4894687-kqgj5                        1/1     Running   2          46h
kafka         reactivestock-strimzi-entity-operator-75497487dd-gxml8   3/3     Running   22         2d17h
kafka         reactivestock-strimzi-kafka-0                            2/2     Running   10         2d17h
kafka         reactivestock-strimzi-zookeeper-0                        2/2     Running   10         2d17h
kafka         strimzi-cluster-operator-5658bb5c6-l6pbn                 1/1     Running   9          2d17h
kube-system   coredns-fb8b8dccf-4kjnq                                  1/1     Running   8          2d17h
kube-system   coredns-fb8b8dccf-wltcb                                  1/1     Running   8          2d17h
kube-system   default-http-backend-6864bbb7db-j65zq                    1/1     Running   4          2d17h
kube-system   etcd-minikube                                            1/1     Running   4          2d17h
kube-system   kube-addon-manager-minikube                              1/1     Running   4          2d17h
kube-system   kube-apiserver-minikube                                  1/1     Running   4          2d17h
kube-system   kube-controller-manager-minikube                         1/1     Running   5          2d17h
kube-system   kube-proxy-qw4dg                                         1/1     Running   4          2d17h
kube-system   kube-scheduler-minikube                                  1/1     Running   5          2d17h
kube-system   kubernetes-dashboard-d7c9687c7-d29ln                     1/1     Running   6          2d17h
kube-system   nginx-ingress-controller-586cdc477c-vmjql                1/1     Running   7          2d17h
kube-system   storage-provisioner                                      1/1     Running   8          2d17h
kube-system   tiller-deploy-66b7dd976-hjl84                            1/1     Running   4          2d17h

Congratulations! Let’s cover in a little more detail how we accomplished this using sbt, Lagom, and Kubernetes.

Packaging for Docker

One of Lagom’s superpowers is sbt-native-packager, which lets us package each Lagom service in a Docker container without writing a single Dockerfile by hand. For anyone who has suffered through manually creating and maintaining Dockerfiles, this is a big deal.
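
If you’re curious what sbt-native-packager generates on our behalf, you can stage the Docker build without publishing it and inspect the result (the staging path may vary slightly by plugin version):

sbt "bff/docker:stage"
cat bff/target/docker/stage/Dockerfile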

Let’s move from theory to practice and begin packaging Reactive Stock Trader for production.

If we open up the core build.sbt (in the root directory of reactive-stock-trader), we can see the definition of the BFF, which we just deployed.

Here’s a link to build.sbt in the GitHub repo. And here’s the section defining BFF:

lazy val bff = (project in file("bff"))
  .settings(commonSettings)
  .enablePlugins(PlayJava, LagomPlay)
  .disablePlugins(PlayLayoutPlugin)
  .dependsOn(
    utils,
    portfolioApi,
    brokerApi,
    wireTransferApi
  )
  .settings(
    name := "reactivestock-bff", // 1
    version := "0.1-SNAPSHOT", // 2
    libraryDependencies ++= Seq(
      lagomJavadslClient
    ),
    PlayKeys.playMonitoredFiles ++= (sourceDirectories in(Compile, TwirlKeys.compileTemplates)).value,
    // EclipseKeys.createSrc := EclipseCreateSrc.ValueSet(EclipseCreateSrc.ManagedClasses, EclipseCreateSrc.ManagedResources)
    EclipseKeys.preTasks := Seq(compile in Compile)
  )
  .settings(lagomServiceHttpPort := 9100) // 3
  .settings(dockerBaseImage := "openjdk:8-slim") // 4

Above, we can see that some of these settings directly correspond to what we see in Kubernetes. We can define the name of our Docker image by explicitly defining its name prefix (1) and version number (2). We can also control which port each service launches on during development (when running sbt runAll) (3). Finally, we need to specify a base image for Docker (4). That’s it!
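
When you outgrow Minikube’s local registry, a few more sbt-native-packager settings become useful. These are illustrative additions (including the registry hostname), not part of the Reactive Stock Trader build:

.settings(
  dockerRepository := Some("registry.example.com/reactivestock"), // push to a remote registry instead of the local daemon
  dockerExposedPorts ++= Seq(9100),                               // record the service port in the image metadata
  dockerUpdateLatest := true                                      // also tag each publish as :latest
)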

Now, deploying a Lagom service to Minikube is a simple three-step process.

  1. Set Minikube as your Docker registry:

    eval $(minikube docker-env)

  2. Build the Docker container using sbt and publish to your local Docker registry:

    sbt "bff/docker:publishLocal"

  3. Apply the deployment of the container to Minikube from the Reactive Stock Trader root folder:

    kubectl apply -f bff/deploy/kubernetes

The final kubectl apply command requires Kubernetes configurations to be properly specified, which we will explore next.

Kubernetes configuration

Each reactive-stock-trader service, including the BFF service, has a deploy/kubernetes folder, which contains all of the configurations we’ll need to get each service onto Kubernetes once we’ve built, packaged, and published each Docker container to our local registry.

Let’s cover each configuration file of the BFF service.

bff-config.yaml

First, we have bff-config.yaml, which specifies some of our substitutions for service discovery.

Here is a link to bff-config.yaml in the GitHub repo.

apiVersion: v1
kind: ConfigMap
metadata:
  name: bff-config
data:
  BROKER_SERVICE_URL: "http://reactivestock-broker-svc.default:9010"
  PORTFOLIO_SERVICE_URL: "http://reactivestock-portfolio-svc.default:9000"
  WIRETRANSFER_SERVICE_URL: "http://reactivestock-wiretransfer-svc.default:9020"

The key configurations in bff-config.yaml are the DNS values for each service that the BFF points to. These will be used by the BFF application.conf so that Lagom can properly initialize service discovery.

If you recall, we have service endpoints configured in application.conf. The ? character in a substitution such as ${?BROKER_SERVICE_URL} means that the environment variable will be used if it exists; otherwise, Lagom falls back to its default behavior. In development, Lagom provides service discovery out of the box; but in production, we’ll use the environment variables that we defined above in bff-config.yaml.

Here is a link to application.conf in the git repo. And here’s the section using the environment variables:

lagom.services {
  wiretransfer = ${?WIRETRANSFER_SERVICE_URL}
  broker = ${?BROKER_SERVICE_URL}
  portfolio = ${?PORTFOLIO_SERVICE_URL}
}

bff-deployment.yaml

This is the Kubernetes Deployment configuration that determines how the pod will be managed at runtime. It contains the name of our Docker image, port settings, and health check endpoints.

It’s a rather large file, so rather than going through it line by line, you can parse through it yourself in its GitHub repo: bff-deployment.yaml.

The key settings are listed below.

The following section specifies the number of replicas, which is the number of BFF pods we would like running at any given time. Because we are deploying to Minikube, we will specify 1 replica, but in a production environment we may specify dozens of replicas.

   spec:
     replicas: 1
     selector:
       matchLabels:
         app: reactivestock-bff
     template:
       metadata:
         labels:
           app: reactivestock-bff
       spec:
         containers:
         - name: reactivestock-bff
           image: reactivestock-bff:0.1-SNAPSHOT
           ports:
           - name: bff-http
             containerPort: 9200
           envFrom:
             - configMapRef:
                 name: bff-config

bff-service.yaml

This is our service selector. A Service in Kubernetes is essentially a definition of a microservice. The configuration file lets us specify our mapping of IPs and ports. In our case, we wish to access the BFF on port 80, which will be mapped to the port specified in our deployment and the IP address of the Kubernetes cluster. Parse through the bff-service.yaml file to understand the BFF Service configuration.
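
Here is a sketch of roughly what that Service contains, based on the names and ports used elsewhere in this unit; refer to the repo for the authoritative file:

apiVersion: v1
kind: Service
metadata:
  name: reactivestock-bff-svc
spec:
  type: ClusterIP
  selector:
    app: reactivestock-bff
  ports:
    - port: 80          # port other pods (and the ingress) use to reach the BFF
      targetPort: 9200  # containerPort defined in bff-deployment.yaml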

reactivestock-ingress.yaml

Finally, we need to publicly expose a port; otherwise, we will not be able to connect to Reactive Stock Trader from outside the Kubernetes cluster. We do this with ingress files, as demonstrated below.

Here is a link to reactivestock-ingress.yaml in the GitHub repo.

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: reactivestock-bff
    spec:
      rules:
      - host: reactivestocktrader.com
        http:
          paths:
          - path: /
            backend:
              serviceName: reactivestock-bff-svc
              servicePort: 80

The main piece of configuration here is to map an external port (80) to an internal port, which is defined in the reactivestock-bff-svc configuration, defined in bff-service.yaml. We should only specify an ingress file for our BFF, as this is a key aspect of the BFF as a gateway pattern. Rather than expose our individual microservices to the world, they will be completely hidden from the public behind this BFF gateway, and access can be tightly controlled using ingress and egress settings.
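
For the host rule above to resolve on a local machine, the Minikube cluster’s IP needs to be mapped to reactivestocktrader.com. The Minikube instructions linked earlier cover this, but it amounts to something like:

echo "$(minikube ip) reactivestocktrader.com" | sudo tee -a /etc/hosts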

Deploy the remaining microservices

If we were to launch Reactive Stock Trader right now, without deploying the other microservices, we would receive errors. To diagnose errors, let’s get a list of our running pods:

kubectl get pods --all-namespaces

This will give us a list, including the reactivestock-bff... pod:

NAMESPACE     NAME                                                     READY   STATUS    RESTARTS   AGE
cassandra     cassandra-0                                              1/1     Running   4          2d20h
default       reactivestock-bff-fc4894687-kqgj5                        1/1     Running   2          2d1h
...

We can then use kubectl to get the logs from the Play framework per pod, which makes diagnosing any errors fairly straightforward.

kubectl logs reactivestock-bff-fc4894687-kqgj5

If we grab the logs for the BFF pod before we deploy other microservices, or after a microservice crashes, we will see something like the following (or you may instead receive service location errors):

! @7cdn4ko3b - Internal server error, for (GET) [/api/portfolio] ->

play.api.UnexpectedException: Unexpected exception[CompletionException: java.net.ConnectException: Connection refused: reactivestock-portfolio-svc.default/10.105.151.27:9000]
    at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:347)
    at play.api.http.HttpErrorHandlerExceptions.throwableToUsefulException(HttpErrorHandler.scala)
    at play.http.DefaultHttpErrorHandler.throwableToUsefulException(DefaultHttpErrorHandler.java:227)
    at play.http.DefaultHttpErrorHandler.onServerError(DefaultHttpErrorHandler.java:182)
    …

Let’s deploy the other three services to fix this issue. Simply repeat the steps you did to deploy BFF. Remember, if you are using a ‘fresh’ terminal, set Minikube as your Docker registry before proceeding to avoid errors:

eval $(minikube docker-env)

  1. Build the Docker container using sbt and publish to your local Docker registry. From the root reactive-stock-trader project folder execute the sbt publishLocal command for each of the three services:

     sbt "wireTransferImpl/docker:publishLocal"
     ...
     [info] Built image reactivestock-wiretransfer with tags [0.1-SNAPSHOT]
     [success] Total time: 58 s, completed 18-Jul-2019 12:13:05 PM
    
     sbt "portfolioImpl/docker:publishLocal"
     ...
     [info] Built image reactivestock-portfolio with tags [0.1-SNAPSHOT]
     [success] Total time: 27 s, completed 18-Jul-2019 12:17:40 PM
    
     sbt "brokerImpl/docker:publishLocal"
     ...
     [info] Built image reactivestock-broker with tags [0.1-SNAPSHOT]
     [success] Total time: 17 s, completed 18-Jul-2019 12:20:17 PM
    

    Verify that all three services are published to Minikube’s Docker registry:

    docker images

    You should see the three services in the returned list:

     REPOSITORY                     TAG                  IMAGE ID            CREATED             SIZE
    reactivestock-broker           0.1-SNAPSHOT         0db2e75e6cd4        2 minutes ago       370MB
    reactivestock-portfolio        0.1-SNAPSHOT         1cd2b977c00d        4 minutes ago       370MB
    reactivestock-wiretransfer     0.1-SNAPSHOT         65feedcff7eb        9 minutes ago       370MB
    ...
    
  2. Once each sbt publishLocal is complete, we then deploy the microservices to Kubernetes. kubectl allows you to specify a folder instead of an individual yaml file, which makes multi-step configurations per microservice (deploying multiple YAML files) simple.

    Execute the following:

     cd wire-transfer-impl
     kubectl apply -f deploy/kubernetes
     cd ..
    
     cd portfolio-impl
     kubectl apply -f deploy/kubernetes
     cd ..
    
     cd broker-impl
     kubectl apply -f deploy/kubernetes
     cd ..
    

    This will deploy the rest of the microservices! If you’re interested, you can inspect the individual microservice configurations by looking at application.prod.conf per service, and also the Kubernetes YAML files in the deploy folder.

Let’s explore a few of the configuration options that we needed to specify for the microservices. In order to demonstrate this, we’ll explore the wire-transfer-impl configuration.

wiretransfer-config.yaml

In this file, we’ve defined a few requirements to point to Cassandra and Kafka.

apiVersion: v1
kind: ConfigMap
metadata:
  name: reactivestock-wiretransfer-config
data:
  CASSANDRA_CONTACT_POINT: "cassandra.cassandra"
  KAFKA_BROKERS_SERVICE_URL: "reactivestock-strimzi-kafka-bootstrap.kafka:9092"
  ALLOWED_HOST: "reactivestock-wiretransfer-svc.default"
  PORTFOLIO_SERVICE_URL: "http://reactivestock-portfolio-svc.default:9000"

  • The Cassandra contact point hooks up the wiretransfer pod to Cassandra, which is already installed in our Kubernetes cluster.
  • We also provide a Kafka Brokers service URL, which points to Strimzi.
  • Finally, the wire transfer service requires a connection to the portfolio service for transfers, so we must provide a URL (including port).
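
Following the same pattern we saw for the BFF, the portfolio URL is consumed through an optional environment variable substitution in the service’s configuration. This is a sketch based on that earlier pattern; check wire-transfer-impl in the repo for the exact file and keys:

lagom.services {
  portfolio = ${?PORTFOLIO_SERVICE_URL}
}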

These changes all correspond to wire-transfer-impl configuration settings in its application.prod.conf, which we will explore shortly.

wiretransfer-deployment.yaml

The last dependency that we need to account for is Akka Cluster. In wiretransfer-deployment.yaml, the pod’s app label is passed into the container as the AKKA_CLUSTER_BOOTSTRAP_SERVICE_NAME environment variable, which cluster bootstrap combines with the pod-label-selector (app=%s) in the discovery configuration below to find the peer pods that should form a cluster:

   - name: AKKA_CLUSTER_BOOTSTRAP_SERVICE_NAME
     valueFrom:
       fieldRef:
         apiVersion: v1
         fieldPath: "metadata.labels['app']"

application.prod.conf

In our production configuration, we need to set up Akka Management and Akka Discovery.

akka.management.http.port = 8557
akka.management.http.bind-hostname = 0.0.0.0
akka.management.http.bind-port = 8557
akka.management.http.bind-hostname = ${?HTTP_BIND_ADDRESS}
akka.management.http.hostname = ${?HOST_ADDRESS}

# Akka Remote will also use the host ip for the bind-hostname
akka.remote.netty.tcp.hostname = ${?HOST_ADDRESS}

akka.discovery {
  method = kubernetes-api
  kubernetes-api {
      pod-namespace = "default"
      pod-label-selector = "app=%s"
      pod-port-name = "management"
  }
}

akka.management {
  cluster.bootstrap {
      contact-point-discovery {
        discovery-method = kubernetes-api
        required-contact-point-nr = 1
        port-name = "management"
      }
  }
}

# Shut down if we have not joined a cluster after five minutes.
akka.cluster.shutdown-after-unsuccessful-join-seed-nodes = 300s

Akka Management is a suite of management extensions that help configure and operate Akka at runtime. Lagom takes care of this under the hood, but the Akka naming conventions leak into Lagom. In other words, there are some basic configurations that need to be in place for Lagom to form a cluster, which is necessary to use the PubSub API. Under the hood, Lagom implements this with Akka and Akka Management, so you will need to provide some of these configurations when moving from development to production. You can see the basic production configuration above.

Akka Discovery is used by Lagom to provide service discovery, specifically for cluster bootstrap. This enables the formation of an Akka Cluster across a number of Kubernetes pods on a number of nodes, without having to manually specify seed nodes.
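
Note that the kubernetes-api discovery method works by querying the Kubernetes API for pods, so the service account running the pods needs permission to list them. A minimal sketch of the RBAC that typically accompanies this setup is below; the names are illustrative, so check the deploy/kubernetes folder of each service in the repo for how this is handled there:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
subjects:
  - kind: ServiceAccount
    name: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader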

This now concludes the configuration portion of this unit. You should have a good handle on how to get your Lagom system ready for a Kubernetes deployment! Let’s discuss one key advantage of Kubernetes before wrapping up this unit: scaling.

Increasing the replication factor

By now, we should have all of our services successfully deployed, and if we execute the ‘get pods’ command, we should see all of our pods launched successfully.

kubectl get pods --namespace=default

We can use the --namespace=default switch to reduce the amount of noise in our listing, which will become even more important if we’re using Kubernetes to manage dozens of services.

NAME                                          READY   STATUS    RESTARTS   AGE
reactivestock-bff-fc4894687-kqgj5             1/1     Running   2          2d2h
reactivestock-broker-555df84fd6-cnkk2         1/1     Running   0          65m
reactivestock-portfolio-7749cc4c95-6zltf      1/1     Running   0          65m
reactivestock-wiretransfer-686d745dcb-n2d6w   1/1     Running   0          68m

Notice the 1/1 next to each pod. As noted earlier, the denominator indicates how many containers each pod has, and the numerator indicates how many of those containers are in a “ready” state. As we see above, each reactive-stock-trader pod has one container deployed. The containers are the BFF Play app and the Broker, Portfolio, and Wire Transfer Lagom microservices you recently deployed. Thus, 1/1 next to each pod indicates that they are all considered ‘ready’ by the Kubernetes cluster. Once Kubernetes considers a pod ready, it will begin to route user traffic to it.

Imagine a scenario in which our BFF gateway pod receives a significant amount of user traffic, much more than the broker microservice. We can handle this by increasing the replication factor of the BFF, but leave the rest of the microservices as they are.

Open up bff/deploy/kubernetes/bff-deployment.yaml and change replicas: 1 to replicas: 3. Then reapply the yaml with kubectl and get the list of pods again.
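
Concretely, assuming the Deployment is named reactivestock-bff, the reapply-and-verify step looks like this (kubectl scale is an alternative that skips editing the file, though the YAML on disk then no longer matches the live cluster):

kubectl apply -f bff/deploy/kubernetes/bff-deployment.yaml
kubectl get pods --namespace=default

# Alternative, without editing the file:
kubectl scale deployment reactivestock-bff --replicas=3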

We will now see the following:

NAME                                          READY   STATUS    RESTARTS   AGE
reactivestock-bff-fc4894687-54p8k             1/1     Running   0          26s
reactivestock-bff-fc4894687-7vb96             1/1     Running   0          26s
reactivestock-bff-fc4894687-kqgj5             1/1     Running   2          2d2h
reactivestock-broker-555df84fd6-cnkk2         1/1     Running   0          68m
reactivestock-portfolio-7749cc4c95-6zltf      1/1     Running   0          69m
reactivestock-wiretransfer-686d745dcb-n2d6w   1/1     Running   0          71m

We can see that the BFF is replicated three times (with unique identifiers 54p8k, 7vb96, and kqgj5). There are now three reactivestock-bff-... pods for the BFF service.

If you recall, in our service definition (bff-service.yaml), we specified a ClusterIP type of service. Under the hood, ClusterIP is implemented by kube-proxy, which by default distributes requests (relatively) evenly among the pods backing the given service. The way to look at it is that:

  • We have a BFF service (reactivestock-bff-svc)
  • The BFF is backed by three pods (replicas: 3)
  • ClusterIP will more or less evenly distribute traffic between those three pods

This is only a small taste of what Kubernetes offers.

This unit has focused on open source Kubernetes. Some challenges exist in bare-bones open source Kubernetes. For instance, running even a single replica of the BFF on our 8GB cluster means that we’re almost completely out of resources. Scaling this up to three replicas means that we would need to provision new nodes. In the case of Minikube, we need to delete our entire local cluster and recreate it with a higher memory limit. This is painful!

Commercial, enterprise-grade Kubernetes platforms, such as OpenShift, include features such as auto-scaling, where the platform can provision new nodes to expand the capacity of the cluster when it runs low, and then increase the replication factor to scale out according to traffic. Even better, these platforms are able to decommission unused resources and scale in services to run as efficiently as possible. Imagine your application experiencing a spike of traffic, the cluster automatically scaling out to meet the demand, and then scaling back in when the demand subsides. Gone are the days of guessing your expected traffic ahead of a deployment in complex (and always incorrect) usage-modeling exercises.

Conclusion

We’ve learned a lot of new concepts in this unit! Deploying a reactive system to a production environment can feel like an overwhelming process, but modern container-management platforms like open source Kubernetes make it easier (but not easy). In my opinion, over the coming years, reactive-style systems will become the de facto standard on Kubernetes cluster infrastructure, with Kubernetes DevOps and operations becoming more and more accessible. Indeed, commercial, enterprise-grade Kubernetes platforms such as OpenShift have already made enormous strides in increasing Kubernetes usability and enterprise functionality.

The next unit will discuss opinions on operationalizing a system like Reactive Stock Trader far beyond the limits of what we’ve shown in this unit, and tie together all of the concepts we’ve learned through the entire series.

Kevin Webber