Lagom is an opinionated microservices framework for Java and Scala developers that has been designed to help you build, deploy, and run reactive systems with confidence. Lagom is part of Reactive Platform from IBM, offered in partnership with Lightbend to bring a modern message-driven runtime to Java and Scala.

You might assume that, being a microservices framework, Lagom would be a perfect fit for running in containers on a modern Kubernetes infrastructure. When tested, that assumption proves correct, as you’ll see in this tutorial.

The goal of this tutorial is to demonstrate that you can take an application developed within the Lagom framework and deploy it to a modern container-based Kubernetes infrastructure.

Getting started

To run a Lagom application on Kubernetes we need two things (which should be fairly obvious): a Lagom app and a Kubernetes cluster. For the app, we’ll use an example from the Lightbend team called Chirper. For Kubernetes, we can use Minikube or IBM’s Bluemix Container Service, depending on whether you want to run locally or in the cloud.

Chirper is a Twitter-like application written with the Lagom framework that exercises advanced features like Akka clustering and service discovery, and it takes a number of Kubernetes resources to support those features correctly. Fortunately, the folks who maintain the project provide Kubernetes manifests that take advantage of resources such as StatefulSet, Service, and Ingress. You can dig into the Helm charts and Kubernetes manifests under deploy/kubernetes/ after we clone the Chirper git repo in the next steps.

Chirper

Install the following programs on your system (all of them are used in the steps below):

- Git
- Docker
- Maven
- kubectl
- Helm
- The Bluemix CLI (bx)

When you’ve finished installing, clone the Chirper source code locally and check out the stable branch:

$ git clone https://github.com/IBM/activator-lagom-java-chirper chirper
$ cd chirper
$ git checkout stable

Deploy Chirper to Bluemix Container Service (paid)

A paid Bluemix Container Service cluster has at least three worker nodes, which gives us the flexibility to deploy Chirper in a more production-like way, with load balancing and high availability.

Create Bluemix Container Service (paid) cluster

Use the Bluemix UI to create a Bluemix Container Service cluster called chirper.

Note: You can use the command line interface (CLI), but you have to do some detective work to determine what values to use for machine type, location, vlan, and so on. For the sake of this tutorial, it’s much easier to use the web UI.

When the cluster has been created, we can switch to the Bluemix CLI and ensure that it's ready.

Note: For the sake of simplicity, this guide names the cluster "chirper"; you can name it anything you like, just be sure to adjust the commands in the instructions to reflect your cluster name.


$ bx cs cluster-config chirper
OK
The configuration for chirper was downloaded successfully. Export environment variables to start using Kubernetes.
export KUBECONFIG=/home/xxx/.bluemix/plugins/container-service/clusters/chirper/kube-config-dal10-chirper.yml

$ export KUBECONFIG=/home/xxx/.bluemix/plugins/container-service/clusters/chirper/kube-config-dal10-chirper.yml
$ kubectl get nodes
NAME             STATUS    AGE       VERSION
10.177.184.248   Ready     2d        v1.7.4-1+5471fb38912193
10.177.184.252   Ready     2d        v1.7.4-1+5471fb38912193
10.177.184.253   Ready     2d        v1.7.4-1+5471fb38912193
 

Note: Ensure that the Kubernetes version on the nodes is at least v1.7.4. If not, you should upgrade the cluster before proceeding.
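If you'd rather script this check than eyeball the kubectl get nodes output, something like the following works. This is just a sketch: version_ge is a helper defined here (not a kubectl feature), the sample VERSION string is taken from the output above, and it assumes a sort that supports version ordering (-V, as in GNU coreutils).

```shell
# Compare two dotted version strings; returns 0 (true) if $1 >= $2.
# Relies on `sort -V` for version-aware ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Sample VERSION value from `kubectl get nodes`; in a real check you
# would read this from the command's output instead.
raw="v1.7.4-1+5471fb38912193"
ver="${raw#v}"       # drop the leading "v"
ver="${ver%%-*}"     # drop the build suffix -> 1.7.4

if version_ge "$ver" "1.7.4"; then
  echo "node version $ver is new enough"
else
  echo "node version $ver is too old; upgrade the cluster first" >&2
fi
```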

Deploy Helm/Tiller

Use Helm to deploy Tiller to the cluster:

$ helm init --upgrade
$HELM_HOME has been configured at /home/pczarkowski/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!
 

Wait a few moments and then check that it installed correctly:

$ helm version
Client: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}

Create a Bluemix container registry

In production, we want to ensure that we own the availability and versions of the images that we'll run. To do so, we can use the Bluemix container registry.

Note: For the sake of simplicity, these examples use the namespace chirper; however, registry namespaces must be unique, so create your own uniquely named namespace and substitute it for chirper in the commands that follow.
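One simple way to come up with a unique namespace is to suffix "chirper" with a short random string. This is purely an illustration; any unique name of your choosing works just as well.

```shell
# Suffix "chirper" with 8 random hex characters to get a namespace
# that is unlikely to collide with anyone else's.
NAMESPACE="chirper-$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')"
echo "$NAMESPACE"
# e.g. chirper-3fa2b91c; then use it below:
#   bx cr namespace-add "$NAMESPACE"
```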

$ bx plugin install container-registry -r Bluemix
$ bx cr namespace-add chirper
Adding namespace 'chirper'...
Successfully added namespace 'chirper'
$ bx cr login                                      
Logging in to 'registry.ng.bluemix.net'...
Logged in to 'registry.ng.bluemix.net'.
OK

Build the Chirper Docker images

The next step is to build the Chirper containers. Maven supports building Docker images, so it's fairly straightforward:

$ mvn clean package docker:build
...
...
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:32 min
[INFO] Finished at: 2017-09-12T15:23:50Z
[INFO] Final Memory: 92M/801M
[INFO] ------------------------------------------------------------------------ 
$ docker images | grep chirper | grep "1\.0"
chirper/front-end                                     1.0-SNAPSHOT        bcdd958e3ab5        55 seconds ago       132MB
chirper/load-test-impl                                1.0-SNAPSHOT        9e46ef8b7443        About a minute ago   143MB
chirper/activity-stream-impl                          1.0-SNAPSHOT        b2610649d20f        About a minute ago   143MB
chirper/chirp-impl                                    1.0-SNAPSHOT        d90d06316151        2 minutes ago        143MB
chirper/friend-impl                                   1.0-SNAPSHOT        f6a9a1b0a900        2 minutes ago        143MB
 

Next, push the images to the Bluemix registry (update the first command to reflect your registry namespace):

$ export NAMESPACE=chirper
$ docker tag chirper/front-end:1.0-SNAPSHOT registry.ng.bluemix.net/$NAMESPACE/front-end:1.0-SNAPSHOT && \
  docker tag chirper/activity-stream-impl:1.0-SNAPSHOT registry.ng.bluemix.net/$NAMESPACE/activity-stream-impl:1.0-SNAPSHOT && \
  docker tag chirper/chirp-impl:1.0-SNAPSHOT registry.ng.bluemix.net/$NAMESPACE/chirp-impl:1.0-SNAPSHOT && \
  docker tag chirper/friend-impl:1.0-SNAPSHOT registry.ng.bluemix.net/$NAMESPACE/friend-impl:1.0-SNAPSHOT
$ docker push registry.ng.bluemix.net/$NAMESPACE/front-end:1.0-SNAPSHOT && \
  docker push registry.ng.bluemix.net/$NAMESPACE/activity-stream-impl:1.0-SNAPSHOT && \
  docker push registry.ng.bluemix.net/$NAMESPACE/chirp-impl:1.0-SNAPSHOT && \
  docker push registry.ng.bluemix.net/$NAMESPACE/friend-impl:1.0-SNAPSHOT
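Since the tag and push commands differ only in the service name, you can also loop over the four services. This is a sketch: DRY_RUN=echo just prints the commands so you can review them first; set it to an empty string to actually run them.

```shell
REGISTRY=registry.ng.bluemix.net
NAMESPACE=chirper        # replace with your unique registry namespace
TAG=1.0-SNAPSHOT
DRY_RUN=echo             # set DRY_RUN="" to actually tag and push

for svc in front-end activity-stream-impl chirp-impl friend-impl; do
  $DRY_RUN docker tag  "chirper/$svc:$TAG" "$REGISTRY/$NAMESPACE/$svc:$TAG"
  $DRY_RUN docker push "$REGISTRY/$NAMESPACE/$svc:$TAG"
done
```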
 

For simplicity, we'll use the community-provided images for the nginx and cassandra containers, but in a real production deployment you would likely want to tag and push those to your Bluemix registry as well.

Update values.yaml for Bluemix Container Service

Update the provided deploy/helm/examples/bluemix-container-service/values.yaml file to use our images. Do this for each of the Chirper services (only one service is shown below as an example):

  front_end:
    replicas: 2
    image:
      repo: registry.ng.bluemix.net/chirper/front-end
      tag: 1.0-SNAPSHOT
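For reference, the other three services follow the same pattern. The exact key names below are assumptions based on the front_end entry, so confirm them against the keys already present in values.yaml:

```yaml
  activity_stream_impl:
    image:
      repo: registry.ng.bluemix.net/chirper/activity-stream-impl
      tag: 1.0-SNAPSHOT
  chirp_impl:
    image:
      repo: registry.ng.bluemix.net/chirper/chirp-impl
      tag: 1.0-SNAPSHOT
  friend_impl:
    image:
      repo: registry.ng.bluemix.net/chirper/friend-impl
      tag: 1.0-SNAPSHOT
```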
 

Run bx cs cluster-get against your cluster to get your Ingress subdomain and Ingress secret:

$ bx cs cluster-get chirper | grep Ingress
Ingress subdomain:	chirper.us-south.containers.mybluemix.net
Ingress secret:		chirper
 

Update the provided deploy/helm/examples/bluemix-container-service/values.yaml file with our Ingress subdomain and Ingress secret:

chirper:
  ingress:
    host: chirper.us-south.containers.mybluemix.net
    tls:
      secret: chirper

Deploy Chirper

Switch to the Helm deploy directory and deploy:

$ cd deploy/helm
$ helm install -n chirper . --values examples/bluemix-container-service/values.yaml
NAME:   chirper
LAST DEPLOYED: Tue Sep 26 13:42:33 2017
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Service
NAME                           CLUSTER-IP    EXTERNAL-IP  PORT(S)                                       AGE
chirper-cassandra              None                 7000/TCP,7001/TCP,7199/TCP,9042/TCP,9160/TCP  2s
activityservice-akka-remoting  10.10.10.184         2551/TCP                                      2s
activityservice                None                 9000/TCP                                      2s
chirpservice-akka-remoting     10.10.10.244         2551/TCP                                      2s
chirpservice                   None                 9000/TCP                                      2s
friendservice-akka-remoting    10.10.10.107         2551/TCP                                      2s
friendservice                  None                 9000/TCP                                      2s
web                            10.10.10.68          9000/TCP                                      2s
nginx-ingress                  10.10.10.166         80/TCP                                        2s
nginx-default-backend          10.10.10.7           80/TCP                                        2s

==> v1beta1/Deployment
NAME                      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
nginx-default-backend     2        2        2           0          2s
nginx-ingress-controller  2        2        2           0          2s

==> v1beta1/StatefulSet
NAME               DESIRED  CURRENT  AGE
chirper-cassandra  3        1        2s
activityservice    2        1        2s
chirpservice       2        1        2s
friendservice      2        1        2s
web                1        1        2s

==> v1beta1/Ingress
NAME             HOSTS                                          ADDRESS  PORTS  AGE
chirper-ingress  chirper-new.us-south.containers.mybluemix.net  80, 443  2s
 

You can watch the pods being created with kubectl get pods -w, as shown below. When all pods show as "Running", press CTRL-C to return to your prompt:

Note: Cassandra may take a while to start, as it has a fairly involved bootstrap process to establish its cluster.

$ kubectl get pods -w
NAME                                        READY     STATUS    RESTARTS   AGE
activityservice-0                           1/1       Running   0          51s
chirper-cassandra-0                         0/1       Running   0          51s
chirpservice-0                              1/1       Running   0          51s
friendservice-0                             1/1       Running   0          51s
nginx-default-backend-1114943714-prk59      1/1       Running   0          51s
nginx-ingress-controller-2689666257-5bf49   1/1       Running   0          51s
web-0                                       1/1       Running   0          51s
chirper-cassandra-0   1/1       Running   0         2m
chirper-cassandra-1   0/1       Pending   0         0s
chirper-cassandra-1   0/1       Pending   0         0s
chirper-cassandra-1   0/1       ContainerCreating   0         0s
chirper-cassandra-1   0/1       Running   0         1s
^C
$
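If you'd rather script the wait than watch interactively, a small polling helper like the following can block until every pod reports ready. This is a sketch: it assumes kubectl is on your PATH, that the Chirper pods are in the current namespace, and it will also report success if no pods exist yet.

```shell
# Poll `kubectl get pods` until every pod is Running with all of its
# containers ready (READY column like "1/1"), checking every 10s.
wait_for_pods() {
  until kubectl get pods --no-headers 2>/dev/null |
        awk '{split($2, r, "/"); if (r[1] != r[2] || $3 != "Running") bad=1}
             END {exit bad}'; do
    echo "still waiting for pods..."
    sleep 10
  done
  echo "all pods are Running and ready"
}

# Usage: wait_for_pods
```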
 

When the deployment finishes, you should be able to access Chirper in your browser. Because we're using Bluemix Container Service, we get a real URL as well as TLS offloading!

Paste your Ingress URL from above into your browser (http and https both work) and you're good to go.

Log in and make some "chirps":

Cleanup

You can uninstall the Chirper app with Helm like this:

$ helm del chirper --purge
