Lagom is an opinionated microservices framework for Java and Scala developers that has been designed to help you build, deploy, and run reactive systems with confidence. Lagom is part of Reactive Platform from IBM, offered in partnership with Lightbend to bring a modern message-driven runtime to Java and Scala.

You might assume that, as a microservices framework, Lagom would be a perfect fit for running in containers on a modern Kubernetes infrastructure. As you’ll see in this tutorial, that assumption holds up.

The goal of this tutorial is to demonstrate that you can take an application developed within the Lagom framework and deploy it to a modern container-based Kubernetes infrastructure.

Getting started

To run a Lagom application on Kubernetes we need two things (which should be fairly obvious): a Lagom app and Kubernetes. For the app, we’ll use an example from the Lightbend team called Chirper. For Kubernetes, we can use Minikube or IBM’s Bluemix Container Service, depending on whether you want to run it locally or not.

Chirper is a Twitter-like application written with the Lagom framework that uses advanced features like Akka clustering and service discovery, and it takes a number of Kubernetes resources to wire those features up correctly. Fortunately, the folks who maintain this project provide Kubernetes manifests that take advantage of resources such as StatefulSet, Service, and Ingress. You can dig into the Helm charts and Kubernetes manifests under deploy/kubernetes/ after we clone the Chirper git repo in the next steps.
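To give a flavor of what those manifests contain, here is a hedged sketch of a Service for Akka remoting. The selector and labels are assumptions for illustration; the service name and port match the helm install output shown later in this tutorial, and the real manifests live under deploy/kubernetes/.

```yaml
# Illustrative sketch only -- not the project's actual manifest.
apiVersion: v1
kind: Service
metadata:
  name: friendservice-akka-remoting  # name as seen in the helm output
spec:
  selector:
    app: friendservice               # assumed label; check the real chart
  ports:
    - name: akka-remote
      port: 2551                     # Akka remoting port used for clustering
      protocol: TCP
```

A Service like this gives each Akka cluster member a stable way to reach its peers, which is what makes clustering workable on Kubernetes.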

Chirper

Install the following programs on your system: git, Docker, kubectl, Helm, and Maven (with a JDK). Each of these is used in the steps that follow.

When you’ve finished installing, clone the Chirper source code locally and check out the stable branch:

$ git clone https://github.com/IBM/activator-lagom-java-chirper chirper
$ cd chirper
$ git checkout stable

IBM Cloud Private

Tools like Minikube and services like Bluemix Container Service are fantastic; however, you might not want to run your workloads on your laptop (I HOPE!), and sometimes you need to run your applications in your own data center. IBM Cloud Private (ICP) is a carefully built and curated Kubernetes installation that you can run anywhere.

Note: If you don’t have access to an IBM Cloud Private cluster, you can install one by following the instructions in the GitHub repo.

Configure kubectl using the instructions provided by your ICP cluster’s web interface. (Click Admin in top right corner and select Configure client.)


To configure the client, enter the following commands:

$ kubectl config set-cluster cfc --server=https://169.46.198.xxx:8001 --insecure-skip-tls-verify=true
$ kubectl config set-context cfc --cluster=cfc
$ kubectl config set-credentials user --token=eyJhbGciOiJSUzI1NXXXSSSS
$ kubectl config set-context cfc --user=user --namespace=default
$ kubectl config use-context cfc
 

Check that you can interact with your ICP cluster using kubectl:

$ kubectl get nodes
NAME             STATUS    AGE       VERSION
169.46.198.XXX   Ready     23h       v1.7.3-7+154699da4767fd
169.46.198.XXX   Ready     23h       v1.7.3-7+154699da4767fd
169.46.198.XXX   Ready     23h       v1.7.3-7+154699da4767fd

Deploy Helm Tiller

Use Helm to deploy Tiller (the Helm server-side component) to your ICP cluster:

$ helm init --upgrade
$HELM_HOME has been configured at /home/pczarkowski/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!
 

Wait a few moments and then check that it installed correctly:

$ helm version
Client: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.6.1", GitCommit:"bbc1f71dc03afc5f00c6ac84b9308f8ecb4f39ac", GitTreeState:"clean"}

Create the IBM Cloud Private Docker registry namespace

To create a new Docker registry namespace named chirper, click the menu at the top left corner and select System > Organization > Namespaces > New Namespace.

Ensure that you have your local machine set up to communicate safely with your ICP cluster.

You also need to ensure that you have DNS set up properly for the cluster. If you don’t, you can fake it by adding the following to your /etc/hosts file:

IP-ADDRESS-OF-MASTER mycluster.icp
 

You also need to tell your local Docker to trust the CA certificate that the registry’s TLS certificate was signed with. You can find the cert on your master node at /etc/docker/certs.d/mycluster.icp:8500/ca.crt; copy it to the same path on your local machine.

You should now be able to log in using the same user/password combination that you use to log in to ICP:

$  docker login https://mycluster.icp:8500
Username: admin
Password: 
Login Succeeded
 

You also need to create a Kubernetes secret that we can use later:

$ kubectl create secret docker-registry regsecret --docker-server=mycluster.icp:8500 \
  --docker-username=admin --docker-password= --docker-email="none@here.com"
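The secret matters at image-pull time: Kubernetes uses it to authenticate against the private registry. As a hedged sketch (the project’s actual pod specs are generated by its Helm charts), a pod would reference it like this:

```yaml
# Sketch: referencing the regsecret created above so the kubelet can
# authenticate against mycluster.icp:8500 when pulling images.
apiVersion: v1
kind: Pod
metadata:
  name: front-end-example  # hypothetical name, for illustration only
spec:
  containers:
    - name: front-end
      image: mycluster.icp:8500/chirper/front-end:1.0-SNAPSHOT
  imagePullSecrets:
    - name: regsecret      # must match the secret name created above
```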

Build Chirper with Maven

The project’s Maven build is set up to produce Docker images, so building the application should be as simple as running the following:

$ mvn clean package docker:build
...
...
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:32 min
[INFO] Finished at: 2017-09-12T15:23:50Z
[INFO] Final Memory: 92M/801M
[INFO] ------------------------------------------------------------------------ 

$ docker images | grep chirper | grep "1\.0"
chirper/front-end                                     1.0-SNAPSHOT        bcdd958e3ab5        55 seconds ago       132MB
chirper/load-test-impl                                1.0-SNAPSHOT        9e46ef8b7443        About a minute ago   143MB
chirper/activity-stream-impl                          1.0-SNAPSHOT        b2610649d20f        About a minute ago   143MB
chirper/chirp-impl                                    1.0-SNAPSHOT        d90d06316151        2 minutes ago        143MB
chirper/friend-impl                                   1.0-SNAPSHOT        f6a9a1b0a900        2 minutes ago        143MB
 

Next, push the images to the ICP registry:

$ docker tag chirper/front-end:1.0-SNAPSHOT mycluster.icp:8500/chirper/front-end:1.0-SNAPSHOT && \
  docker tag chirper/activity-stream-impl:1.0-SNAPSHOT mycluster.icp:8500/chirper/activity-stream-impl:1.0-SNAPSHOT && \
  docker tag chirper/chirp-impl:1.0-SNAPSHOT mycluster.icp:8500/chirper/chirp-impl:1.0-SNAPSHOT && \
  docker tag chirper/friend-impl:1.0-SNAPSHOT mycluster.icp:8500/chirper/friend-impl:1.0-SNAPSHOT

$ docker push mycluster.icp:8500/chirper/front-end:1.0-SNAPSHOT && \
  docker push mycluster.icp:8500/chirper/activity-stream-impl:1.0-SNAPSHOT && \
  docker push mycluster.icp:8500/chirper/chirp-impl:1.0-SNAPSHOT && \
  docker push mycluster.icp:8500/chirper/friend-impl:1.0-SNAPSHOT

Deploy Chirper to IBM Cloud Private

The provided Helm charts make deploying Chirper to ICP a fairly simple task. You can tweak some values in deploy/helm/values.yaml, but the defaults should deploy cleanly as-is.

Before deploying Chirper, we need to tell Helm to fetch any dependencies specified in deploy/helm/dependencies.yaml (in this case, the Cassandra chart):

$ cd deploy/helm
$ helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com
$ helm dependency update
 

You can now deploy Chirper using Helm:

$ helm install -n chirper . --values examples/ibm-cloud-private/values.yaml
NAME:   chirper
LAST DEPLOYED: Tue Sep 26 17:05:37 2017
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/Ingress
NAME             HOSTS  ADDRESS         PORTS  AGE
chirper-ingress  *      169.46.198.XXX  80     3s

==> v1/Service
NAME                           CLUSTER-IP     EXTERNAL-IP  PORT(S)                                       AGE
chirper-cassandra              None           <none>       7000/TCP,7001/TCP,7199/TCP,9042/TCP,9160/TCP  4s
activityservice-akka-remoting  192.168.0.213  <none>       2551/TCP                                      4s
activityservice                None           <none>       9000/TCP                                      4s
chirpservice-akka-remoting     192.168.0.211  <none>       2551/TCP                                      4s
chirpservice                   None           <none>       9000/TCP                                      4s
friendservice-akka-remoting    192.168.0.35   <none>       2551/TCP                                      4s
friendservice                  None           <none>       9000/TCP                                      4s
web                            192.168.0.144  <none>       9000/TCP                                      4s
nginx-ingress                  192.168.0.69   <nodes>      80:30763/TCP                                  4s
nginx-default-backend          192.168.0.80   <none>       80/TCP                                        4s

==> v1beta1/Deployment
NAME                      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
nginx-default-backend     2        2        2           0          4s
nginx-ingress-controller  2        2        2           0          4s

==> v1beta1/StatefulSet
NAME               DESIRED  CURRENT  AGE
chirper-cassandra  3        1        4s
activityservice    2        1        4s
chirpservice       2        1        4s
friendservice      2        1        3s
web                1        1        3s
 

You can watch the pods being created using kubectl get pods -w, as shown below. Once all pods show a Running status, press CTRL-C to return to your prompt:

Note: Cassandra may take a while to complete as it has a fairly complex startup process to handle its clustering and other processes.

$ kubectl get pods -w
NAME                                        READY     STATUS    RESTARTS   AGE
activityservice-0                           1/1       Running   0          51s
chirper-cassandra-0                         0/1       Running   0          51s
chirpservice-0                              1/1       Running   0          51s
friendservice-0                             1/1       Running   0          51s
nginx-default-backend-1114943714-prk59      1/1       Running   0          51s
nginx-ingress-controller-2689666257-5bf49   1/1       Running   0          51s
web-0                                       1/1       Running   0          51s
chirper-cassandra-0   1/1       Running   0         2m
chirper-cassandra-1   0/1       Pending   0         0s
chirper-cassandra-1   0/1       Pending   0         0s
chirper-cassandra-1   0/1       ContainerCreating   0         0s
chirper-cassandra-1   0/1       Running   0         1s
^C
$
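If you would rather script the wait than watch interactively, a small helper can check a kubectl get pods --no-headers listing. This is a sketch, assuming the third column of that listing is the STATUS field, as in the output above:

```shell
# Succeeds only when every pod in a `kubectl get pods --no-headers`
# listing reports the Running status (column 3 of the listing).
all_running() {
  awk '$3 != "Running" { bad = 1 } END { exit bad }'
}

# Example polling loop (assumes kubectl is configured as above):
# while ! kubectl get pods --no-headers | all_running; do sleep 10; done
printf 'web-0 1/1 Running 0 51s\n' | all_running && echo "ready"
```

Note that the helper only checks pod status, not readiness; for a stricter gate you could also compare the READY column.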
 

When deployed, you should be able to access Chirper through its web UI, which your ICP proxy nodes expose at http://mycluster.icp.

Open http://mycluster.icp in your browser, then log in and make some “chirps”.

Clean up

You can uninstall the Chirper app with Helm as follows:

$ helm del chirper --purge
