
A containerization case study with Docker

Software as a service (SaaS) is a software delivery model where both the software and the associated data are centrally hosted on the cloud. In this model, application functionality is delivered through a subscription over the internet. But SaaS solutions are constantly evolving.

Research and product development teams are always adding layers, features, tools, and plug-ins. SaaS is cheap, smart, sexy, and constantly on the edge. All these points make a SaaS solution a serious option for running a business. According to a study conducted by North Bridge Venture Partners, “45% of businesses say they already, or plan to, run their company from the cloud – showing how integral the cloud is to business”.

The evolution of traditional products toward SaaS can be approached in different ways. (We use the term “traditional” to identify products that are not cloud-native.) The easiest approach is porting the product to the cloud, which might be a good step forward if you don’t want to risk starting a migration to a cloud-native product but you want the typical advantages of moving to the cloud (for example, IT delegation, no infrastructure and maintenance costs, and higher security). This cloudification process is basically a so-called “lift-and-shift” migration: the product is ported “as is” to an infrastructure as a service (IaaS) cloud provider.

But what if you want to be a provider and offer a SaaS business to your own customers? Is it possible to refactor the entire product architecture and reproduce the strengths of a cool, cloud-native SaaS product? Are high availability, scalability, microservices, and live updates reproducible for a SaaS offering based on existing technology, and can this approach be competitive? In the short term, can existing applications be SaaSified?

The main objective of this article is to show a further evolution of the basic cloudification process: to leverage containerization to address the previously described questions and achieve the benefits of a pure, cloud-native product. Specifically, this article discusses an example of the approach that our team used to move the IBM Control Desk product to a microservices pattern using Docker container technology, without the need to redesign the product or touch the code.

The application was split into its basic components and deployed on different WebSphere Liberty containers to achieve a more manageable provisioning pattern – both in time-to-market and in overall IT operations activities.

Our example: the IBM Control Desk existing solution

IBM Control Desk provides IT service management to simplify support of users and infrastructures. It was built on the Tivoli Process Automation Engine component embedded in the IBM Maximo Asset Management product.

The standard architecture consists of the following parts:

  • A Java Enterprise application (UI and back end).
  • A database.
  • A Node.js application (a service portal UI).
  • A web server (a load balancer).

WebSphere Application Server for Network Deployment was the typical choice for the Java runtime environment, while the supported database managers were Oracle, DB2, and Microsoft SQL Server. The most common web server option was IBM HTTP Server, especially when working with WebSphere Application Server for Network Deployment, as shown in the following diagram:

[Image: classic IBM Control Desk deployment with IBM HTTP Server, WebSphere Application Server for Network Deployment, and the database]

The Maximo Asset Management deployment guide, which also included best practices, explained how to split the all-in-one application into four different applications: Maximo User Interface, Maximo Cron, Maximo Report, and the Integration Framework. The effort of achieving this pattern was entirely owned by the local IT team, without any default procedure to support the IT engineers. Maximo Asset Management 7.6.1 introduced so-called “Liberty support”, further splitting the applications and providing a suite of build scripts that build and bundle only the modules needed by each application role.

IBM Control Desk 7.6.1 was built on top of Maximo Asset Management 7.6.1 and inherited the Liberty support that we used to achieve the microservice decomposition.

Our deployment path to containerize the application

The deployment path our team used illustrates how to “SaaSify” an application.

Our team’s process included the following tasks:

  • Install IBM Control Desk 7.6.1 on the administrative workstation node.
  • Deploy the IBM Control Desk database on a DB2 node.
  • Build a Docker image for the IBM Control Desk and Service Portal.
  • Build a Docker image for the JMS server.
  • Create a network that allows direct communication among the containers.
  • Run one container for each Docker image we built.
  • Configure an IBM HTTP Server to route traffic to the containers correctly.

Our first step: Installing and deploying IBM Control Desk 7.6.1

For the purpose of this article, we decided to use the same node as both the administrative workstation and the DB2 node. We used a Red Hat Enterprise Linux Server 7-based virtual machine with two CPUs and 4 GB RAM, installed the IBM Control Desk product on it, and deployed the MAXDB76 database.

The IBM Control Desk installation directory (which contains the application build code) and the Service Portal installation directory were shared through the network file system (NFS) with the Docker engine. Therefore, the applications were available for building on both nodes.
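
The following commands are a minimal sketch of such a share, assuming the installation path from our environment; the host name admin-workstation and the export options are assumptions:

# On the administrative workstation: export the installation tree over NFS
# (admin-workstation and the read-only export options are assumptions)
echo "/opt/IBM/SMP *(ro,sync)" >> /etc/exports
exportfs -ra

# On the Docker engine host: mount the share so the build artifacts are visible
mkdir -p /opt/IBM/SMP
mount -t nfs admin-workstation:/opt/IBM/SMP /opt/IBM/SMP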

Our second step: Building the Docker images

We decided to strictly follow what is stated in the Maximo Asset Management tech note, so we produced five different applications:

  • UI
  • Cron
  • Maximo Enterprise Adapter (MEA)
  • API
  • Report (Business Intelligence and Reporting Tools – BIRT – Report Only Server or BROS)

Then, we added two applications: a service portal and the JMS server, which receives, stores, and forwards messages wherever the JMS protocol is used.

  • Service portal (SP)
  • JMS Server (JMS)

The following illustration shows the target architecture:

[Image: container-based architecture with the UI, Cron, MEA, API, BROS, Service Portal, and JMS containers]

We built the applications by following the instructions at Maximo Asset Management 7.6.1 WebSphere Liberty Support, which produce a series of web archive (WAR) files.

For example, for the UI application we ran the following command on the administrative workstation:

cd /opt/IBM/SMP/maximo/deployment/was-liberty-default
./buildmaximoui-war.sh && ./buildmaximo-xwar.sh

The deployment/maximo-ui/maximo-ui-server directory had the following structure:

deployment/maximo-ui/maximo-ui-server/
├── apps
│   ├── maximoui.war
│   └── maximo-x.war
├── Dockerfile
├── jvm.options
└── server.xml

The server.xml file was the server descriptor, jvm.options contained the system properties to set at the JVM startup level, Dockerfile was the file used for building the image, and apps contained the build artifacts:

-rw-r--r-- 1 root root 1157149383 Mar 20 09:57 maximoui.war
-rw-r--r-- 1 root root   70932873 Mar 20 10:01 maximo-x.war

The other applications followed a similar structure. (You can see the Dockerfile in the technical procedure section.)

From the Dockerfile path on the administration workstation, we built the Docker image by running the following command:

docker build . -t icd/ui:7.6.1.0

We did the same for the other Liberty applications, so that we had the following images:

icd/ui:7.6.1.0
icd/cron:7.6.1.0
icd/mea:7.6.1.0
icd/bros:7.6.1.0
icd/api:7.6.1.0

The JMS server did not come by default with Maximo Liberty support. We needed to create it from scratch. Our procedure was based on the WebSphere Application Server Liberty documentation. (You can see the example server.xml file in the technical procedure section.)

We built the Service Portal Docker image from the Node.js image. For the Service Portal application, we copied the full application tree, the certificate, and the private key exported by the web server to allow communication between the two components. (The Dockerfile for Service Portal is included in the technical procedure section.)

Eventually, we obtained the following images:

icd/ui:7.6.1.0
icd/cron:7.6.1.0
icd/mea:7.6.1.0
icd/bros:7.6.1.0
icd/api:7.6.1.0
icd/jms:1.0.0.0
icd/sp:7.6.1.0
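
As a quick check, you can confirm that all the images are present in the local image cache; the following command is just a sketch:

# List the locally built images (the grep pattern matches our naming convention)
docker images | grep icd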

Our third step: Deploying the containers

For the Docker engine, we chose an Ubuntu 18.04 machine with four CPUs and 32 GB RAM, a typical size for standard SaaS architecture.

After we had our images, we started deploying containers from them, initially one container per image. Then we carried out a scalability test with two UI containers, as discussed in the results section.

We created a Docker network called ICDNet, and we added each running container to it, which allowed easy communication between all the containers.
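
As a sketch, the network creation and an equivalent manual run command for the UI container could look like the following example (the container name and port mappings are taken from our docker ps output):

# Create the shared bridge network
docker network create ICDNet

# Run the UI container on that network, publishing its HTTP and HTTPS ports
docker run -d --name UI --network ICDNet \
  -p 9080:9080 -p 9443:9443 icd/ui:7.6.1.0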

In the end, the output of our suitably formatted docker ps command looked like the following example:

NAME    SP
IMAGE   icd/sp:7.6.1.0
PORTS   0.0.0.0:3000->3000/tcp

NAME    CRON
IMAGE   icd/cron:7.6.1.0
PORTS   9080/tcp, 9443/tcp

NAME    UI
IMAGE   icd/ui:7.6.1.0
PORTS   0.0.0.0:9080->9080/tcp, 0.0.0.0:9443->9443/tcp

NAME    API
IMAGE   icd/api:7.6.1.0
PORTS   0.0.0.0:9081->9080/tcp, 0.0.0.0:9444->9443/tcp

NAME    MEA
IMAGE   icd/mea:7.6.1.0
PORTS   0.0.0.0:9084->9080/tcp, 0.0.0.0:9447->9443/tcp

NAME    JMS
IMAGE   icd/jms:1.0.0.0
PORTS   9080/tcp, 0.0.0.0:9011->9011/tcp, 9443/tcp

NAME    BROS
IMAGE   icd/bros:7.6.1.0
PORTS   0.0.0.0:9085->9080/tcp, 0.0.0.0:9448->9443/tcp

All resources (containers, network, and volumes) were created with the Docker Compose tool (the docker-compose.yml file is included in the technical procedure section). The YAML file adds parameters to the run command for each container, for example the database host and the environment variables needed to configure each container correctly.
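
With that file in place, the whole stack can be brought up with a single command; the following usage is a sketch:

# Create the network, the volumes, and the containers defined in docker-compose.yml
docker-compose up -d

# Check the state of the running services
docker-compose ps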

Our technical procedure

The following example shows the Dockerfile for the Liberty-based image:

FROM websphere-liberty
USER root

# Copy the applications
COPY --chown=default:root apps /opt/ibm/wlp/usr/servers/defaultServer/apps

# Copy the server.xml and JVM options
COPY server.xml /opt/ibm/wlp/usr/servers/defaultServer/
COPY jvm.options /opt/ibm/wlp/usr/servers/defaultServer/

# install the additional utilities listed in the server.xml
RUN ["/opt/ibm/wlp/bin/installUtility","install","defaultServer"]

The following example shows the server.xml file for the JMS Docker image:

<server description="new server">
  <featureManager>
    <feature>servlet-3.1</feature>
    <feature>wasJmsClient-2.0</feature>
    <feature>wasJmsServer-1.0</feature>
    <feature>jmsMdb-3.2</feature>
  </featureManager>

  <!-- To allow access to this server from a remote client, host="*" has been added to the following element -->
  <wasJmsEndpoint id="InboundJmsEndpoint" host="*"
    wasJmsPort="9011" wasJmsSSLPort="9100"/>

  <!-- A messaging engine is a component, running inside a server, that manages messaging resources.
       Applications are connected to a messaging engine when they send and receive messages.
       When the wasJmsServer-1.0 feature is added in server.xml, a messaging engine runtime is
       initialized by default, which contains a default queue (Default.Queue) and a default topic
       space (Default.Topic.Space). If you want to create a new queue or topic space, the
       messagingEngine element must be defined in server.xml -->
  <messagingEngine>
    <queue id="jms/maximo/int/queues/sqin" sendAllowed="true"
      receiveAllowed="true" maintainStrictOrder="false"/>

    <queue id="jms/maximo/int/queues/sqout" sendAllowed="true"
      receiveAllowed="true" maintainStrictOrder="false"/>

    <queue id="jms/maximo/int/queues/cqin" sendAllowed="true"
      receiveAllowed="true" maintainStrictOrder="false"/>

    <queue id="jms/maximo/int/queues/notf" sendAllowed="true"
      receiveAllowed="true" maintainStrictOrder="false"/>

    <queue id="jms/maximo/int/queues/weather" sendAllowed="true"
      receiveAllowed="true" maintainStrictOrder="false"/>
  </messagingEngine>
</server>

The following example shows the Dockerfile for the JMS Server image:

FROM websphere-liberty

# Copy the JMS server descriptor into the default server configuration
COPY files/server.xml /opt/ibm/wlp/usr/servers/defaultServer/

# Install the features listed in the server.xml
RUN ["/opt/ibm/wlp/bin/installUtility","install","defaultServer"]

The following example shows the Dockerfile for the Service Portal image:

FROM aricenteam/aricentrepo:nodejs
USER root

# copy the serviceportal tree in the /opt/ibm/ng directory
RUN mkdir -p /opt/ibm/ng
COPY ng /opt/ibm/ng/

# copy certificate and key files
COPY server.crt /opt/ibm/ng
COPY server.key /opt/ibm/ng

EXPOSE 3000
WORKDIR /opt/ibm/ng
CMD ["node", "app.js"]

The following example shows the docker-compose.yml file:

version: '3.7'

services:
  ui:
    image: icd/ui:7.6.1.0
    ports:
      - "9443"
    environment:
      JVM_ARGS: "-Dmxe.name=MAXIMO_UI"
    networks:
      - icd_net
    volumes:
      - doclinks:/DOCLINKS
      - search:/SEARCH
  api:
    image: icd/api:7.6.1.0
    ports:
      - "9443"
    environment:
      JVM_ARGS: "-Dmxe.name=MAXIMO_API"
    networks:
      - icd_net
    volumes:
      - doclinks:/DOCLINKS
      - search:/SEARCH
  cron:
    image: icd/cron:7.6.1.0
    ports:
      - "9443"
    environment:
      JVM_ARGS: "-Dmxe.name=MAXIMO_CRON"
    networks:
      - icd_net
    volumes:
      - doclinks:/DOCLINKS
      - search:/SEARCH
  mea:
    image: icd/mea:7.6.1.0
    ports:
      - "9443"
    environment:
      JVM_ARGS: "-Dmxe.name=MAXIMO_MEA"
    networks:
      - icd_net
    volumes:
      - doclinks:/DOCLINKS
      - search:/SEARCH
  jms:
    image: icd/jms:1.0.0.0
    networks:
      - icd_net

networks:
  icd_net:
    driver: bridge

volumes:
  doclinks:
  search:

Our results

After we had our Control Desk instance up and running, we took some measurements with the Rational Performance Tester tool to compare the performance of a classic instance and the container-based one. We ran two different tests with workloads of 20 and 50 users and monitored the CPU and memory of the virtual machines with the nmon script.
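
For reference, a typical nmon capture for this kind of test could be started as follows; the sampling interval and count shown here are assumptions, not our exact parameters:

# Record system metrics to a file: one snapshot every 10 seconds, 360 snapshots (1 hour)
nmon -f -s 10 -c 360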

As shown in the following image, on average we saw a larger memory consumption by the classic instance (deployed on WebSphere Application Server for Network Deployment), which was due mainly to the number of running Java processes, including the deployment manager and the node agent. On the CPU side, the behaviors overlapped.

The following table shows the average, minimum, and maximum values of the page response time (PRT) as a function of time. The Docker case seemed to perform slightly better, with an average response time roughly 10% lower than in the classic case.

[Image: measurement results comparing the classic and the container-based instances]

For the 50-user scenario, we also performed a scalability test by adding another UI container to the instance and checking whether the workload was balanced well.

The following screen captures show the page response time (PRT) as a function of time in the cases of one and two UI containers. The results confirmed what we expected: with two containers, the performance increased by a factor of 2, so we concluded that the instance scales with a quasi-ideal trend.

[Images: page response time over time with one UI container and with two UI containers]

Summary

This article showed a case study using container technology to leverage the native support for WebSphere Liberty Profile with the IBM Control Desk product.

The most interesting result? We found that we could move a traditional product to containers to evolve it toward a “cloud-native”-like architecture, without any need to “touch the code”. There were no impacts at the application level.

Cloud-native applications are usually designed with a microservices pattern that allows natural deployment on containers, but this is not the case for classic products like Control Desk. Despite this challenge, deep knowledge of the product itself makes the microservices pattern achievable in the deployment phase. As we did, you can identify independent modules that “mimic” the different microservices to guarantee some level of decoupling in their interaction.

You might find a similar challenge where you cannot touch the source code. Based on the case study we described in this article, consider the following general procedure to migrate an existing product to a container-based architecture with a microservice-like pattern:

  1. Product diagonalization: identify the independent modules in your existing application.
  2. Review your build suite: re-engineer your build scripts, if they are not already provided, to produce the modules identified in step 1 (a kind of factorization).
  3. Deployment: deploy each module on a single container (like the example steps previously described).
  4. Set up intermodule traffic: configure the different containers so they can communicate with each other to guarantee the overall application capabilities (see the connectivity sketch after this list).
  5. Test: verify that the new deployment architecture doesn’t introduce unexpected issues on the application, specifically focusing on performance and scaling.
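
For step 4, a minimal connectivity check between containers might look like the following sketch; the container and network names come from our deployment, and the ping utility may not be present in every image:

# List the containers attached to the shared network
docker network inspect icd_net --format '{{range .Containers}}{{.Name}} {{end}}'

# Check that one module can resolve and reach another by its container name
docker exec UI ping -c 1 JMS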

We wish you the best as you move your traditional applications to a container environment. These tips and examples can guide you to make the process as painless as possible.

Maria Elena Taglieri
Stefano Cosenza
Fabio Marinetti
Leonida Gianfagna