
Deploy containerized applications on Red Hat OpenShift for IBM Power Systems

This tutorial is part of the Learning path: Deploying Red Hat OpenShift Container Platform 4.x on IBM Power Systems Virtual Servers.


IBM Power Architecture is a highly reliable platform capable of processing large quantities of data effectively. The Linux on IBM Power Systems platform is similar to Linux on any other platform. IBM Power supports major enterprise and community distributions of Linux.

Traditionally, software development and subsequent containerization have occurred on the Intel (x86_64) architecture. When making applications available on alternate architectures such as Power (ppc64le), a common question is how different the alternate architecture is when it comes to deploying the containerized application. In the case of ppc64le, the answer is: there are no differences. This makes deploying a containerized application on the Power Architecture a straightforward task with no platform-specific dependencies. This tutorial captures how to make a containerized application available for an alternative architecture such as ppc64le, compared to an existing application available for the x86_64 architecture.

Introduction

The application used in this tutorial consists of MongoDB and Node.js components running in two separate containers. The application code is written in Node.js and runs in the Node.js container, whereas the data for the application is served through the MongoDB container. The Node.js container interacts with the database container to serve requests.

The subsequent sections of this tutorial describe the files used to deploy both the application and database containers on an Intel platform. After reading through this tutorial, you will see that the steps required to build and deploy the container images on Red Hat OpenShift Container Platform are identical on x86_64 and ppc64le.

Cluster topology description

The infrastructure consists of control plane (master) nodes, worker nodes, and a helper (bastion) node, along with a sample application. We have the OpenShift Container Platform for Power (ppc64le) installed and configured on IBM Power Systems Virtual Server as per the instructions at https://ocp-power-automation.github.io/. For the Intel (x86_64) architecture, a similar topology was used to install OpenShift Container Platform.

A typical topology (consisting of OpenShift Container Platform) on IBM Cloud looks similar to the one shown in Figure 1.

Figure 1. Overview of an application installed on OpenShift Container Platform available on IBM Cloud


Image credits: https://cloud.ibm.com/docs/openshift?topic=openshift-vpc_rh_tutorial

Existing software stack on Intel (x86_64) – OpenShift Container Platform

The software stack includes:

  • Red Hat Universal Base Image (UBI) version 7 or 8
  • MongoDB
  • Node.js
  • Node modules:
    • express
    • mongoose
    • async
    • ejs
    • body-parser
    • passport
    • passport-http
    • router
    • mongodb (driver)

Application deployment on OpenShift running on x86_64/ppc64le

This two-tier application, consisting of open source components (MongoDB and Node.js), serves a geospatial workload.

The container images already exist for both architectures, and a multi-arch manifest has been created for them.

Figure 2 shows a typical multi-arch manifest along with the container image on Docker Hub.

Figure 2. Overview of a multi-arch manifest and container image

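To see why a single image reference works on both architectures, it helps to look at how a runtime resolves a multi-arch manifest. The Python sketch below walks a manifest list and picks the digest matching the requested platform; the JSON structure mirrors the Docker manifest list format, but the digests are simplified placeholders, not the actual digests of this image.

```python
import json

# Hypothetical manifest list shaped like Docker's "manifest list" media type;
# the digest values are placeholders for illustration only.
MANIFEST_LIST = json.loads("""
{
  "schemaVersion": 2,
  "manifests": [
    {"digest": "sha256:aaa", "platform": {"architecture": "amd64", "os": "linux"}},
    {"digest": "sha256:bbb", "platform": {"architecture": "ppc64le", "os": "linux"}}
  ]
}
""")

def digest_for(manifest_list, os_name, arch):
    """Return the image digest matching the requested os/architecture, or None."""
    for entry in manifest_list["manifests"]:
        plat = entry["platform"]
        if plat["os"] == os_name and plat["architecture"] == arch:
            return entry["digest"]
    return None

# A ppc64le host resolves the same image reference to the ppc64le digest.
print(digest_for(MANIFEST_LIST, "linux", "ppc64le"))
```

This per-platform resolution is what lets the same `docker.io/...` image reference work unchanged in the deployment YAML on both clusters.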

Perform the following steps to deploy the application on an x86_64 or a ppc64le architecture cluster:

  1. Clone the cloud_interop repository to your server.

    cd $HOME/
    git clone https://github.com/ocp-Power-demos/cloud_interop 
    cd cloud_interop
    
  2. Create a new project.

    oc new-project ibm --description="IBM ISDL" --display-name="ibm"
    oc project ibm
    
  3. Deploy the required service and the deployment configuration files using the following commands:

    oc create -f mong-service.yaml 
    oc create -f node-service.yaml
    oc create -f mong-deployment.yaml 
    oc create -f node-deployment.yaml
    
  4. To check the status of the pods in the required namespace, run the following command:

    $ oc get po -n ibm
    NAME                    READY     STATUS    RESTARTS   AGE
    mong-6f6cbff4fb-q4ds6   1/1       Running   0          1m
    node-b6f55bdb9-rpszs    1/1       Running   0          1m
    

Here, -n ibm specifies the target namespace.

  5. Create a secure route to access your application.

    $ oc get svc
    NAME   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
    mong   ClusterIP   172.30.135.8     <none>        27017/TCP   1m
    node   ClusterIP   172.30.126.228   <none>        3000/TCP    1m
    
    $ oc create route edge --service=node
    route.route.openshift.io/node created
    

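If you want to review the full command sequence before running it, the steps above can be sketched as a small script that only prints the oc commands in order (so it can be inspected without a cluster); the project name and manifest file names are the ones used in this tutorial.

```python
# Sketch: emit the oc commands from the deployment steps above, in order.
# Services are created before deployments, matching the tutorial.
MANIFESTS = [
    "mong-service.yaml",
    "node-service.yaml",
    "mong-deployment.yaml",
    "node-deployment.yaml",
]

def deploy_commands(project="ibm"):
    """Build the ordered list of oc commands used in this tutorial."""
    cmds = [
        f'oc new-project {project} --description="IBM ISDL" --display-name="{project}"',
        f"oc project {project}",
    ]
    cmds += [f"oc create -f {m}" for m in MANIFESTS]
    cmds.append(f"oc get po -n {project}")
    return cmds

for cmd in deploy_commands():
    print(cmd)
```

Because the container images are multi-arch, this same sequence applies verbatim on an x86_64 or a ppc64le cluster.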
So far, you have deployed the solution and created a secure route to access your application from outside your OpenShift Container Platform cluster. To access and interact with the newly deployed application, append the API path to the route URL.

For example, appending /api/getInspectionsByZipCodeIteration/10100/10150/1 to the route URL fetches inspections carried out in areas with ZIP codes between 10100 and 10150, using the Node.js APIs and MongoDB.
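Composing the final URL is plain string concatenation. The sketch below assumes a hypothetical route host (yours comes from oc get route) and uses the endpoint path shown above:

```python
def inspection_url(route_host, zip_from, zip_to, iteration):
    """Compose the HTTPS URL for the inspections endpoint used in this tutorial."""
    path = f"/api/getInspectionsByZipCodeIteration/{zip_from}/{zip_to}/{iteration}"
    # The edge route terminates TLS, so the application is reached over https.
    return f"https://{route_host}{path}"

# "node-ibm.apps.example.com" is a made-up route host for illustration only.
print(inspection_url("node-ibm.apps.example.com", 10100, 10150, 1))
```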

The application can also be deployed from the web console of the OpenShift Container Platform cluster.

As you can see, there is no difference in the deployment of containerized applications on the OpenShift cluster.

Now let’s go through a scenario where we want to make a containerized application available on Power. The individual components are available for Power, but the container images do not exist and must be built.

We’ll use the same two-tier application consisting of Node.js and MongoDB that we used earlier to show the build and deployment steps.

Software stack validation

The two major components of the containerized application are:

  • Node.js
  • MongoDB community/Enterprise binaries

Node.js is supported on the Linux on IBM Power Systems platform as mentioned at https://nodejs.org/en/download/.

Figure 3. High-level diagram


The MongoDB Enterprise version is available on the Linux on IBM Power Systems platform as mentioned at https://www.mongodb.com/try/download/enterprise. You need to select a ppc64le distribution of MongoDB.

Figure 4. Select ppc64le architecture in the MongoDB website

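When scripting the download, you can map the machine architecture reported by the OS to the architecture label used on the MongoDB download page. The sketch below covers only the two architectures discussed in this tutorial; anything else raises an error.

```python
import platform

# Map the kernel's machine name to the architecture label shown on the
# MongoDB download pages; only the two architectures from this tutorial.
ARCH_LABELS = {
    "x86_64": "x86_64",
    "ppc64le": "ppc64le",
}

def mongodb_arch(machine=None):
    """Return the MongoDB download architecture label for a machine string."""
    machine = machine or platform.machine()
    try:
        return ARCH_LABELS[machine]
    except KeyError:
        raise ValueError(f"no MongoDB Enterprise build mapped for {machine}")

print(mongodb_arch("ppc64le"))
```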

For this scenario, we built a container image consisting of a specific version of MongoDB and Node.js.

Package availability check, build, and installation

This section provides a side-by-side comparison matrix of build and deploy steps for ppc64le and x86_64.

Installation and build tasks

In this comparison, let us consider Node.js as the software to build and install.

Table 1. Package availability check, build, and installation

Task | x86_64 | ppc64le
Check package availability | Node.js website and upstream repository | Node.js website, OSPAT, and upstream repository
Build Node.js, if not available | $ ./configure && make -j4 | $ ./configure && make -j4
Install Node.js | $ [sudo] make install | $ [sudo] make install

Building a container image

In this comparison, let us consider a sample Dockerfile consisting of Node.js and MongoDB. There are no differences in the way the container image is built on ppc64le compared to x86_64.

When built, this Dockerfile produces a container image based on CentOS with Node.js and MongoDB installed.

FROM centos
ENV NODE_ENV production

RUN curl -o /etc/yum.repos.d/mongodb-enterprise.repo https://repo.mongodb.com/yum/redhat/mongodb-enterprise-testing.repo

RUN yum update -y
RUN yum install mongodb-enterprise-server -y \
 && yum install mongodb-enterprise-shell -y \
 && yum install mongodb-enterprise-tools -y \
 && yum install nodejs -y

#
# Check if Node is working

RUN node --version

# Define mountable directories
VOLUME ["/data/db"]

# Define working directory
WORKDIR /data

# Define default command
CMD ["mongod"]

EXPOSE 27017
EXPOSE 5000


Building a container image on ppc64le architecture

$ arch
ppc64le
$ docker build -t mongodb_ppc64le .
Sending build context to Docker daemon   2.56kB
Step 1/11 : FROM centos
 ---> b5f502b6c313
Step 2/11 : ENV NODE_ENV production
 ---> Using cache
 ---> 656efdabdba0
Step 3/11 : RUN curl -o /etc/yum.repos.d/mongodb-enterprise.repo https://repo.mongodb.com/yum/redhat/mongodb-enterprise-testing.repo
 ---> Using cache
 ---> 90588f27d1dc
Step 4/11 : RUN yum update -y
 ---> Using cache
 ---> 6033775f4b07
Step 5/11 : RUN yum install mongodb-enterprise-server -y  && yum install mongodb-enterprise-shell -y  && yum install mongodb-enterprise-tools -y  && yum install nodejs -y
 ---> Using cache
 ---> 9f65e590c99d
Step 6/11 : RUN node --version
 ---> Using cache
 ---> 9d7eebb3b52f
Step 7/11 : VOLUME ["/data/db"]
 ---> Using cache
 ---> fa540c8c8f5e
Step 8/11 : WORKDIR /data
 ---> Using cache
 ---> 6ce9635569e0
Step 9/11 : CMD ["mongod"]
 ---> Using cache
 ---> c5580188c606
Step 10/11 : EXPOSE 27017
 ---> Using cache
 ---> 26afdb38ccdd
Step 11/11 : EXPOSE 5000
 ---> Using cache
 ---> 8c29e6a32917
Successfully built 8c29e6a32917
Successfully tagged mongodb_ppc64le:latest
$

Building a container image on x86_64 architecture

$ arch
x86_64
$ docker build -t mongodb_x86_64 .
Sending build context to Docker daemon  3.072kB
Step 1/11 : FROM centos
 ---> 300e315adb2f
Step 2/11 : ENV NODE_ENV production
 ---> Using cache
 ---> ef257252d811
Step 3/11 : RUN curl -o /etc/yum.repos.d/mongodb-enterprise.repo https://repo.mongodb.com/yum/redhat/mongodb-enterprise-testing.repo
 ---> Using cache
 ---> f2aab64d33f8
Step 4/11 : RUN yum update -y
 ---> Using cache
 ---> 1a3645d60cf0
Step 5/11 : RUN yum install mongodb-enterprise-server -y  && yum install mongodb-enterprise-shell -y  && yum install mongodb-enterprise-tools -y  && yum install nodejs -y
 ---> Using cache
 ---> b313ba7a63ac
Step 6/11 : RUN node --version
 ---> Using cache
 ---> 2c9ea7daceec
Step 7/11 : VOLUME ["/data/db"]
 ---> Using cache
 ---> 1e3e3ba64834
Step 8/11 : WORKDIR /data
 ---> Using cache
 ---> 31911e502ba0
Step 9/11 : CMD ["mongod"]
 ---> Using cache
 ---> b2b643d16e1c
Step 10/11 : EXPOSE 27017
 ---> Using cache
 ---> 796d9b462ff3
Step 11/11 : EXPOSE 5000
 ---> Using cache
 ---> d94bc6c1cdc7
Successfully built d94bc6c1cdc7
Successfully tagged mongodb_x86_64:latest
$

Deploying a sample application on OpenShift Container Platform

In this comparison, let us consider a sample application installation. There are no differences in the way the application is deployed on ppc64le compared to x86_64.

Table 2. Deploying an application on OpenShift Container Platform

The commands are identical on both architectures:

Task | Command (same on x86_64 and ppc64le)
Clone the GitHub repository | git clone https://github.com/ocp-Power-demos/cloud_interop
Create a new project | oc new-project ibm --description="IBM ISDL" --display-name="ibm"; oc project ibm
Deploy the application | oc create -f mong-service.yaml; oc create -f node-service.yaml; oc create -f mong-deployment.yaml; oc create -f node-deployment.yaml
Check the application status | oc get po -n ibm (both the mong and node pods show Running)
Create a route | oc create route edge --service=node

Conclusion

This tutorial described how to build and deploy a containerized application on both the Intel (x86_64) and Power (ppc64le) architectures using the same steps on both.

This tutorial assumed that the application components exist for Power. For scenarios where application components are not available on Power and require porting, refer to the learning path: Port your open source applications to Linux on Power.

Appendix

mong-deployment.yaml (a deployment file to install MongoDB)

apiVersion: apps/v1
kind: Deployment
metadata:
    labels:
        name: mong
    name: mong
spec:
    replicas: 1
    selector:
        matchLabels:
            app: mong
            service: mong
    template:
        metadata:
            labels:
                app: mong
                service: mong
        spec:
            containers:
                -
                    name: mong
                    image: docker.io/mithunhr/dbmongo:latest
                    imagePullPolicy: IfNotPresent
                    ports:
                    - containerPort: 27017
            restartPolicy: Always

mong-service.yaml (a service file to install MongoDB)

apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    service: mong
  name: mong
spec:
  ports:
  - name: "27017"
    port: 27017
    targetPort: 27017
  selector:
    service: mong
status:
  loadBalancer: {}

node-deployment.yaml (a deployment file to install Node.js)

apiVersion: apps/v1
kind: Deployment
metadata:
    labels:
        name: node
    name: node
spec:
    replicas: 1
    selector:
        matchLabels:
            app: node
            service: node
    template:
        metadata:
            labels:
                app: node
                service: node
        spec:
            containers:
                -
                    name: node
                    image: docker.io/mithunhr/appnode:latest
                    imagePullPolicy: IfNotPresent
                    ports:
                    - containerPort: 3000
            restartPolicy: Always

node-service.yaml (a service file to install Node.js)

apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    service: node
  name: node
spec:
  ports:
  - name: "3000"
    port: 3000
    targetPort: 3000
  selector:
    service: node
status:
  loadBalancer: {}