Deploy Sterling Order Management on Azure Red Hat OpenShift

Introduction

IBM Sterling Order Management is an omnichannel solution that handles order management, inventory, reverse logistics, delivery management, and overall supply-chain collaboration. The solution is available in a Certified Container edition delivered through a continuous delivery (CI/CD) model with pre-defined deployment patterns. The containers are validated against IBM's Kubernetes certification for security compliance and consistency with emerging cloud deployment standards. They can be deployed on any public or private cloud and are compatible with industry-leading platforms such as Red Hat OpenShift Container Platform.

IBM GSI labs worked with Infosys to onboard a customer on the latest 10.0 version of Sterling Order Management on Microsoft’s Azure cloud platform. The best practices and guidance derived from that deployment are captured in this comprehensive tutorial.

Architecture overview

IBM Sterling Order Management containers are delivered as three images — om-base, om-app, and om-agent — through the IBM Entitled Registry. Licensed API keys allow customers to pull these images into their local registries or CI/CD pipelines, and the deployment charts are readily available in the Red Hat OpenShift Helm Catalog:

  • om-app — Order Management application server image handling synchronous traffic patterns embedded with IBM WebSphere Liberty application server
  • om-agent — Order Management workflow agent and integration server container to handle asynchronous traffic patterns
  • om-base — Base image provisioned on IBM Cloud Container Registry (Image Registry) and enabled for adding product extensions/customizations to create a customized image

The following diagram depicts a high-level architecture used for deploying in Azure OpenShift.

[Figure: High-level architecture of the deployment on Azure Red Hat OpenShift]

Keep the following considerations in mind for a production-ready deployment on Azure Red Hat OpenShift:

  • It is recommended that IBM Db2 and IBM MQ be deployed outside the OpenShift cluster on Azure Virtual machines for better data storage patterns and for an elevated performance profile. This design adds efficiencies in portability between on-premises, private cloud, and public cloud footprints.
  • An NFS share is used for Persistent Volume storage for the pods. Azure NetApp Files provides the Network File System (NFS) storage.
  • Azure NetApp Files is an Azure service that must be procured separately to create the NFS share. It supports the NFS 4.1 protocol, which is recommended for IBM MQ.
  • Custom images are deployed using the in-built Order Management helm charts from the OpenShift Helm Catalog.
  • Customized application, agent, and integration servers will be deployed as pods in the OpenShift cluster. Clients will access these pods through OpenShift routes.

Prerequisites

  1. Create an Azure Red Hat OpenShift (ARO) Cluster.
  2. Procure Azure NetApp and create NFS mount.
  3. Install Db2 on a VM server outside of the OpenShift cluster.
  4. Install MQ on a VM server outside of the OpenShift cluster.
  5. Install Docker on VM build server.
  6. Copy the Helm binary to the build server.
  7. Install the IBM-provided Helm charts from the OpenShift console. For further information, refer to Implement IBM Sterling Order Management Helm Chart using Red Hat OpenShift 4.6.
  8. Download the latest IBM Order Management images from the image repository with the IBM entitlement key using the Obtaining container images instructions.
  9. Create an image-pull secret. This is required for connecting to Azure Container Registry to pull the images as part of the Helm deployment. Provide the secret name, image registry URL, user ID, and password (see the example command after this list).
  10. Link the secret with the default service account: oc secrets link default <secret-name> --for=pull.
  11. Set up the oc command utility on the build server.
  12. Once the ARO cluster setup is complete, log in to the ARO console and open the command-line tools page.
  13. Download the oc client for Windows or Linux by clicking the respective link.
  14. Unzip the archive and you should see the oc client executable.
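For example, the image-pull secret for Azure Container Registry can be created with a command along these lines (the secret name, registry URL, and credentials shown are placeholders):

```
oc create secret docker-registry <secret-name> \
  --docker-server=<acr-name>.azurecr.io \
  --docker-username=<user id> \
  --docker-password=<password>
```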

Estimated time

Estimated execution time: 4-6 hours

Steps

High-level flow

[Figure: High-level deployment flow]

Step 1. ARO cluster setup

  1. If you have multiple Azure subscriptions, specify the relevant subscription ID: az account set --subscription <subscription-id>.
  2. Register the Microsoft.RedHatOpenShift resource provider: az provider register -n Microsoft.RedHatOpenShift --wait.
  3. Register the Microsoft.Compute resource provider: az provider register -n Microsoft.Compute --wait.
  4. Register the Microsoft.Storage resource provider: az provider register -n Microsoft.Storage --wait.
  5. Create a resource group for ARO.
  6. Create a VNET for ARO.
  7. Add an empty subnet each for the master nodes and the worker nodes. (Example commands for steps 5-7 are shown after this list.)
  8. Disable subnet private endpoint policies on the master subnet. This is required to be able to connect to and manage the cluster:

     ```
     az network vnet subnet update \
       --name <master subnet name> \
       --resource-group <resource group name> \
       --vnet-name <vnet name> \
       --disable-private-link-service-network-policies true
     ```
    
  9. Create the ARO cluster (replace the resource group name, VNET name, master subnet name, worker subnet name, and domain name):

     ```
     az aro create \
       --resource-group <resource group name> \
       --name <aro-cluster name> \
       --vnet <vnet name> \
       --master-subnet <master subnet name> \
       --worker-subnet <worker subnet name> \
       --pull-secret @pull-secret.txt \
       --domain <domain name for prod> \
       --master-vm-size Standard_D8s_v3 \
       --worker-vm-size Standard_D16s_v3 \
       --worker-count 5 \
       --worker-vm-disk-size-gb 1024 \
       --apiserver-visibility Private \
       --ingress-visibility Private
     ```
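For reference, steps 5 through 7 can be performed with az CLI commands along the following lines. The resource names, region, and address ranges are illustrative and should be adjusted to your environment; ARO requires the master and worker subnets to be in the same VNET:

```
az group create --name <resource group name> --location <region>

az network vnet create \
  --resource-group <resource group name> \
  --name <vnet name> \
  --address-prefixes 10.0.0.0/22

az network vnet subnet create \
  --resource-group <resource group name> \
  --vnet-name <vnet name> \
  --name <master subnet name> \
  --address-prefixes 10.0.0.0/23

az network vnet subnet create \
  --resource-group <resource group name> \
  --vnet-name <vnet name> \
  --name <worker subnet name> \
  --address-prefixes 10.0.2.0/23
```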
    

Step 2. Build process

  1. Download the om-base image from IBM Cloud Container Registry using the entitlement key.
  2. Explode the container from the base image.
  3. Update the sandbox.cfg file with database host, port, database name, and schema within the container.
  4. Run the setup files command.
  5. Copy the customization and create custom JAR for Java code.
  6. Build the resources JAR and entities JAR.
  7. Generate new images.
  8. Load the images and create a tag for new images.
  9. Push the custom images to Azure Container Registry. The detailed commands follow; first, run the base image container and open a shell inside it:
     docker run -e LICENSE=accept --privileged -v <shared file system directory path>:/opt/ssfs/shared -it --name <container name> <image>
     docker exec -it <containerid> bash
    
  10. Update the sandbox.cfg file under /opt/ssfs/runtime/properties (the commands from the remaining steps are also consolidated in a sketch after this list).
  11. Execute ./setupfiles.sh.
  12. Copy the required custom XSL and XML.
  13. Build resource JAR: ./deployer.sh -t resourcejar
  14. Copy all the required Java classes: ./install3rdParty.sh <classes> 1 -j /opt/ssfs/shared/<classes.jar> -targetJVM EVERY
  15. Copy the Extensions.xml.
  16. Build entities.jar: ./deployer.sh -t entitydeployer
  17. Generate app, agent images:
    ./generateImages.sh --MODE=app,agent --DEV_MODE=true
    
  18. Load the images, create tags for the new images, and push them to the registry:
     docker load -i om-app_10.0.tar.gz
     docker load -i om-agent_10.0.tar.gz
     docker tag <imageid> <registryname>:<tagname>
     docker push <registryname>:<tagname>
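For reference, the build commands from steps 10 through 17 can be strung together as follows once a shell is open inside the base-image container. This is only a sketch; paths and placeholders follow the steps above:

```
# Run inside the exploded om-base container, from the directory that contains
# these runtime scripts, after editing sandbox.cfg and copying the custom artifacts.

# Apply the sandbox.cfg changes (database host, port, name, and schema).
./setupfiles.sh

# Build the resources JAR after copying the custom XSL and XML files.
./deployer.sh -t resourcejar

# Register the custom Java classes with every JVM.
./install3rdParty.sh <classes> 1 -j /opt/ssfs/shared/<classes.jar> -targetJVM EVERY

# Build entities.jar after copying Extensions.xml.
./deployer.sh -t entitydeployer

# Generate the app and agent images.
./generateImages.sh --MODE=app,agent --DEV_MODE=true
```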

Step 3. Deployment process

From the OpenShift Console, create a project.

Manage security constraints by granting the required security context constraint to the default service account (for example: oc adm policy add-scc-to-user anyuid system:serviceaccount:<namespace>:default).

Create the global secret with the data source connectivity details, as described in the chart README (a sample is shown below).
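The README defines the exact secret name and key names expected by the chart; the sketch below only illustrates the general shape of such a secret, and the key names shown are examples rather than authoritative values:

```
apiVersion: v1
kind: Secret
metadata:
  name: <oms-secret-name>       # use the secret name the chart README specifies
type: Opaque
stringData:
  dbpassword: <database password>                  # example key names; confirm against the README
  consoleadminpassword: <console admin password>
  consolenonadminpassword: <console non-admin password>
```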

Create a Role and RoleBinding, which provide role-based access control for the default service account within the namespace. Refer to the README for details; a minimal example follows.
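As a minimal sketch, a Role and RoleBinding for the default service account might look like the following; the exact API groups, resources, and verbs the chart requires are listed in the README:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: oms-role
  namespace: <namespace>
rules:
  - apiGroups: [""]
    resources: ["pods", "configmaps", "secrets"]   # adjust to the rules in the README
    verbs: ["get", "list", "watch", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: oms-rolebinding
  namespace: <namespace>
subjects:
  - kind: ServiceAccount
    name: default
    namespace: <namespace>
roleRef:
  kind: Role
  name: oms-role
  apiGroup: rbac.authorization.k8s.io
```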

Create a Persistent Volume (PV) and Persistent Volume Claim (PVC): the PV is the storage provisioned for the application, and the PVC is the claim against that storage. For this implementation, the NFS file storage is used to create the PV and PVC. Below is the sample YAML used to create the PV and PVC:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: oms-qa-pv
spec:
  capacity:
    storage: 10Gi
  nfs:
    server: <IP address>
    path: <Path to NFS>
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: oms-qa-ibm-oms-pro-prod-oms-common
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: oms-qa-pv
  storageClassName: ""
  volumeMode: Filesystem

Create the Azure Container Registry image-pull secret, as described in the Prerequisites section.

Edit the values.yaml file with the app secret, database properties, customer overrides properties, agent and app server tags, and image registry properties (an illustrative excerpt follows).
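Parameter names vary by chart version, so treat the excerpt below as illustrative of the kinds of values to supply (registry, tags, app secret, and database connectivity); the authoritative keys are in the chart's own values.yaml and README:

```
# Illustrative excerpt only -- confirm the exact keys against the chart's values.yaml.
global:
  appSecret: <oms-secret-name>          # global secret created earlier
  database:
    serverName: <db2 host>
    port: <db2 port>
    dbname: <database name>
    schema: <schema name>
image:
  repository: <acr-name>.azurecr.io/<repository>
  tag: <tagname>
  pullsecret: <image-pull-secret-name>
```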

Helm install. Use this command to deploy the pods to the cluster: helm install <release-name> <chart> -f <path to values.yaml> -n <namespace> --debug

Helm upgrade. Use this command to update the pods with new changes: helm upgrade <release-name> <chart> -f <path to values.yaml> -n <namespace>

Note: Set datasetup.loadFactoryData to install for the first time to run the datasetup job. Once the Helm install is executed and the data setup pod is complete, set it to donotinstall or blank, so that the datasetup job isn’t invoked.

Set datasetup.fixPack.loadFPFactoryData to install and datasetup.fixPack.installedFPNo to 0 for initial installation only.

Step 4. Post-deployment activities

Single sign-on

Single sign-on is implemented using Azure AD with SAML tokens. An Assertion Consumer Service (ACS) URL, also referred to as the Reply URL, is configured in the IdP; this is the URL where the application expects to receive the SAML token, and it is usually the OMS home page. Implement SSO in Sterling OMS by setting the appropriate properties in the customer overrides section of values.yaml and implementing the single sign-on class that converts the SAMLResponse to XML (a rough sketch follows). Ensure that the login ID returned in the SAML response is already configured as a user ID in Sterling Order Management so the user can be authenticated.
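As a rough sketch, SSO-related settings are supplied through the customer overrides in values.yaml. The property names below are hypothetical placeholders; use the SSO properties documented for Sterling Order Management and the customer overrides format defined by the chart:

```
# Hypothetical sketch -- property names and the customer overrides structure must
# match the chart README and the Sterling OMS SSO documentation.
appserver:
  config:
    customerOverrides:
      - "<sso.enabled.property>=Y"
      - "<sso.authentication.class.property>=<your single sign-on implementation class>"
```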

CI/CD pipeline

CI/CD is implemented by creating Jenkins jobs for CDT import/export and for building and deploying images.

SSL certificates

Below are the steps to be followed for any outbound external system integration from OMS:

  1. Copy certificate to build server.
  2. Execute rsync to copy the certificate from the build server to the appserver pod: oc rsync <sourcedir> <podname>:<sharedpath>
  3. Connect to pod through terminal session.
  4. Go to the NFS mount shared path and use the openssl command to convert the certificate from .cer to .pem format: openssl x509 -in <cert>.cer -outform PEM -out <cert>.pem
  5. Copy the .pem file to shared path.
  6. Set permissions on the .pem file and restart the appserver pod. (A consolidated example follows.)
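The certificate steps above can be consolidated as follows (a sketch; the pod name, paths, and file names are placeholders):

```
# Copy the certificate from the build server into the appserver pod's shared NFS path.
oc rsync <sourcedir> <podname>:<sharedpath>

# Open a shell in the pod, convert the certificate, and set permissions.
oc rsh <podname>
cd <sharedpath>
openssl x509 -in <cert>.cer -outform PEM -out <cert>.pem
chmod 644 <cert>.pem
exit

# Restart the appserver pod so the deployment recreates it with the new certificate.
oc delete pod <podname>
```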

External domain

An OpenShift route is a way to expose a service by giving it an externally reachable hostname. Routes created with the external domain are used by all inbound interfaces that access the Order Management applications. The same route should be created for the production and Disaster Recovery (DR) instances so that, in a disaster recovery event, all external systems access the same URL without any change; the only change is the DNS switch-over from the PROD IP to the DR IP.

[Figure: OpenShift route configuration]
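For example, a route with an external hostname can be created with a command along these lines (the route name, service name, hostname, and TLS termination type are placeholders for your deployment):

```
oc create route edge <route-name> \
  --service=<oms appserver service> \
  --hostname=oms.<external domain>
```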

Azure Load Balancer configuration for Db2 clustering

In Azure, the Virtual IP (VIP) configuration used in an IBM Db2 HA cluster setup does not work, because virtual IPs are not reachable over the Azure network. An Azure Load Balancer should be created instead, and additional configuration is needed. All traffic from the Order Management application to the database flows through the Azure Load Balancer. Ensure that the Load Balancer IP is set to the same address as the DB VIP.

Azure Load Balancer works in active-active mode, whereas IBM Db2 is intended to work in active-passive mode. The Load Balancer could redirect traffic to either DB node, because the database port is up on both nodes. To keep Db2 passive on one node, configure a dummy port on both nodes at the OS level, and use a back-end script to bring the port up on the PRIMARY node and bring it down on the STANDBY node. Set the Load Balancer health probe to the dummy port so that traffic is routed only to the node on which the port is up. A sketch of the probe configuration follows.
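In the sketch below, the resource names, the dummy probe port (62500), and the Db2 port (50000) are placeholders for this deployment:

```
# Health probe against the dummy port that the back-end script opens only on the PRIMARY node.
az network lb probe create \
  --resource-group <resource group name> \
  --lb-name <db2 load balancer name> \
  --name db2-ha-probe \
  --protocol tcp \
  --port 62500

# Load-balancing rule that forwards Db2 traffic to whichever node passes the probe.
az network lb rule create \
  --resource-group <resource group name> \
  --lb-name <db2 load balancer name> \
  --name db2-rule \
  --protocol tcp \
  --frontend-port 50000 \
  --backend-port 50000 \
  --frontend-ip-name <frontend ip config name> \
  --backend-pool-name <backend pool name> \
  --probe-name db2-ha-probe \
  --enable-floating-ip true   # commonly enabled so the backend sees the frontend (VIP) address
```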

Check out the Azure product documentation for more information.

Summary

This tutorial detailed how to deploy IBM Sterling Order Management as containers on an Azure Red Hat OpenShift cluster. It also described the special considerations to factor into the design of the deployment model for optimal performance. The tutorial covered important pre-installation steps that ensure a smooth installation, as well as post-deployment infrastructure tasks that enable authentication, CI/CD practices, and robust load-balancing configurations.

Next steps

Refer to the post-deployment tasks and other tasks for developing and deploying custom code in containers. IBM's community of partners is leading the charge on crafting best practices and reusable implementation patterns and feeding them into Order Management, its online documentation, and developer community blogs. Please share your feedback and inputs to improve the efficacy of this tutorial.