Skill Level: Any Skill Level

Architects, Developers, Infrastructure Admins

This recipe is meant only for Dev/Demo environment setup on IBM Cloud VMs. For production environments, the recommendation is to deploy the respective Cloud Pak from the IBM Cloud catalog on the IBM-managed Red Hat OpenShift Kubernetes Service (ROKS).


  1. Access to IBM Cloud Classic infrastructure
  2. Active Red Hat OpenShift Container Platform subscription
  3. Access and entitlement for IBM Cloud Pak for Integration v2019.3.2


  1. Introduction

    IBM Cloud Pak for Integration (ICP4I) helps you connect anything using industry-leading capabilities with the most comprehensive integration platform on the market. In this recipe we will cover the following:

    1. Provision the infrastructure required to satisfy the minimum system requirements to install ICP4I on Red Hat OpenShift.

    2. Install Red Hat OpenShift Container Platform (OCP) on the provisioned infrastructure/VMs.

    3. Install ICP4I v2019.3.2 on the provisioned OpenShift cluster.

    4. Uninstall ICP4I.

    5. Destroy the environment: the OpenShift cluster as well as the infrastructure.

    Steps 1-3 above take approximately 5-6 hours to complete.

  2. System Requirements

    Review the minimum system requirements for ICP4I; based on the capabilities you want to deploy, decide on the number of nodes in the cluster and their configuration.


    In this recipe we will provision the following configuration, which is sufficient to deploy all capabilities (disk storage can be expanded post-install).


  3. Provision the infrastructure

    We will use Lessons 1 and 2 in https://cloud.ibm.com/docs/terraform?topic=terraform-redhat to provision the infrastructure on IBM Cloud. Also refer to Planning Red Hat OpenShift Deployment on IBM Cloud for more details.

    1. Take a VM anywhere (local desktop or cloud), with any operating system, and install Docker on it. We will call this the jump server in this recipe.
    2. After Docker is installed, run "docker pull ibmterraform/terraform-provider-ibm-docker"
    3. Run "docker run -it ibmterraform/terraform-provider-ibm-docker:latest"
    4. Run "apk add --no-cache openssh"
    5. Run "git clone https://github.com/IBM-Cloud/terraform-ibm-openshift.git"
    6. Run "cd terraform-ibm-openshift"
    7. Run "ssh-keygen -t rsa -b 4096 -C 'test123@gmail.com'"
    8. Edit variables.tf (vi variables.tf), using the sample variables.tf as a reference, and update the values for your environment.
    9. Retrieve your IBM Cloud Classic infrastructure user name and API key, as you will be prompted for them later.
    10. Run "make rhn_username=<your_rhn_username> rhn_password=<your_rhn_password> infrastructure". This step takes approximately 40 minutes to complete.
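    The exact variables.tf values were shown as a screenshot in the original recipe and are not reproduced here. As a purely hypothetical sketch of the kind of entries that typically need updating (the variable names, datacenter, key path, and node count below are assumptions; the sample variables.tf in the terraform-ibm-openshift repository is the authoritative reference):

    ```hcl
    # Illustrative excerpt only - check the repository's sample variables.tf.
    variable "datacenter" {
      description = "IBM Cloud Classic datacenter to deploy into"
      default     = "dal09"
    }

    variable "ssh_public_key" {
      description = "Path to the public key generated in step 7"
      default     = "/root/.ssh/id_rsa.pub"
    }

    variable "node_count" {
      description = "Number of application (compute) nodes"
      default     = 3
    }
    ```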

    After step 10, you should see the required VMs created in the IBM Cloud Classic infrastructure. Ensure the following before moving forward:

    1. Log in to the bastion node and validate that the 'manage_etc_hosts' flag is False in the file /etc/cloud/cloud.cfg. If it is True, change it to False, then save and exit.
    2. As per https://bugzilla.redhat.com/show_bug.cgi?id=1749024, on RHEL 7.7 the kernel version should be kernel-3.10.0-1062.el7 for the OCP install. Since the above VMs are configured with RHEL 7.7, we need to ensure the kernel version is appropriate for the OCP install.
      • Run "uname -r" to validate the kernel version on the bastion node; if it doesn't match "3.10.0-1062.x", execute "yum update" on the bastion node.
      • As a workaround, perform "yum update" followed by a reboot on all nodes if you hit the following error during the OpenShift install: "The installed kernel version does not meet the required minimum for RHEL 7.7".
    3. Ensure the Network Manager settings are as follows, so that the Network Manager configuration stays intact after a VM restart:
      1. Open /etc/sysconfig/network-scripts/ifcfg-eth0 and verify the interface settings.
      2. Open /etc/sysconfig/network-scripts/ifcfg-eth1 and verify the interface settings.
    4. Reboot bastion node
    5. Repeat steps 1-4 on each node.
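    The kernel check in step 2 can be scripted; this is a small sketch to run on each node (the 3.10.0-1062 series comes from the Bugzilla entry referenced above):

    ```shell
    # kernel_ok prints "ok" when the given kernel release string is in the
    # 3.10.0-1062 series required by OCP on RHEL 7.7, and "update" otherwise.
    kernel_ok() {
      case "$1" in
        3.10.0-1062*) echo "ok" ;;
        *) echo "update" ;;
      esac
    }

    # Check the running kernel on the current node:
    kernel_ok "$(uname -r)"
    ```

    If it prints "update", run "yum update" and reboot the node before installing OCP.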
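    The expected ifcfg-eth0/ifcfg-eth1 settings were shown as screenshots in the original recipe. As a hedged illustration only, the lines to verify usually look like the following (the exact values are environment-specific assumptions; the point is that the interface comes up on boot with a static address):

    ```
    # /etc/sysconfig/network-scripts/ifcfg-eth0 (illustrative excerpt)
    DEVICE=eth0
    BOOTPROTO=static
    ONBOOT=yes
    ```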
  4. Deploy Red Hat OpenShift Container Platform

    Now we have the infrastructure ready for the deployment of OCP. We will use Lesson 3 in https://cloud.ibm.com/docs/terraform?topic=terraform-redhat for this step.

    1. Log in to the bastion node
    2. Run "subscription-manager unregister"
    3. Run "rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release"
    4. Run "subscription-manager register --serverurl subscription.rhsm.redhat.com:443/subscription --baseurl cdn.redhat.com --username <your_redhat_username> --password <your_redhat_password>"
    5. Run "subscription-manager list --available --matches '*OpenShift Container Platform*'" and note down the Pool ID.
    6. Exit from bastion node
    7. Log in to the jump server and locate the Docker container used to provision the infrastructure.
      1. Run "docker ps" to find the container ID
      2. Run "docker exec -it <container id> bash" to get inside the container
      3. Run "cd terraform-ibm-openshift"
      4. Run "make rhn_username=<your_rhn_username> rhn_password=<your_rhn_password> pool_id=<pool_ID> rhnregister". This will take approximately 10 minutes.
      5. Run "make openshift". This will take approximately 2 hours.
      6. After successful completion of step 5, the OCP cluster should be up and running
      7. To access the cluster, add an entry to your local /etc/hosts file as follows
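    The exact entry was shown as a screenshot in the original recipe and depends on your cluster's hostnames; a hypothetical example (the IP and hostnames below are placeholders):

    ```
    # /etc/hosts on your local machine (illustrative)
    <master_public_ip>   master.<your_domain>
    ```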
  5. Set up users and authentication for your OpenShift cluster

    By default, the above OCP install uses HTPasswd as the identity provider. It also creates a user 'admin' with password 'test123'. In this step we will assign the cluster administrator role to this user.

    1. Log in to the master node
    2. Run "oc login -u system:admin"
    3. Run "oc adm policy add-cluster-role-to-user cluster-admin admin"
    4. Access the OpenShift console at https://master_public_ip:8443/console, log in as user 'admin' with password 'test123', and make sure you can log in and browse the UI without any issues.
    5. You can run through Lesson 4 in https://cloud.ibm.com/docs/terraform?topic=terraform-redhat to ensure everything is working fine.
  6. Preparing for IBM Cloud Pak for Integration installation

    Now that we have the OCP cluster ready with the desired configuration, we need to prepare for the installation of ICP4I.

    Expand the disk storage

    We have provisioned each node with a 100GB boot disk, but the ICP4I install needs ~120GB. To meet this prerequisite, we need to add additional storage. Follow these steps to add a 150GB SAN disk to the master node.

    1. Log in to IBM Cloud Classic infrastructure and resize the master node to add a 150GB SAN disk to it.
    2. Log in to the master node (ssh) and make sure the disk is available by running "lsblk". The output should show the new disk as "xvdc 202:32 0 150G 0 disk".
    3. Perform the disk partitioning as described in https://codingbee.net/rhcsa/rhcsa-creating-partitions:

      • Run "fdisk /dev/xvdc"
      • Add a new primary partition (n, then p, accepting the defaults)
      • Write the partition table to disk and exit (w)
      • Run “lsblk” to ensure output shows as
        • xvdc 202:32 0 150G 0 disk
          └─xvdc1 202:33 0 150G 0 part
    4. Run "mkfs.xfs /dev/xvdc1"
    5. Run "partprobe"
    6. Create a new directory under /var, e.g. /var/app
    7. Mount the disk to the new directory: run "mount -o defaults,noatime /dev/xvdc1 /var/app"

    Now we have added an additional 150GB of disk storage to the master node.
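    Note that the mount from step 7 will not survive a reboot on its own. To make it persistent, an /etc/fstab entry along these lines can be added (device, mount point, and options as used in the steps above):

    ```
    /dev/xvdc1  /var/app  xfs  defaults,noatime  0 0
    ```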

    Prepare the nodes

    Prepare the nodes before starting the installation as follows; refer to https://developer.ibm.com/integration/blog/2019/09/25/installing-ibm-cloud-pak-for-integration-on-ocp-3-11/ for more details:

    1. Label the master node as a compute node: run "sudo kubectl label nodes <OCP master node> node-role.kubernetes.io/compute=true"
    2. On each node, set vm.max_map_count to 1048575 (required if you want to install API Connect)
      • Run "sudo sysctl -w vm.max_map_count=1048575"
      • Run 'echo "vm.max_map_count=1048575" | sudo tee -a /etc/sysctl.conf'
  7. Install IBM Cloud Pak for Integration

    Now we are good to proceed with the installation of ICP4I. Refer to https://developer.ibm.com/integration/blog/2019/09/25/installing-ibm-cloud-pak-for-integration-on-ocp-3-11/ for details on each of the steps below.

    1. Log in to the master node
    2. Run "cd /var/app" and download the IBM Cloud Pak for Integration for OpenShift v2019.3.2 installable from Passport Advantage (PPA) into this directory.
    3. Run "tar xvf <archive_name>", e.g. "tar xvf ibm-cloud-pak-for-integration-x86_64-2019.3.2-for-OpenShift.tar.gz". It will create a folder 'installer_files' and extract the artifacts into it.
    4. Run "oc get sc" and take note of the storage class name
    5. Take note of the subdomain: run "kubectl -n openshift-console get route console -o jsonpath='{.spec.host}' | cut -f 2- -d '.'" and copy the output.
    6. Run "oc get nodes" and take note of the names of the nodes
    7. Run "cd installer_files"
    8. Run "sudo cp /etc/origin/master/admin.kubeconfig cluster/kubeconfig"
    9. Run "cd cluster/images"
    10. Run "tar xf ibm-cloud-private-rhos- -O | sudo docker load"; this will take approximately 30 minutes
    11. Edit the config.yaml file under the installer_files/cluster folder, using the sample config.yaml for reference, and update it as outlined below; make sure the node names match the node names produced by the "oc get nodes" command.
    12. Make sure you are in the installer_files/cluster folder
    13. Run "sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v /var/run:/var/run:z --security-opt label:disable ibmcom/icp-inception-amd64: install-with-openshift". This will take 2 hours to complete, so take a break!
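    The jsonpath-plus-cut pipeline in step 5 simply strips the first label from the console route host to yield the cluster subdomain. It can be sanity-checked locally without a cluster (the route host below is a made-up example):

    ```shell
    # strip_first_label drops everything up to and including the first dot,
    # mirroring the `cut -f 2- -d "."` used in step 5.
    strip_first_label() {
      echo "$1" | cut -f 2- -d "."
    }

    # With a hypothetical console route host:
    strip_first_label "console.apps.mycluster.example.com"
    # -> apps.mycluster.example.com
    ```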
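    The sample config.yaml referenced in step 11 was shown as a screenshot in the original recipe. As a hedged sketch only, an icp-inception config.yaml for this kind of install typically includes entries like the following; the values are placeholders showing where the storage class, subdomain, and node names gathered in steps 4-6 go, and the sample file bundled in installer_files/cluster remains the authoritative reference:

    ```yaml
    # Illustrative excerpt only - start from the bundled sample config.yaml.
    cluster_nodes:
      master:
        - <master node name from "oc get nodes">
      proxy:
        - <master node name from "oc get nodes">
      management:
        - <master node name from "oc get nodes">

    storage_class: <storage class name from "oc get sc">

    openshift:
      console:
        host: https://console.<subdomain from step 5>:8443

    default_admin_password: <choose a password>
    ```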
  8. Verify the installation of IBM Cloud Pak for Integration

    After the installation completes successfully, verify it by logging in to the Cloud Pak console and confirming that the deployed capabilities are available.

  9. Uninstall IBM Cloud Pak for Integration

    To uninstall IBM Cloud Pak for Integration, follow the steps below:

    • Log in to the master node
    • Run "sudo docker run --privileged -ti --net=host -e LICENSE=accept -v $(pwd):/installer/cluster ibmcom/icp-inception-amd64: uninstall-with-openshift"
    • Restart Docker on each node: run "service docker restart"
    • Remove the additional labels applied to the nodes
      • kubectl label node <master-node> node-role.kubernetes.io/icp-master-
      • kubectl label node <proxy-node> node-role.kubernetes.io/icp-proxy-
      • kubectl label node <management-node> node-role.kubernetes.io/icp-management-

    • Restart all nodes in the cluster
  10. Destroy the environment - Cluster as well as VMs

    • Log in to the jump server and get inside the Docker container as outlined in step 4.7 of this recipe
    • Run "make destroy"; this will take approximately 1 hour
  11. Conclusion

    In this recipe we covered:

    1. Provision infrastructure on IBM Cloud
    2. Install an OpenShift cluster on the provisioned infrastructure
    3. Install IBM Cloud Pak for Integration on the OpenShift cluster
    4. Uninstall IBM Cloud Pak for Integration
    5. Destroy the OpenShift cluster and the infrastructure






