[UPDATE]: The article has been updated for installing ICP4I 2019.3.2 on OCP 3.11

In this blog post, I will explain the steps involved in installing IBM Cloud Pak for Integration (ICP4I) version 2019.3.1 / 2019.3.2 on OpenShift 3.11.

Assumption:
Storage: Appropriate storage has been configured. For example, for an on-premises installation, Ceph RBD is recommended for API Connect, NFS is recommended for ACE and MQ, and GlusterFS is recommended for Asset Repository. Similarly, for a cloud-native installation, configure appropriate storage.

Note: The Red Hat OpenShift version you are using must be 3.11 if you are installing IBM Cloud Pak for Integration version 2019.3.1 or 2019.3.2. The IBM Cloud Private version you are using must be 3.2.0.1906 for 2019.3.1 and 3.2.0.1907 for 2019.3.2.

We will perform the ICP4I installation from the OCP master node. If you want to run it from a client machine outside the OCP cluster, make sure the oc and kubectl clients are configured.
Log in to the OCP master as a user with OCP cluster-admin authority and root-level access.

Log in to OCP using the command below:

oc login <OpenshiftURL> -u <username> -p <password>
For example, OpenshiftURL could be https://prod-master.fyre.ibm.com:8443

Log in to the OCP Docker registry using the command below:
docker login -u $(oc whoami) -p $(oc whoami -t)

By default, the Docker registry URL is:
docker-registry.default.svc:5000
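Images pushed to this internal registry are addressed as <registry>/<project>/<image>. A minimal sketch of building such a reference (the project and image names here are hypothetical, for illustration only):

```shell
# Build the full image reference for the internal OCP registry.
# The project and image names below are examples only.
REGISTRY="docker-registry.default.svc:5000"
PROJECT="my-project"
IMAGE="my-image:latest"
echo "${REGISTRY}/${PROJECT}/${IMAGE}"
# prints docker-registry.default.svc:5000/my-project/my-image:latest
```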

1) Download the IBM Cloud Pak for Integration (OpenShift) package from Passport Advantage (PPA)
File name for 2019.3.1: IBM_CLOUD_PAK_FOR_INTEGRATION_201.tar.gz
File name for 2019.3.2: ibm-cloud-pak-for-integration-x86_64-2019.3.2-for-OpenShift.tar.gz

2) Extract the package
Extract the files using the command below:

For 2019.3.1:

tar xvf IBM_CLOUD_PAK_FOR_INTEGRATION_201.tar.gz
For 2019.3.2:

tar xvf ibm-cloud-pak-for-integration-x86_64-2019.3.2-for-OpenShift.tar.gz

It will create a folder ‘installer_files’ and extract the artifacts into it.

3) Label OpenShift master nodes as compute
OpenShift master node(s) should be labeled as compute. You can list the OCP nodes by running the command:

oc get nodes

Run the command below to label an OCP master node as a compute node:

sudo kubectl label nodes <OCP Master node> node-role.kubernetes.io/compute=true

Now run “oc get nodes” to verify that the OpenShift master node(s) have been labeled as compute.
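If you have more than one master node, the labeling can be scripted. Below is a dry-run sketch (the node names are examples; remove the echo to actually apply the labels):

```shell
# Print the label command for each master node (dry run).
# Node names below are examples; drop 'echo' to run the commands for real.
for node in prod-master1.fyre.ibm.com prod-master2.fyre.ibm.com; do
  echo "kubectl label nodes ${node} node-role.kubernetes.io/compute=true"
done
```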

4) Set vm.max_map_count
On each node, set vm.max_map_count to 1048575. This is the minimum required value if you want to install API Connect; otherwise the minimum required value is 262144.

sudo sysctl -w vm.max_map_count=1048575
echo "vm.max_map_count=1048575" | sudo tee -a /etc/sysctl.conf
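Note that tee -a appends a fresh line every time it runs, so repeating the step duplicates the entry. An idempotent variant, sketched here against a scratch file standing in for /etc/sysctl.conf:

```shell
# Append vm.max_map_count only if it is not already set (idempotent).
# Demonstrated on a temporary file standing in for /etc/sysctl.conf.
CONF=$(mktemp)
echo "vm.swappiness=10" > "$CONF"   # pre-existing content (example)
grep -q '^vm.max_map_count' "$CONF" || echo 'vm.max_map_count=1048575' >> "$CONF"
grep -q '^vm.max_map_count' "$CONF" || echo 'vm.max_map_count=1048575' >> "$CONF"   # second run changes nothing
grep -c '^vm.max_map_count' "$CONF"   # prints 1
rm -f "$CONF"
```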

Use the command below to verify:

sudo sysctl -a | grep vm.max_map_count

You can also check by opening /etc/sysctl.conf (for example with “vi /etc/sysctl.conf”).

5) Patch Storage Class
If you want to make a storage class the default, do it in this step; otherwise you can omit it.

Check the storage class name by running the command:

oc get sc

For example, if you have configured GlusterFS with the storage class name ‘glusterfs-storage’, it will return this name. Run the command below to make this storage class the default:

kubectl patch storageclass <Storage Class Name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Now run “oc get sc” command to verify.

Note down this storage class name as it will be used while configuring config.yaml for installing ICP4I.
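After the patch, the StorageClass object carries the default-class annotation. Roughly what the patched object looks like (the name follows the glusterfs example above; the provisioner line depends on your storage backend and is shown here only as an illustration):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs   # depends on your storage backend
```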

6) Verify subdomain
Get the subdomain configured in the OCP cluster by running the command below:

kubectl -n openshift-console get route console -o jsonpath='{.spec.host}'| cut -f 2- -d "."

Note down this subdomain value as it will be used while configuring config.yaml for installing ICP4I.
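The cut -f 2- -d "." simply drops the first dot-separated label from the console route host. For example, with a hypothetical console host on the nip.io subdomain used elsewhere in this article:

```shell
# Strip the first label ("console") to get the cluster subdomain.
echo "console.9.204.169.137.nip.io" | cut -f 2- -d "."
# prints 9.204.169.137.nip.io
```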

7) Configure config.yaml
Configure the config.yaml file for the ICP4I installation. Here we will use OCP dedicated compute nodes for the ICP4I master, proxy and management nodes. Sample config.yaml files are attached.

Sample config.yaml for 2019.3.1:
config.yaml

Sample config.yaml for 2019.3.2:

config.yaml

Download and unzip to see the sample config.yaml file. Note the sections below in this yaml file and update your config.yaml accordingly:

# A list of OpenShift nodes that are used to run ICP components
cluster_nodes:
  master:
  - prod-worker1-6.fyre.ibm.com
  proxy:
  - prod-worker1-7.fyre.ibm.com
  management:
  - prod-worker1-7.fyre.ibm.com

Here, master is the OCP compute node you have designated for the ICP master, proxy is the node designated for the ICP proxy, and management is the node designated for ICP management.

Specify the storage class name:

storage_class: glusterfs-storage

Update the section below:

openshift:
  console:
    host: prod-master.fyre.ibm.com
    port: 8443
  router:
    cluster_host: icp-console.9.204.169.137.nip.io
    proxy_host: icp-proxy.9.204.169.137.nip.io

In the 2019.3.2 yaml file, a yaml anchor has been added that allows us to refer to this value later when configuring the IBM Cloud Pak for Integration Platform Navigator. So the value of proxy_host is:

proxy_host: &proxy icp-proxy.9.204.169.177.nip.io

Here the console host will be the OCP master node and port will be the OCP console port (default is 8443). Router cluster_host is icp-console.<subdomain that you noted down at step 6>. Router proxy_host is icp-proxy.<subdomain that you noted down at step 6>.
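In other words, both route hosts are derived from the subdomain noted at step 6. A small sketch, using the sample subdomain from this article:

```shell
# Derive the ICP console and proxy hosts from the cluster subdomain.
SUBDOMAIN="9.204.169.137.nip.io"   # value noted at step 6
echo "cluster_host: icp-console.${SUBDOMAIN}"
echo "proxy_host: icp-proxy.${SUBDOMAIN}"
```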

Set the password for the ‘admin’ user:

default_admin_password: admin

You can set the password rule by adding the section below:

password_rules:
- '(.*)'

In the archive-addons section for icp4i, update the hostname value for ibm-icp4i-prod to the ICP proxy hostname. In the sample config.yaml for 2019.3.2 it has been left at the default, i.e.

hostname: *proxy

charts:
- name: ibm-icp4i-prod
  pullSecretValue: image.pullSecret
  values:
    image:
      pullSecret: sa-integration
    tls:
      hostname: prod-worker1-7.fyre.ibm.com # hostname of the ingress proxy to be configured
      generate: true

8) Extract the images
Run the command below from the installer_files directory:

sudo cp /etc/origin/master/admin.kubeconfig cluster/kubeconfig

Now extract the images and load them into Docker.
Go to the installer_files/cluster/images folder and run the command below:

For 2019.3.1:

tar xf ibm-cloud-private-rhos-3.2.0.1906.tar.gz -O | sudo docker load

For 2019.3.2:

tar xf ibm-cloud-private-rhos-3.2.0.1907.tar.gz -O | sudo docker load

It will take time to extract and load the images into Docker.

9) Install ICP4I
Install ICP4I using the command below from the installer_files/cluster directory:

For 2019.3.1:

sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v /var/run:/var/run:z --security-opt label:disable ibmcom/icp-inception-amd64:3.2.0.1906-rhel-ee install-with-openshift

For 2019.3.2:

sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v /var/run:/var/run:z --security-opt label:disable ibmcom/icp-inception-amd64:3.2.0.1907-rhel-ee install-with-openshift

Installation will take some time.

10) Verify Installation
After the installation has completed successfully, verify it.
Log in to the ICP console using the URL:

https://<cluster_host>/console

Log in to the Platform Navigator using the URL below:

https://<cluster_proxy>/integration

1 comment on "Installing IBM Cloud Pak for Integration on OCP 3.11"

  1. Anand,

    Thanks for this article. It helps.

    Couple of questions:

    1. If we are installing all the ICP and Platform Navigator services on worker nodes, why do we need to label the master node as a “compute” node? And I was told that the best practice is to not have any additional services deployed on the OCP master node, and that all the ICP/Platform Navigator load should be on the worker nodes.

    2. Under ICP4I charts, you mentioned hostname: prod-worker1-7.fyre.ibm.com # hostname of the ingress proxy to be configured – Shouldn’t this be the ICP proxy you configured? icp-proxy.9.204.169.177.nip.io?? Or proxy?
