[UPDATE]: IBM Cloud Pak for Integration (CP4I) 2019.4.1, which is the latest release, is not supported on OCP 3.11; CP4I 2019.4.1 requires OCP 4.2. It is recommended to use the latest release of CP4I.
[UPDATE]: The article has been updated to cover installing ICP4I 2019.3.2 on OCP 3.11.

In this blog post, I will explain the steps involved in installing IBM Cloud Pak for Integration version 2019.3.1 / 2019.3.2 on OpenShift 3.11.

Storage: Appropriate storage must be configured. For example, for an on-prem installation, Ceph RBD is recommended for API Connect, NFS or GlusterFS for ACE and MQ, and GlusterFS for the Asset Repository. Similarly, for a cloud-native installation, configure appropriate storage.

Note: The Red Hat OpenShift version you are using must be 3.11 if you are installing IBM Cloud Pak for Integration version 2019.3.1 or 2019.3.2. The required IBM Cloud Private version differs between 2019.3.1 and 2019.3.2; use the version bundled with your installer.

We will perform the ICP4I installation from the OCP master node. If you want to install from a client machine outside the OCP cluster, make sure the oc and kubectl clients are configured.
Log in to the OCP master as a user with OCP cluster-admin authority and root-level access.

Log in to OCP using the below command:

oc login <OpenshiftURL> -u <username> -p <password>

For example, <OpenshiftURL> might be https://prod-master.fyre.ibm.com:8443

Log in to the OCP docker registry using the below command:

docker login <docker registry url> -u $(oc whoami) -p $(oc whoami -t)

By default, the docker registry URL is docker-registry.default.svc:5000.

1) Download the IBM Cloud Pak for Integration (OpenShift) installable from Passport Advantage (PPA)
Part number: CC40AEN

2) Extract the installable
Extract the downloaded archive. It will create a folder 'installer_files' and extract the artifacts inside it.
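The exact archive file name depends on the release you downloaded, so the sketch below builds a stand-in archive purely to illustrate the extraction step; substitute the real file name from PPA (part CC40AEN).

```shell
# Illustration only: build a stand-in archive so the extraction step can be
# shown end to end. "ppa-archive.tar.gz" is a hypothetical name; replace it
# with the real file downloaded from Passport Advantage (part CC40AEN).
cd "$(mktemp -d)"
mkdir -p installer_files/cluster
tar -czf ppa-archive.tar.gz installer_files
rm -rf installer_files

# This is the actual extraction step: it recreates the installer_files folder.
tar -xvf ppa-archive.tar.gz
ls -d installer_files
```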

3) Label OpenShift master node as compute
If you want to deploy IBM Cloud Private components on any OpenShift master or infrastructure node, the OpenShift master node(s) must be labeled as compute. You can list the OCP nodes by running the command:

oc get nodes

Run the below command to label an OCP master node as a compute node:

sudo kubectl label nodes <OCP Master node> node-role.kubernetes.io/compute=true

Now run "oc get nodes" again to verify that the OpenShift master node(s) have been labeled as compute.

4) Set vm.max_map_count
On each storage node, set vm.max_map_count to 1048575. This is the minimum required value if you want to install API Connect; otherwise, the minimum required value is 262144.

sudo sysctl -w vm.max_map_count=1048575
echo "vm.max_map_count=1048575" | sudo tee -a /etc/sysctl.conf

Use the below command to verify:

sudo sysctl -a | grep vm.max_map_count

You can also check by inspecting /etc/sysctl.conf.
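The required minimum depends on whether API Connect is part of your installation; a small sketch of that decision (INSTALL_APIC is a hypothetical flag used only for this illustration):

```shell
# Pick the minimum vm.max_map_count based on whether API Connect is planned.
# INSTALL_APIC is a hypothetical flag, not part of the installer.
INSTALL_APIC=true
if [ "$INSTALL_APIC" = true ]; then
  REQUIRED=1048575   # minimum when API Connect is part of the installation
else
  REQUIRED=262144    # minimum for the other capabilities
fi
echo "vm.max_map_count must be at least $REQUIRED"
```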

5) Patch the storage class
This step is optional: perform it only if you want to make your storage class the default.

Check the storage class name by running the command:

oc get sc

For example, if you have configured GlusterFS with the storage class name 'glusterfs-storage', it will return this name. Run the below command to make this storage class the default:

kubectl patch storageclass <Storage Class Name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Now run “oc get sc” command to verify.

Note down this storage class name as it will be used while configuring config.yaml for installing ICP4I.

6) Verify subdomain
Get the subdomain configured in OCP cluster. Run the below command:

kubectl -n openshift-console get route console -o jsonpath='{.spec.host}'| cut -f 2- -d "."

Note down this subdomain value as it will be used while configuring config.yaml for installing ICP4I.
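As a sketch of what that pipeline does, assuming the console route host were console.apps.example.com (a hypothetical value), the cut command strips the first label and keeps the subdomain:

```shell
# Stand-in for the output of:
#   kubectl -n openshift-console get route console -o jsonpath='{.spec.host}'
ROUTE_HOST="console.apps.example.com"   # hypothetical route host
# cut -f 2- -d "." drops everything up to the first dot, leaving the subdomain.
SUBDOMAIN=$(echo "$ROUTE_HOST" | cut -f 2- -d ".")
echo "$SUBDOMAIN"   # apps.example.com
```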

7) Configure config.yaml
Configure the config.yaml file for the ICP4I installation. Here we will use dedicated OCP compute nodes for the ICP master, proxy and management nodes. Attached are sample config.yaml files.

Sample config.yaml for 2019.3.1:

Sample config.yaml for 2019.3.2:


Download and unzip to see the sample config.yaml file. Notice the below sections in this yaml file and update them accordingly in your config.yaml:

# A list of OpenShift nodes that used to run ICP components
- prod-worker1-6.fyre.ibm.com
- prod-worker1-7.fyre.ibm.com
- prod-worker1-7.fyre.ibm.com

Here, master is the OCP compute node that you have designated for the ICP master, proxy is the node designated for the ICP proxy, and management is the node designated for ICP management.

Specify the storage class name:

storage_class: glusterfs-storage

Update below section:
host: prod-master.fyre.ibm.com
port: 8443
cluster_host: icp-console.
proxy_host: icp-proxy.

In the 2019.3.2 yaml file, a yaml anchor has been added that allows us to refer to this value later when configuring the IBM Cloud Pak for Integration Platform Navigator. So the value of proxy_host is:

proxy_host: &proxy icp-proxy.

Here, the console host is the OCP master node and the port is the OCP console port (default is 8443). The router cluster_host is icp-console.<subdomain that you noted down at step 6>, and the router proxy_host is icp-proxy.<subdomain that you noted down at step 6>.
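A quick sketch of how those two router hosts are composed from the subdomain noted at step 6 (apps.example.com is a hypothetical subdomain):

```shell
SUBDOMAIN="apps.example.com"             # hypothetical; use the value from step 6
CLUSTER_HOST="icp-console.${SUBDOMAIN}"  # router cluster_host (ICP console)
PROXY_HOST="icp-proxy.${SUBDOMAIN}"      # router proxy_host (ingress proxy)
echo "$CLUSTER_HOST"
echo "$PROXY_HOST"
```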

Set the password for ‘admin’ user:

default_admin_password: admin

You can set the password rule by adding the below section:

password_rules:
  - '(.*)'

In the archive-addons section for icp4i, update the hostname value for ibm-icp4i-prod to the ICP proxy hostname. In the sample config.yaml for 2019.3.2 it has been left at the default, i.e.
hostname: *proxy

- name: ibm-icp4i-prod
  pullSecretValue: image.pullSecret
  pullSecret: sa-integration
  hostname: icp-proxy. # hostname of the ingress proxy to be configured
  generate: true
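Pulling the pieces above together, a minimal sketch of the values discussed in this step (node names, subdomain and storage class are hypothetical; the field layout follows the 2019.3.2 sample, so verify it against the sample config.yaml you downloaded):

```yaml
# Sketch only - verify field names and nesting against the downloaded sample.
storage_class: glusterfs-storage            # from step 5

host: prod-master.fyre.ibm.com              # OCP master node
port: 8443                                  # OCP console port
cluster_host: icp-console.apps.example.com  # icp-console.<subdomain from step 6>
proxy_host: &proxy icp-proxy.apps.example.com

default_admin_password: admin
password_rules:
  - '(.*)'

# archive-addons entry for the Platform Navigator chart:
- name: ibm-icp4i-prod
  pullSecretValue: image.pullSecret
  pullSecret: sa-integration
  hostname: *proxy   # resolves to the proxy_host value via the yaml anchor
  generate: true
```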

8) Extract the images
Run the below command from the installer_files directory:

sudo cp /etc/origin/master/admin.kubeconfig cluster/kubeconfig

Now extract the images and load them into docker.
Go to the installer_files/cluster/images folder and run the below command:

For 2019.3.1:

tar xf ibm-cloud-private-rhos-<version>.tar.gz -O | sudo docker load

For 2019.3.2:

tar xf ibm-cloud-private-rhos-<version>.tar.gz -O | sudo docker load

Use the exact archive file name present in the images folder. It will take time to extract and load the images into docker.

9) Install ICP4I
Install ICP4I using the below command from the /cluster directory, using the icp-inception-amd64 image tag loaded in the previous step:

For 2019.3.1:

sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v /var/run:/var/run:z --security-opt label:disable ibmcom/icp-inception-amd64:<tag> install-with-openshift

For 2019.3.2:

sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v /var/run:/var/run:z --security-opt label:disable ibmcom/icp-inception-amd64:<tag> install-with-openshift

Installation will take some time.

10) Verify Installation
After the installation completes successfully, verify it.
Log in to the ICP console using the cluster_host URL from your config.yaml:

https://icp-console.<subdomain that you noted down at step 6>

Log in to the Platform Navigator using the /integration path on the hostname configured for the ibm-icp4i-prod chart:

https://icp-proxy.<subdomain that you noted down at step 6>/integration
11 comments on "Installing IBM Cloud Pak for Integration on OCP 3.11"

  1. I installed OpenShift 3.11 on AWS with an all-in-one node setup. I then followed your instructions to install ICP4I.

    When I ran the install command, it hung at mongodb section. Any idea? thanks.

    TASK [waitfor : include_tasks] *************************************************
    task path: /installer/playbook/roles/waitfor/tasks/main.yaml:8
    included: /installer/playbook/roles/waitfor/tasks/mongodb.yaml for localhost

    TASK [waitfor : Waiting for MongoDB to start] **********************************
    task path: /installer/playbook/roles/waitfor/tasks/mongodb.yaml:8
    EXEC /bin/bash -c '( umask 77 && mkdir -p "` echo /tmp/ansible-tmp-1573330913.79-145177559719213 `" && echo ansible-tmp-1573330913.79-145177559719213="` echo /tmp/ansible-tmp-1573330913.79-145177559719213 `" ) && sleep 0'
    Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
    PUT /root/.ansible/tmp/ansible-local-25oYcOHK/tmpQ5Pj7k TO /tmp/ansible-tmp-1573330913.79-145177559719213/command.py
    EXEC /bin/bash -c 'chmod u+x /tmp/ansible-tmp-1573330913.79-145177559719213/ /tmp/ansible-tmp-1573330913.79-145177559719213/command.py && sleep 0'
    EXEC /bin/bash -c '/usr/bin/python2 /tmp/ansible-tmp-1573330913.79-145177559719213/command.py && sleep 0'
    EXEC /bin/bash -c 'rm -f -r /tmp/ansible-tmp-1573330913.79-145177559719213/ > /dev/null 2>&1 && sleep 0'
    FAILED - RETRYING: Waiting for MongoDB to start (100 retries left).
    FAILED - RETRYING: Waiting for MongoDB to start (99 retries left).

  2. Hina Purohit November 07, 2019

    Hi Anand,
    Thanks a lot for this article.
    We have deployed cloudpak for Integration Platform navigator. However, we are facing an issue :
    after we open the UI https://hostname/integration. The UI opens for a second and then suddenly we get a Network Authentication Required error. Would you be able to suggest any resolution for the issue?

  3. Hi Anand, I am trying to install ICP4I on OCP 3.11 using the 2019.3.1 package. But when I run the installation, it gets stuck at the TASK [rbac-config : Creating rbac roles] step. The error message is: error: Couldn't get available api versions from server: Get https://ocp:8443/api?timeout=32s: x509: certificate signed by unknown authority.

    Any idea about this?

    Thank you.

  4. Hi Anand.. Great article... The steps and details are what you need to deploy on OCP v3, but what happens when you have OCP v4.1? The new version doesn't use docker as the registry; they use/recommend Red Hat CoreOS. Why don't they release this Cloud Pak for OCP v4 too?

    What are the benefits of deploying IBM Cloud Pak on OCP nodes? Can I manage the ACE instances and the MQ instances from the OCP console? Or do I just log in to the mini IBM Cloud Private with the integration navigator?

    I will try to install on OCP v4 🙂


  5. Anand

    I just completed the installation of 3.2 and there seems to be an issue with the part number CC40AEN I downloaded from the Software Access Catalog.

    Basically two errors.

    One error is within the installable docker image, which we have no visibility into unless we exec into the docker image and go through the installer script

    FATAL Error:

    fatal: [localhost]: FAILED! =>
    msg: |-
    The task includes an option with an undefined variable. The error was: 'charts_packages' is undefined

    The error appears to have been in '/installer/playbook/roles/chart-upload/tasks/main.yaml': line 27, column 3, but may
    be elsewhere in the file depending on the exact syntax problem.

    The offending line appears to be:

    - name: Uploading chart packages to helm repo
    ^ here

    And we believe that this is causing the bad gateway error:

    fatal: [localhost]: FAILED! => changed=true
    attempts: 5
    cmd: /usr/local/platform-api/cloudctl catalog load-archive --archive "/installer/cluster/icp4icontent/IBM-App-Connect-Enterprise-for-IBM-Cloud-Pak-for-Integration-2.1.0.tgz" --registry docker-registry.default.svc:5000/ace --repo "local-charts"
    delta: '0:02:12.407059'
    end: '2019-10-21 14:34:27.785450'
    msg: non-zero return code
    rc: 1
    start: '2019-10-21 14:32:15.378391'
    stderr: ''
    stdout: |-
    Expanding archive

    Returned status 502 Bad Gateway, error:
    502 Bad Gateway

    502 Bad Gateway


    Any help is appreciated

  6. Anand,

    THanks for this article. It helps.

    Couple of questions:

    1. If we are installing all the ICP and Platform Navigator services on worker nodes, why do we need to label the master node as a "compute" node? And I was told that the best practice is to not have any additional services deployed on the OCP master node, and that all the ICP/Platform Navigator load should be on the worker nodes.

    2. Under the ICP4I charts, you mentioned hostname: prod-worker1-7.fyre.ibm.com # hostname of the ingress proxy to be configured - Shouldn't this be the ICP proxy you configured? icp-proxy. Or proxy

  7. Great article!!, few questions:
    If I have 3 masters, 3 infra and 3 compute nodes in my OCP cluster, what would be the values for the below? In this context, what is then the point of having such an OCP cluster for HA, with a load balancer on top of the masters (say lb1.master.nip.io) and another load balancer (say wildcard dns *.) on top of the infra nodes?


    – prod-worker1-6.fyre.ibm.com


    – prod-worker1-7.fyre.ibm.com


    – prod-worker1-7.fyre.ibm.com

    Secondly, in the charts, you mention “hostname: prod-worker1-7.fyre.ibm.com # hostname of the ingress proxy to be configured”, which I assume is the proxy ingress for the navigator chart; in the context of multiple master and infra nodes with their respective load balancers (as explained above), what would then be the value of hostname?

    • AnandAwasthi October 21, 2019

      Hi Abu,
      You can use your OpenShift master and infrastructure nodes here, or install these components to dedicated OpenShift compute nodes. You can specify more than one node for each type to build a high availability cluster. Use the command oc get nodes to obtain these values.
      Please refer to https://www.ibm.com/support/knowledgecenter/SSGT7J_19.3/install/install_red_hat.html for more details
      Under icp4i chart, hostname is the hostname of the ingress proxy to be configured. In this case it should be “icp-proxy.”. I have corrected this value, thanks for pointing this out.

