Skill Level: Beginner


This recipe covers the installation of IBM Cloud Pak for Multicloud Management 1.2 on OpenShift 3.11 running on IBM Cloud VSIs (VMs), from infrastructure provisioning through deployment of the Cloud Pak and its supporting MCM capabilities.


Prerequisites:

1. Access to IBM Cloud Classic infrastructure

2. Red Hat Network (RHN) ID/password for OpenShift installation

3. Access to the IBM Cloud Pak for Multicloud Management installables


  1. System Requirements

    Refer to https://www.ibm.com/support/knowledgecenter/en/SSFC4F_1.2.0/install/hardware_reqs.html for guidance on system requirements. This recipe creates infrastructure with the following configuration, which is sufficient to run most capabilities of IBM Cloud Pak for Multicloud Management (CP4MCM) for demo/POC purposes.

     Node     Number of nodes  vCPU  Memory (GB)  Disk space                           Remarks
     Bastion  1                4     16           100 GB boot disk                     aka boot node
     Master   1                16    32           100 GB boot disk + 200 GB SAN disk   OpenShift cluster master node
     Infra    1                16    32           100 GB boot disk + 200 GB SAN disk   OpenShift cluster infra/proxy node
     App      3                16    32           100 GB boot disk + 200 GB SAN disk   OpenShift cluster app/compute nodes
     Storage  3                8     32           100 GB boot disk                     GlusterFS storage nodes
  2. Provision Infrastructure

    Follow Lessons 1 and 2 in https://cloud.ibm.com/docs/terraform?topic=terraform-redhat#configure to bring up the nodes on IBM Cloud based on the configuration in Step 1. Make sure of the following:

    1. Use the Docker image for Terraform and the IBM Cloud Provider plug-in v0.21.0 or earlier (ibmterraform/terraform-provider-ibm-docker:v0.21.0), which is compatible with the Terraform scripts used in the above documentation.

    2. Use the variables.tf for reference and customize as appropriate.
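    To give a sense of what gets customized, here is an illustrative variables.tf fragment; the variable names and defaults below are assumptions, so align them with the actual variables.tf referenced by the IBM Cloud Terraform tutorial before use.

```hcl
# Illustrative fragment only -- variable names and defaults are assumptions;
# compare against the variables.tf shipped with the tutorial's Terraform scripts.
variable "datacenter" {
  description = "IBM Cloud Classic data center for the VSIs"
  default     = "dal10"
}

# Sizing for the master node per the table in Step 1 (16 vCPU / 32 GB)
variable "master" {
  type = "map"
  default = {
    nodes  = 1
    cpu    = 16
    memory = 32768
  }
}

# Sizing for the app/compute nodes per the table in Step 1 (3 x 16 vCPU / 32 GB)
variable "app" {
  type = "map"
  default = {
    nodes  = 3
    cpu    = 16
    memory = 32768
  }
}
```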

    Expand storage

    Infrastructure provisioned with the reference variables.tf above gives each node a 100GB boot disk only, so we need to attach an additional 200GB disk for Docker storage on all OpenShift nodes. Resize all nodes (Bastion, Master, Infra, App) to attach the additional 200GB disk, then create a new partition and mount the new disk at the /var/lib/docker directory on each node.
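    The partition-and-mount step can be sketched as below; this is a sketch only, and it assumes the new 200GB disk shows up as /dev/xvdc (verify the device name with lsblk or fdisk -l on your node before running, as root).

```shell
# Assumption: the new 200 GB SAN disk is /dev/xvdc -- confirm with `lsblk` first.
parted -s /dev/xvdc mklabel gpt mkpart primary xfs 0% 100%   # one partition spanning the disk
mkfs.xfs /dev/xvdc1                                          # XFS, suitable for docker overlay2
mkdir -p /var/lib/docker
# Persist the mount across reboots, then mount it
echo '/dev/xvdc1 /var/lib/docker xfs defaults 0 0' >> /etc/fstab
mount /var/lib/docker
df -h /var/lib/docker                                        # confirm the new space is visible
```

    Doing this before installing Docker (or with Docker stopped) avoids having to migrate existing images onto the new disk.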

    Additional considerations

    1. Log in to the Bastion node and validate that the 'manage_etc_hosts' flag is False in the file /etc/cloud/cloud.cfg. If it is True, change it to False, then save and exit.

    2. Verify the Network Manager settings so that the network configuration remains intact after a VM restart:

    Open /etc/sysconfig/network-scripts/ifcfg-eth0 and make sure NM_CONTROLLED=yes and BOOTPROTO=static, then save and exit.

    Open /etc/sysconfig/network-scripts/ifcfg-eth1 and make sure NM_CONTROLLED=yes and BOOTPROTO=static, then save and exit.

    3. Reboot the node.

    Repeat #1, #2, and #3 on all nodes.
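    Assuming passwordless SSH from the Bastion node, the three checks above can be scripted across the cluster; the hostnames below are placeholders to replace with your actual node names, and the sed patterns assume the keys already exist in each file.

```shell
# Placeholder hostnames -- substitute your real node names.
NODES="master01 infra01 app01 app02 app03 storage01 storage02 storage03"

for node in $NODES; do
  ssh "root@${node}" '
    # 1. Stop cloud-init from managing /etc/hosts
    sed -i "s/^manage_etc_hosts:.*/manage_etc_hosts: False/" /etc/cloud/cloud.cfg
    # 2. Keep NetworkManager in control with static addressing on both interfaces
    for ifcfg in /etc/sysconfig/network-scripts/ifcfg-eth0 \
                 /etc/sysconfig/network-scripts/ifcfg-eth1; do
      sed -i "s/^NM_CONTROLLED=.*/NM_CONTROLLED=yes/" "$ifcfg"
      sed -i "s/^BOOTPROTO=.*/BOOTPROTO=static/" "$ifcfg"
    done
    # 3. Reboot to apply
    reboot
  '
done
```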

  3. Deploy Openshift

    Once the infrastructure is provisioned, the Docker storage is expanded, and the other configuration changes are complete, proceed with Lesson 3 of https://cloud.ibm.com/docs/terraform?topic=terraform-redhat#configure to deploy OpenShift Container Platform (OCP) 3.11.

    Once the OpenShift deployment is complete, make sure the cluster is functioning as desired and the OpenShift console is accessible.

  4. Preparing for Installation of CP4MCM

    We will use the master node to kick-start the installation of CP4MCM; the OpenShift CLI and kubectl are already installed there. Ensure the following before starting the CP4MCM installation:

    1. Ensure that the admission webhooks are enabled on the OpenShift Container Platform master node.

    2. Make a note of the provisioned storage class by running 'oc get sc', e.g. glusterfs-storage.

    3. Offline or Online install:

    There are 2 ways to install CP4MCM:

    1. Offline installation: Using the offline archive for installation (downloaded from Passport Advantage)

    2. Online installation: Using the IBM Cloud entitlement registry for installation

    In this recipe we will use the online installation. Obtain your entitlement key from the MyIBM Container Software Library (https://myibm.ibm.com/products-services/containerlibrary).

  5. Installing CP4MCM

    Login to master node and perform the following:

    1. Log in to the entitled registry: docker login cp.icr.io --username ekey --password <entitlement_key>
    Do not change the username (ekey) in the above command; just replace <entitlement_key> with your own key.

    2. Create the docker registry secret: oc create secret docker-registry entitled-registry --docker-server=cp.icr.io --docker-username=ekey --docker-password=<entitlement_key> --docker-email=unused

    3. Pull the installer image from the entitled registry by running the following command:

    docker pull cp.icr.io/cp/icp-foundation/mcm-inception:3.2.3

    4. Create an installation directory on the master node:

    mkdir /opt/ibm-multicloud-manager-1.2 ; cd /opt/ibm-multicloud-manager-1.2

    5. Extract the cluster directory:

    sudo docker run --rm -v $(pwd):/data:z -e LICENSE=accept --security-opt label:disable cp.icr.io/cp/icp-foundation/mcm-inception:3.2.3 cp -r cluster /data

    6. Copy the OpenShift admin.kubeconfig file to the cluster directory. 

    sudo cp /etc/origin/master/admin.kubeconfig /opt/ibm-multicloud-manager-1.2/cluster/kubeconfig

    7. Update the config.yaml file in the /opt/ibm-multicloud-manager-1.2/cluster/ folder:


    7.a Run oc get nodes to fetch the node names to be specified in config.yaml file

    7.b Run oc get sc to fetch the storage class name

    7.c Review the remaining parameters and update them as appropriate
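    As an illustration, the relevant part of config.yaml might look like the fragment below. The node names and storage class are placeholders to be filled from the `oc get nodes` and `oc get sc` output, and the exact keys should be checked against the file extracted in step 5.

```yaml
# Illustrative fragment -- verify key names against the extracted config.yaml.
cluster_nodes:
  master:
    - master01.example.com      # placeholder; use a name from `oc get nodes`
  proxy:
    - infra01.example.com
  management:
    - infra01.example.com

storage_class: glusterfs-storage  # placeholder; use the name from `oc get sc`

default_admin_user: admin
default_admin_password: <your_admin_password>
```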

    8. Deploy the Cloud Pak by running the following command from the cluster directory:

    docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v /var/run:/var/run:z -v /etc/docker:/etc/docker:z --security-opt label:disable cp.icr.io/cp/icp-foundation/mcm-inception:3.2.3 install-with-openshift

  6. Verify the installation

    After the installation is complete, the MCM hub can be accessed at the https://<Cluster Master Host>:<Cluster Master API Port> URL. The values of <Cluster Master Host> and <Cluster Master API Port> are defined in the ibmcloud-cluster-info ConfigMap in the kube-public namespace. The username and password for logging in to the MCM hub are as defined in the config.yaml file.
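    The host and port can be read directly from that ConfigMap; the data keys used below (cluster_address, cluster_router_https_port) are assumptions based on this ConfigMap's conventional layout, so inspect the full output first to confirm them on your cluster.

```shell
# Inspect the ConfigMap to see the available keys
oc -n kube-public get configmap ibmcloud-cluster-info -o yaml

# Extract host and port, assuming the conventional key names
HOST=$(oc -n kube-public get configmap ibmcloud-cluster-info \
  -o jsonpath='{.data.cluster_address}')
PORT=$(oc -n kube-public get configmap ibmcloud-cluster-info \
  -o jsonpath='{.data.cluster_router_https_port}')
echo "MCM hub console: https://${HOST}:${PORT}"
```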

    For more details, refer to https://www.ibm.com/support/knowledgecenter/SSFC4F_1.2.0/installer/3.2.2/cluster_endpoints.html#master


  7. Prepare for Cloud Automation Manager (CAM) installation

    Cloud Automation Manager in IBM Multicloud Manager is used to create and edit templates and services that implement common business patterns and to deploy them in your cloud environment. After they are deployed, you can manage and access the instances from the Cloud Automation Manager user interface. Prepare for CAM installation as mentioned below:

    • Install the following CLI tools from https://<MCM console url>/common-nav/cli on any system of your choice:
      • IBM Cloud Pak CLI (cloudctl)
      • Helm CLI: CAM installation requires installing the respective Helm chart, so install the Helm CLI if you wish to install the chart through the CLI
      • Kubernetes CLI (optional)
    • Ensure the ibm-charts (URL: https://raw.githubusercontent.com/IBM/charts/master/repo/stable/)
      and local-charts (URL: https://<your mcm cluster host>:443/helm-repo/charts) repositories are available in the list of Helm repositories, and sync the Helm repositories.
    • We will install CAM in the services namespace, so create a Docker image pull secret in the services namespace for pulling images from the IBM Entitled Registry.
      • oc create secret docker-registry entitled-registry --docker-server=cp.icr.io --docker-username=ekey --docker-password=<entitlement_key> --docker-email=unused -n services
    • Add default pod security policy
      • oc adm policy add-scc-to-user ibm-anyuid-hostpath-scc system:serviceaccount:services:default
    • Generate a deployment ServiceID API Key from the system where Cloud Pak CLI is installed
      • export serviceIDName='service-deploy'
        export serviceApiKeyName='service-deploy-api-key'
        cloudctl login -a <ibm_cloud_pak_mcm_console_URL> --skip-ssl-validation -u <ibm_cloud_pak_mcm_admin_id> -p <ibm_cloud_pak_mcm_admin_password> -n services
        cloudctl iam service-id-create ${serviceIDName} -d 'Service ID for service-deploy'
        cloudctl iam service-policy-create ${serviceIDName} -r Administrator,ClusterAdministrator --service-name 'idmgmt'
        cloudctl iam service-policy-create ${serviceIDName} -r Administrator,ClusterAdministrator --service-name 'identity'
        cloudctl iam service-api-key-create ${serviceApiKeyName} ${serviceIDName} -d 'Api key for service-deploy'
      • Save the output of the service-api-key-create command, as it will be used in the Helm chart configuration.
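    If you prefer the Helm CLI, the two repositories mentioned above can be added there as well. This is a sketch: the repository names are local aliases, the host is a placeholder, and the certificate file paths shown are typical Helm 2 defaults that may differ on your system.

```shell
# Add the public IBM charts repository
helm repo add ibm-charts https://raw.githubusercontent.com/IBM/charts/master/repo/stable/

# Add the cluster-local charts repository; replace the host placeholder.
# Cert paths are assumptions based on typical Helm 2 setups against ICP-era clusters.
helm repo add local-charts https://<your mcm cluster host>:443/helm-repo/charts \
  --ca-file ~/.helm/ca.pem --cert-file ~/.helm/cert.pem --key-file ~/.helm/key.pem

# Refresh the local repository index
helm repo update
```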
  8. Install Cloud Automation Manager Helm chart

    • Login to MCM console and go to Catalog
    • Search for CAM and select ibm-cam


    • Click on Configure
    • Specify the release name, target namespace, and target cluster


    • In the Quick start section, specify the name of the entitlement registry secret and the output of the service-api-key-create command created in the prerequisites section


    • Based on whether you want to enable persistence for the various components, specify the storage class or persistent volume claim (PVC) for storageconfig_sc
    • Review the other configuration values and optionally update them as appropriate; refer to the docs for more details
    • Click Install and wait some time for the deployment to be ready
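    The same install can alternatively be sketched from the Helm CLI. The value names below (global.iam.deployApiKey, global.image.secretName) are assumptions about the ibm-cam chart's values, so confirm them against the chart version in your catalog before running.

```shell
# Sketch: install the ibm-cam chart into the services namespace via the Helm CLI.
# Verify the value names first with:
#   helm inspect values ibm-charts/ibm-cam --tls
helm install ibm-charts/ibm-cam \
  --name cam \
  --namespace services \
  --set global.iam.deployApiKey=<service_api_key> \
  --set global.image.secretName=entitled-registry \
  --tls
```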
  9. Verify CAM install

    You can monitor the status of the pods with the following command:
    kubectl get -n services pods

    Once all the pods are in the Running state, go to the Helm releases page and locate the cam Helm release.


    Click on the CAM Helm release, then click the Launch button to launch the CAM UI.


    CAM installation is complete and the UI should launch successfully.



  10. Conclusion

    In this recipe we learned how to bring up IBM Cloud Pak for Multicloud Management 1.2 on OpenShift 3.11 on IBM Cloud Classic infrastructure, and how to deploy Cloud Automation Manager (CAM) into the MCM environment.



