Overview

Skill Level: Intermediate

In this recipe we will learn how to install IBM Cloud Pak for Integration (CP4I) 2019.4 on OpenShift Container Platform 4.2.

Ingredients

Below are the prerequisites for installing IBM Cloud Pak for Integration 2019.4:

https://www.ibm.com/support/knowledgecenter/SSGT7J_19.4/install/sysreqs.html

1) Red Hat OpenShift Container Platform 4.2 on Linux® 64-bit

2) The CP4I common services and the different integration capabilities have specific file system and storage requirements. File storage with 'RWO + RWX' access modes and block storage with the RWO access mode are required. OpenShift Container Storage (OCS), which is backed by Ceph, can be deployed to provide both of these types of storage. You can follow the article below to deploy OCS on OpenShift 4.2. This recipe assumes that both types of storage, File (RWO + RWX) and Block (RWO), are available and that the respective storage classes have been configured on OCP.

https://blog.openshift.com/deploying-your-storage-backend-using-openshift-container-storage-4/

3) In addition to the OCP master and worker nodes, an infrastructure node with a public IP address has been provisioned. This node has access to the OCP cluster nodes and allows the deployed services to be accessed from outside the cluster. We will use this node as a jump box, and you should have root-level access on it.

4) Determine the size of your cluster keeping in mind:

    - The workload size you expect to run
    - The integration capabilities that you expect to run in High Availability or Single instance mode
    - The Common Services, Asset Repository and Operations Dashboard requirements
    - Scalability requirements

Note that this recipe only provides guidance for deploying CP4I 2019.4 on OCP 4.2. It does not cover the aspects of deploying the platform in a production environment.

Step-by-step

  1. Validate prerequisites and OCP cluster

    Log in to the infra node (or boot node, as the case may be) and check whether the oc tool is installed. If it is not installed, follow the steps below:

    In the OCP console, click on ‘Command line tools’.

    Click on ‘Download oc’

    After downloading the file ‘oc.tar.gz’, extract it using the commands below, give it the appropriate permissions, and move it to the /usr/bin directory:

    tar xzvf oc.tar.gz
    chmod 755 oc
    mv oc /usr/bin
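
    You can optionally confirm that the client is installed and on the PATH (the reported version string will vary with your download):

    oc version --client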

    Now log in to the OCP cluster using the oc tool.

    oc login --server=<OCP api server> -u <ocp admin user> -p <password>

    For example: oc login --server=https://api.prod3.os.fyre.ibm.com:6443 -u admin -p admin

    You may also log in using a login command with a generated token. To get the login command, log in to the OCP console and click on ‘Copy login command’.

    Then click on ‘Display Token’, copy the login command with the token, and run it to log in to the OCP cluster.

    By default, the OpenShift Container Platform registry is secured during cluster installation so that it serves traffic through TLS. Unlike previous versions of OpenShift Container Platform, the registry is not exposed outside of the cluster at the time of installation.

    Instead of logging in to the OpenShift Container Platform registry from within the cluster, you can gain external access to it by exposing it with a route. This allows you to log in to the registry from outside the cluster using the route address, and to tag and push images using the route host.

    Run the below command on a single line to expose the OCP registry:

    oc patch configs.imageregistry.operator.openshift.io/cluster 
    --patch '{"spec":{"defaultRoute":true}}' --type=merge

    Use the below command to get the OCP registry route:

    oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'

    Use the below command to check that the File and Block storage classes are available to use:

    oc get sc
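
    With OCS deployed, you might see storage classes similar to the following. The names and provisioners below are only illustrative; yours will depend on how the storage backend was configured (the config.yaml example later in this recipe uses rook-ceph-cephfs-internal):

    NAME                        PROVISIONER                     AGE
    rook-ceph-block-internal    rook-ceph.rbd.csi.ceph.com      5d
    rook-ceph-cephfs-internal   rook-ceph.cephfs.csi.ceph.com   5d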

    Run the below command to verify that all OCP nodes are in the ‘Ready’ state:

    oc get nodes
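
    All nodes should show a STATUS of Ready. As a quick (illustrative) scripted check, the following command should print no output when every node is Ready:

    oc get nodes --no-headers | grep -i notready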
  2. Install docker on jump box and configure access to OCP registry

    You need a version of Docker that is supported by OpenShift installed on your jump box / boot node. All versions of Docker that are supported by OpenShift are supported for the boot node. Only Docker is currently supported.

    Run the below commands to install and start Docker:

    yum install docker -y
    systemctl start docker
    systemctl enable docker

    Check the docker status

    systemctl status docker

    If your OCP registry is using self-signed certificates, you will not be able to do a ‘docker login’ unless you add the certificate. Note that these steps are not required for installing CP4I; however, if you are planning to pull/push images to/from an OCP registry that uses a self-signed certificate from outside the OCP cluster, follow the steps below to configure the certificate on the client machine.

    Navigate to /etc/docker/certs.d and create a folder whose name is the external URL of the registry. If the ‘certs.d’ folder doesn’t exist, create it. The external URL of the registry can be found using the command below:

    oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}'

    Create the directory inside /etc/docker/certs.d

    mkdir default-route-openshift-image-registry.apps.prod3.os.fyre.ibm.com

    Navigate inside this directory and run the below command (on a single line) to pull the certificate:

    ex +'/BEGIN CERTIFICATE/,/END CERTIFICATE/p' <(echo | openssl s_client -showcerts 
    -connect <external url for OCP registry>) -scq > ca.crt

    For example:

    ex +'/BEGIN CERTIFICATE/,/END CERTIFICATE/p' <(echo | openssl s_client -showcerts 
    -connect default-route-openshift-image-registry.apps.prod3.os.fyre.ibm.com:443) -scq > ca.crt

    Restart the docker service.
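
    If you prefer, the same steps can be combined into a small script. This is only a sketch of the commands above; it assumes the default registry route name and that you are already logged in with oc:

    # Look up the external registry URL, trust its certificate for Docker, then restart Docker
    REGISTRY=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
    mkdir -p /etc/docker/certs.d/${REGISTRY}
    ex +'/BEGIN CERTIFICATE/,/END CERTIFICATE/p' \
      <(echo | openssl s_client -showcerts -connect ${REGISTRY}:443) -scq \
      > /etc/docker/certs.d/${REGISTRY}/ca.crt
    systemctl restart docker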

    Now validate that you are able to log in to the OCP registry using the below command:

    docker login <OCP registry url> -u $(oc whoami) -p $(oc whoami -t)

    For example:

    docker login default-route-openshift-image-registry.apps.prod3.os.fyre.ibm.com -u $(oc whoami) -p $(oc whoami -t)

  3. Download the CP4I installation files

    The base product installation creates an instance of the Platform Navigator, along with the common services. All of the other components are optional and immediately available to install through the Platform Navigator. The entire product and all components run within a required Red Hat OpenShift Container Platform environment.

    You have the following choices for installing IBM Cloud Pak for Integration. All downloads are available from IBM Passport Advantage.

    • Download the base product and all component packages. This method can be used in air-gapped environments.
    • Download the base product only. All other component packages reside in the online IBM Entitled Registry. Execute the installation procedures to install on a Red Hat OpenShift Container Platform. This method requires internet access but saves download time.
  4. Configure the cluster configuration file

    Change to the installer_files/cluster/ directory. Place the cluster configuration file (admin.kubeconfig) in the installer_files/cluster/ directory and rename it to kubeconfig. This file may reside in the setup directory used to create the cluster. If it is not available, you can log in to the cluster as admin using oc login and then issue the following command:

    oc config view --minify=true --flatten=true > kubeconfig

    View the kubeconfig file. If your cluster is using self-signed certificates, TLS verification may fail and cause the installation to fail. You can update the file as in the example below to skip TLS verification.

    apiVersion: v1
    clusters:
    - cluster:
        insecure-skip-tls-verify: true
        server: https://api.prod3.os.fyre.ibm.com:6443
      name: api-prod3-os-fyre-ibm-com:6443
    contexts:
    - context:
        cluster: api-prod3-os-fyre-ibm-com:6443
        namespace: default
        user: admin/api-prod3-os-fyre-ibm-com:6443
      name: default/api-prod3-os-fyre-ibm-com:6443/admin
    current-context: default/api-prod3-os-fyre-ibm-com:6443/admin
    kind: Config
    preferences: {}
    users:
    - name: admin/api-prod3-os-fyre-ibm-com:6443
      user:
        token: klI928FXCt-0Va8lI2h7VFLN_mwCbyIuaQa_lJ_mM8M
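
    As an optional sanity check (assuming the oc client is installed on the boot node), you can confirm that the kubeconfig file is usable before running the installer. From the installer_files/cluster/ directory:

    KUBECONFIG=./kubeconfig oc get nodes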
  5. Configure installation environment

    Extract the contents of the archive with a command similar to the following.

    tar xzvf ibm-cp-int-2019.4.x-offline.tar.gz

    Load the images into Docker. Extracting the images might take a few minutes.

    tar xvf installer_files/cluster/images/common-services-armonk-x86-64.tar.gz -O|docker load
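
    When the load finishes, you can verify that the installer (inception) image is available locally; its tag is the value used in the installation command in step 7:

    docker images | grep inception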
  6. Configure your cluster

    You need to configure your cluster by modifying the installer_files/cluster/config.yaml file. You can use your OpenShift master and infrastructure nodes here, or install these components to dedicated OpenShift compute nodes. You can specify more than one node for each type to build a high availability cluster. After using oc login, use the command oc get nodes to obtain these values. Note that you would likely want to use a worker node.

    Open the config.yaml in an editor.

    vi config.yaml

    Update the below sections in config.yaml. Below is an example:

    cluster_nodes:
      master:
        - worker3.prod3.os.fyre.ibm.com
      proxy:
        - worker4.prod3.os.fyre.ibm.com
      management:
        - worker4.prod3.os.fyre.ibm.com

    Specify the storage class. You can specify a separate storage class for storing log data. Below is an example:

    # This storage class is used to store persistent data for the common services
    # components
    storage_class: rook-ceph-cephfs-internal

    ## You can set a different storage class for storing log data.
    ## By default it will use the value of storage_class.
    # elasticsearch_storage_class:

    Specify a password for the admin user and also specify the password rules, e.g.:

    default_admin_password: admin
    password_rules:
    # - '^([a-zA-Z0-9\-]{32,})$'
    - '(.*)'

    Leave the rest of the file unchanged unless you want to change the namespaces for the respective integration capabilities. Save the file.

    The value of the master, proxy, and management parameters is an array and can have multiple nodes. Due to a limitation from OpenShift, if you want to deploy on any master or infrastructure node, you must label the node as an OpenShift compute node with the following command:

    oc label node <master node host name/infrastructure node host name> node-role.kubernetes.io/compute=true

    This only needs to be done if you want the OpenShift master node and Kubernetes master node to be the same.

  7. Install CP4I

    Once preparation is complete, run the installation command from the same directory that contains the config.yaml file. You can use the command docker images | grep inception to see the installer image tag to use in the command below.

    Run the below command on a single line:

    sudo docker run -t --net=host -e LICENSE=accept -v $(pwd):/installer/cluster:z -v 
    /var/run:/var/run:z -v /etc/docker:/etc/docker:z --security-opt label:disable
    ibmcom/icp-inception-amd64:3.2.2 addon

    This process transfers the product packages from the boot node to the cluster registry. This can take several hours to complete.

    Once the installation is complete, the Platform Navigator will be available at the below endpoint:

    https://ibm-icp4i-prod-integration.<openshift apps domain>/

    You can use the below command to get the OCP apps domain:

    oc -n openshift-console get route console -o jsonpath='{.spec.host}'| cut -f 2- -d "."
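
    For example, you can combine the two to print the full Platform Navigator URL (a small illustrative snippet using the same commands):

    APPS_DOMAIN=$(oc -n openshift-console get route console -o jsonpath='{.spec.host}' | cut -f 2- -d ".")
    echo "https://ibm-icp4i-prod-integration.${APPS_DOMAIN}/"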

    You can navigate to the OpenShift console and the Cloud Pak foundation by clicking on the hamburger menu.

    Note that if you want to use the ‘Operations Dashboard’ for your integration components, you should first provision an ‘Operations Dashboard’ instance so that you can reference it while creating an instance of an integration capability.

  8. Conclusion

    In this recipe we have learned the installation steps for IBM Cloud Pak for Integration 2019.4 on OCP 4.2.

10 comments on "Deploying IBM Cloud Pak for Integration 2019.4 on OCP 4.2"

    Hi, I installed Cloud Pak but it came with errors. Not sure how to fix the issue.

    fatal: [localhost]: FAILED! => changed=true
    attempts: 5
    cmd: bash /tmp/config-cloudctl-script
    delta: '0:00:00.771608'
    end: '2020-05-21 08:50:52.903297'
    invocation:
    module_args:
    _raw_params: bash /tmp/config-cloudctl-script
    _uses_shell: true
    argv: null
    chdir: null
    creates: null
    executable: /bin/bash
    removes: null
    stdin: null
    warn: false
    msg: non-zero return code
    rc: 1
    start: '2020-05-21 08:50:52.131689'
    stderr: ''
    stderr_lines:
    stdout: |-
    Authenticating…
    Error response from server. Status code: 403; message: Error 403 : Access Forbidden

    FAILED
    Set 'CLOUDCTL_TRACE=true' for details
    stdout_lines:
    regards
    Jacques

    • Anand.Awasthi May 21, 2020

      Hi Jacques,
      Can you confirm below?
      1) CP4I version
      2) OCP version
      3) Is it online install or offline install?
      4) What steps have you followed and at which step did you get the error?

  2. hi
    1. 2020.1.1
    2. 4.3.5
    3. offline install
    4. I followed the official IBM website and your steps above.
    It runs the whole install and at the end it fails with the above error.

    also pods are not starting
    metering-dm-77688ddfbb-2t7cq 0/1 Init:0/2 0 33m
    metering-mcmui-8b9dccd47-blnqk 0/1 Init:0/2 0 33m
    metering-reader-628s7 0/1 Init:0/2 0 33m
    metering-reader-b8892 0/1 Init:0/2 0 33m
    metering-reader-srrc8 0/1 Init:0/2 0 33m
    metering-reader-xlpxt 0/1 Init:0/2 0 33m
    metering-reader-zt444 0/1 Init:0/2 0 33m
    metering-ui-7768bb5754-tqb7r 0/1 Init:0/2 0 33m
    secret-watcher-696c55ff8-hl955 0/1 CreateContainerConfigError 0 44m
    security-onboarding-2wknl 0/1 CreateContainerConfigError 0 44m
    logging-elk-filebeat-ds-5t2x9 0/1 CrashLoopBackOff 11 33m
    logging-elk-filebeat-ds-69lkl 0/1 CrashLoopBackOff 11 33m
    logging-elk-filebeat-ds-8qfpj 0/1 CrashLoopBackOff 11 33m
    logging-elk-filebeat-ds-c6ktw 0/1 CrashLoopBackOff 11 33m
    logging-elk-filebeat-ds-dqdsr 0/1 CrashLoopBackOff 11 33m
    logging-elk-filebeat-ds-jfbwp 0/1 CrashLoopBackOff 11 33m
    logging-elk-filebeat-ds-kcx75 0/1 CrashLoopBackOff 11 33m
    logging-elk-filebeat-ds-kpgwt 0/1 CrashLoopBackOff 11 33m
    logging-elk-kibana-5fc99ff86c-qrv6l 1/2 CrashLoopBackOff 13 33m
    logging-elk-kibana-init-ld7bh 0/1 CreateContainerConfigError 0 33m

  3. Containers:
    filebeat:
    Container ID: cri-o://48d649400678fc339a510b9135dcd1ae62c700bcd9e28e276d5af504043bb7cf
    Image: image-registry.openshift-image-registry.svc:5000/ibmcom/icp-filebeat-oss:6.6.1
    Image ID: image-registry.openshift-image-registry.svc:5000/ibmcom/icp-filebeat-oss@sha256:47990f7f16a6209b615b92da4975b1fd9a3532710157cdd932b3c6207f3a2900
    Port:
    Host Port:
    State: Waiting
    Reason: CrashLoopBackOff
    Last State: Terminated
    Reason: Error
    Exit Code: 1
    Started: Thu, 21 May 2020 13:15:57 +0000
    Finished: Thu, 21 May 2020 13:15:58 +0000
    Ready: False
    Restart Count: 7
    Limits:
    memory: 256Mi
    Requests:
    memory: 64Mi
    Liveness: exec [sh -c ps aux | grep '[f]ilebeat' || exit 1] delay=0s timeout=1s period=30s #success=1 #failure=3
    Readiness: exec [sh -c ps aux | grep '[f]ilebeat' || exit 1] delay=10s timeout=1s period=10s #success=1 #failure=3

    Events:
    Type Reason Age From Message
    ---- ------ ---- ---- -------
    Normal Scheduled default-scheduler Successfully assigned kube-system/logging-elk-filebeat-ds-gwh7l to ip-10-106-126-24.eu-west-1.compute.internal
    Normal Pulled 13m (x5 over 14m) kubelet, ip-10-106-126-24.eu-west-1.compute.internal Container image "image-registry.openshift-image-registry.svc:5000/ibmcom/icp-filebeat-oss:6.6.1" already present on machine
    Normal Created 13m (x5 over 14m) kubelet, ip-10-106-126-24.eu-west-1.compute.internal Created container filebeat
    Normal Started 13m (x5 over 14m) kubelet, ip-10-106-126-24.eu-west-1.compute.internal Started container filebeat
    Warning BackOff 4m45s (x53 over 14m) kubelet, ip-10-106-126-24.eu-west-1.compute.internal Back-off restarting failed container

  4. Name: logging-elk-kibana-init-vlxh5
    Namespace: kube-system
    Priority: 0
    Node: ip-10-106-126-41.eu-west-1.compute.internal/10.106.126.41
    Start Time: Thu, 21 May 2020 13:04:58 +0000
    Labels: app=logging-elk-elasticsearch
    chart=ibm-icplogging
    component=kibana
    controller-uid=71bacbea-ceda-41c1-924f-51a949b98a79
    heritage=Tiller
    job-name=logging-elk-kibana-init
    release=logging
    role=kibana-init
    Annotations:
    Status: Pending
    IP: 10.129.2.56
    IPs:
    IP: 10.129.2.56
    Controlled By: Job/logging-elk-kibana-init
    Containers:
    init:
    Container ID:
    Image: image-registry.openshift-image-registry.svc:5000/ibmcom/curl:4.2.0-build.2.1
    Image ID:
    Port:
    Host Port:
    Command:
    /opt/entry/entrypoint.sh
    State: Waiting
    Reason: CreateContainerConfigError
    Ready: False

  5. any update?

  6. So I managed to resolve this issue myself.

  7. Rajesh@455 June 30, 2020

    Hi Team,

    I am getting below error while installing CP4I 2019.4.1.1 on OCP 4.2

    TASK [addon : Installing cert-manager chart] ***********************************
    task path: /installer/playbook/roles/addon/tasks/install.yaml:27
    ESTABLISH LOCAL CONNECTION FOR USER: root
    EXEC /bin/bash -c '( umask 77 && mkdir -p "` echo /tmp/ansible-tmp-1593526106.28-86945377657884 `" && echo ansible-tmp-1593526106.28-86945377657884="` echo /tmp/ansible-tmp-1593526106.28-86945377657884 `" ) && sleep 0'
    Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
    PUT /root/.ansible/tmp/ansible-local-18a3whZC/tmpy6463L TO /tmp/ansible-tmp-1593526106.28-86945377657884/command.py
    EXEC /bin/bash -c 'chmod u+x /tmp/ansible-tmp-1593526106.28-86945377657884/ /tmp/ansible-tmp-1593526106.28-86945377657884/command.py && sleep 0'
    EXEC /bin/bash -c '/usr/bin/python2 /tmp/ansible-tmp-1593526106.28-86945377657884/command.py && sleep 0'
    EXEC /bin/bash -c 'rm -f -r /tmp/ansible-tmp-1593526106.28-86945377657884/ > /dev/null 2>&1 && sleep 0'
    fatal: [localhost]: FAILED! => changed=true
    cmd: |-
    filename="/addon/ibm-cert-manager-3.4.0.tgz"
    if ! helm status --tls cert-manager &>/dev/null; then
    helm install --tls --timeout=600 --name=cert-manager --namespace=cert-manager -f .addon/cert-manager/values-install.yaml $filename
    ret=$?
    else
    echo "This chart has been installed, skip this version" && exit 0
    fi

    if [[ $ret -ne 0 ]]; then
    tiller_pod=$(kubectl -n kube-system get pods -l app=helm,name=tiller -o jsonpath="{.items[0].metadata.name}")
    sleep 5
    kubectl -n kube-system logs $tiller_pod |& tail -n 100
    else
    kubectl -n kube-system patch configmap addon-deploy-status --patch '{"data": {"cert-manager.revision": "none"}}'
    fi
    exit $ret
    delta: '0:00:06.479249'
    end: '2020-06-30 14:08:32.913863'
    invocation:
    module_args:
    _raw_params: |-
    filename="/addon/ibm-cert-manager-3.4.0.tgz"
    if ! helm status --tls cert-manager &>/dev/null; then
    helm install --tls --timeout=600 --name=cert-manager --namespace=cert-manager -f .addon/cert-manager/values-install.yaml $filename
    ret=$?
    else
    echo "This chart has been installed, skip this version" && exit 0
    fi

    if [[ $ret -ne 0 ]]; then
    tiller_pod=$(kubectl -n kube-system get pods -l app=helm,name=tiller -o jsonpath="{.items[0].metadata.name}")
    sleep 5
    kubectl -n kube-system logs $tiller_pod |& tail -n 100
    else
    kubectl -n kube-system patch configmap addon-deploy-status --patch '{"data": {"cert-manager.revision": "none"}}'
    fi
    exit $ret
    _uses_shell: true
    argv: null
    chdir: /installer/cluster
    creates: null
    executable: /bin/bash
    removes: null
    stdin: null
    warn: false
    msg: non-zero return code
    rc: 1
    start: '2020-06-30 14:08:26.434614'
    stderr: 'Error: remote error: tls: bad certificate'
    stderr_lines:
    stdout: |-
    2020/06/30 11:44:08 using configured ciphersuites [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384]
    [main] 2020/06/30 11:44:08 Starting Tiller v2.12.3+icp (tls=true)
    [main] 2020/06/30 11:44:08 GRPC listening on :44134
    [main] 2020/06/30 11:44:08 Probes listening on :44135
    [main] 2020/06/30 11:44:08 Storage driver is ConfigMap
    [main] 2020/06/30 11:44:08 Max history per release is 5
    stdout_lines:

    PLAY RECAP *********************************************************************
    localhost : ok=86 changed=37 unreachable=0 failed=1

    Please help us on this
