
Deploy Red Hat OpenShift 4 on IBM Cloud classic infrastructure or IBM Cloud for Government


This tutorial explains how to install Red Hat OpenShift 4.X on IBM Cloud classic infrastructure or on IBM Cloud for Government (IC4G) virtual server instances (VSIs).

By following this tutorial, you gain two advantages: your deployment inherits infrastructure as a service (IaaS) Federal Risk and Authorization Management Program (FedRAMP) High controls from the IC4G infrastructure layer, and you can use OpenShift to deploy container workloads on an enterprise-grade cloud. Your enterprise retains complete control of the OpenShift management and data planes, so you can manage the environment exactly as you have done on premises. If your organization has invested in IBM Cloud classic infrastructure, you can reuse existing techniques and knowledge to deploy OpenShift 4.X. OpenShift can be stood up with either hourly or monthly billing, and your organization can quickly scale it to meet development, test, and production needs. You also have the option to update OpenShift on an independent schedule that fits your project timeline.

Note: This tutorial uses IBM Cloud Virtual Servers and you may incur costs while using a VSI for this tutorial. The IBM Cloud Cost Estimator can generate a cost estimate for you based on your projected usage.

Architecture diagram

The following architecture diagram depicts the deployment of three master nodes and three worker nodes of OpenShift using Ansible automation scripts. The OpenShift infrastructure is created behind an IBM Cloud private virtual local area network (VLAN) and protected by the Vyatta firewall.

architecture diagram

Tasks performed by the Ansible playbook

An Ansible playbook deploys OpenShift through an organized set of tasks that work against the VSI configuration; the entire process is managed by the Ansible automation tool. The playbook:

  1. Uses the SoftLayer API Python client (slcli) to order the CentOS VSI.
  2. Forces the VSI to boot using the compiled Linux kernel.
  3. Assigns an IP address and reboots the VSI to:

    • Pull a CoreOS image.
    • Create an Ignition file.
    • Install OpenShift nodes.
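The ordering step above can be sketched as follows. This is a minimal illustration of the kind of order template the playbook's ordering task would assemble for the SoftLayer API; the hostname, sizing values, and OS code below are hypothetical and not taken from the playbook itself:

```python
# Sketch of an order template for one CentOS VSI on the private VLAN.
# In practice, a dictionary like this would feed the SoftLayer API client;
# all concrete values here are illustrative placeholders.
def build_vsi_order(hostname, domain, datacenter, private_vlan_id):
    """Assemble the parameters for ordering a single CentOS VSI."""
    return {
        "hostname": hostname,
        "domain": domain,
        "datacenter": datacenter,
        "os_code": "CENTOS_7_64",      # CentOS image, as used by the tutorial
        "cpus": 4,                     # hypothetical sizing
        "memory": 8192,                # MB; hypothetical sizing
        "private": True,               # private-VLAN-only networking
        "private_vlan": private_vlan_id,
        "hourly": True,                # hourly billing, per the cost note above
    }

order = build_vsi_order("ocp-helper", "example.com", "dal10", 1234567)
print(order["datacenter"], order["os_code"])
```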

Before you begin

The automation scripts use underlying Ansible technologies to deploy OpenShift. Before you begin, know that the Ansible scripts use the slcli command line to deploy the VSI. You need the git command line to download the repository, plus the IBM Cloud classic infrastructure access information listed below.

Note: Contact the primary user of your infrastructure or the account owner to get the following permissions:


This tutorial requires:

  • Apple MacBook (macOS) or Linux desktop (CentOS or Red Hat Enterprise Linux (RHEL)) to kick off the Ansible playbook. Ensure Ansible is installed by executing the following command:

    $ ansible --version
    • If you are using CentOS or RHEL, make sure the following packages are installed:

      $ yum install git gcc gcc-c++ python-devel cairo-devel gobject-introspection-devel cairo-gobject-devel ansible
    • If you are using macOS, make sure the following packages are installed:

      $ brew install pygobject3 gtk+3 libffi python cairo pkg-config gobject-introspection atk glib
  • pip, the package installer for Python. Ensure pip is installed by executing the following command:

    $ pip --version
  • Motion Pro virtual private network to access IBM Cloud or IC4G.

  • Git to clone the source code.
  • Private VLAN routed through Vyatta using NAT masquerade to the internet, so the servers can pull down OpenShift binaries and updates.
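A quick way to confirm the command-line prerequisites are on your PATH before starting is a small check like the following. This is a sketch using only the Python standard library; the tool list mirrors the requirements above:

```python
import shutil

# Command-line tools the tutorial expects on the workstation
# that runs the Ansible playbook.
required_tools = ["ansible", "git", "pip"]

# shutil.which returns None when a tool is not on the PATH.
missing = [tool for tool in required_tools if shutil.which(tool) is None]

if missing:
    print("Missing prerequisites:", ", ".join(missing))
else:
    print("All prerequisites found.")
```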

Estimated time

The tasks in this tutorial should take you about one hour to complete. The VSI provisioning and OpenShift installation can take up to 45 minutes of that time.


  1. Clone the Ansible playbook repo:

    $ git clone
  2. Run the following commands to validate all prerequisites are installed:

    $ cd OCP_4.X_VSI
    For macOS, run the following command first:
    $ export PKG_CONFIG_PATH="/usr/local/opt/libffi/lib/pkgconfig"
    $ while read p; do pip install --ignore-installed ${p}; done <artifacts/pip-req.txt
  3. Copy the variable file:

    $ cp vars.yaml.template vars.yaml
  4. Update the variables in the vars.yaml file:

    $ vi vars.yaml

    IBM Cloud classic infrastructure or IC4G information:

    • sl_username: <infrastructure_username>
    • sl_api_key: <infrastructure_apikey>
    • sl_endpoint_url: <infrastructure_services_url>

    IBM Cloud classic infrastructure resource or IC4G information:

    • sl_datacenter: <infrastructure_datacenter_code>
    • sl_private_vlan: <infrastructure_private_vlan_number>
    • sl_vlan_info:

      • vlan_first_ip: <vlan_first_ip_address>
      • vlan_last_ip: <vlan_last_ip_address>
      • netmask_cidr: /<vlan_netmask>

    OpenShift information:

    • pullsecret: <openshift_install_key>

    OpenShift host information:

    • base_domain: <your_base_domain>
    • base_domain_prefix: <your_base_domain_geolocation>
    • sl_ocp_host_prefix: <your_environment_name_identifier>

    Hint: Make sure the status of your <infrastructure_private_vlan_number> is Route Through, as illustrated in the following screen captures:

    screen capture 1

    screen capture 2

    screen capture 3
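The VLAN values in vars.yaml must describe a consistent range: both vlan_first_ip and vlan_last_ip have to fall inside the subnet given by netmask_cidr. A small sanity check using Python's standard ipaddress module; the sample addresses below are hypothetical placeholders, not values from the tutorial:

```python
import ipaddress

# Hypothetical stand-ins for vlan_first_ip, vlan_last_ip, and netmask_cidr.
first_ip = ipaddress.ip_address("10.50.90.10")
last_ip = ipaddress.ip_address("10.50.90.60")
subnet = ipaddress.ip_network("10.50.90.0/26", strict=False)

# Both ends of the range must sit inside the VLAN's subnet,
# and the first address must not come after the last.
assert first_ip in subnet and last_ip in subnet
assert first_ip <= last_ip

print(f"{subnet.num_addresses} addresses in {subnet}")
```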

  5. Execute the following bash scripts to run the Ansible playbook:

    $ chmod +x
    $ ./

Verify OpenShift installation

In this section, you verify that the OpenShift installation completed successfully.

Verify virtual machine

  1. Log in to IBM Cloud or IC4G.
  2. On the menu, click Infrastructure to view the list of virtual server devices.
  3. Click Devices -> Device List to find the server that was created. You should see your server device listed.

Verify OpenShift

  1. The last task of the Ansible playbook prints a message with the user ID and password, such as the following:

    > TASK [validate_ic4g_ocp_servers : debug] **********************************************************************************************************************************************************
    ok: [10.5X.9X.4X] => {
       "openshiftcomplete.stderr_lines": [
           "level=info msg=\"Waiting up to 30m0s for the cluster at to initialize...\"",
           "level=info msg=\"Waiting up to 10m0s for the openshift-console route to be created...\"",
           "level=info msg=\"Install complete!\"",
           "level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/opt/ocp4/auth/kubeconfig'\"",
           "level=info msg=\"Access the OpenShift web-console here:\"",
           "level=info msg=\"Login to the console with user: kubeadmin, password: eXh7T-YkjJ6-VCXji-DfZXV\""
  2. Add the helper node IP address to your DNS or /etc/hosts file. For example:

    • <helpernode_ip_address> console-openshift-console.apps.<base_domain_prefix>.<base_domain>
    • <helpernode_ip_address> oauth-openshift.apps.<base_domain_prefix>.<base_domain>
  3. Open a web browser and type in the following URL: https://console-openshift-console.apps.<base_domain_prefix>.<base_domain>

    Log in with the OpenShift user ID (such as kubeadmin, which is used in this example).
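The host entries from step 2 can be generated from the same variables you set in vars.yaml. A minimal sketch; the sample IP address and domain values are hypothetical:

```python
# Build /etc/hosts lines for the OpenShift routes served by the helper node.
def hosts_entries(helper_ip, base_domain_prefix, base_domain):
    cluster = f"{base_domain_prefix}.{base_domain}"
    routes = [
        f"console-openshift-console.apps.{cluster}",
        f"oauth-openshift.apps.{cluster}",
    ]
    return [f"{helper_ip} {name}" for name in routes]

# Hypothetical values for illustration.
for line in hosts_entries("10.50.90.20", "us-south", "example.com"):
    print(line)
```

Append the printed lines to /etc/hosts on the workstation you use to reach the console, or add equivalent records to your DNS.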


Hopefully, you found this tutorial helpful for deploying OpenShift 4.X on the classic virtual infrastructure layer. Once deployed, the OpenShift Container Platform 4.X solution inherits the advantages of IBM Cloud, such as elasticity of CPU, memory, and disk. The Ansible playbook supports any number of worker nodes in the cluster from the day-one installation, and the deployment inherits the security features provided by CoreOS.