Installing Red Hat OpenShift Container Platform 4.x on IBM Power Systems Virtual Servers on IBM Cloud

This tutorial is part of the Learning path: Deploying Red Hat OpenShift Container Platform 4.x on IBM Power Systems Virtual Servers on IBM Cloud.

Introduction

IBM® Power Systems™ clients who have typically relied on on-premises-only infrastructure can now quickly and economically extend their IBM Power® IT resource off-premises using IBM Power Systems Virtual Server. It is now possible to take this a step further and have off-premises Red Hat® OpenShift® on the IBM Power Systems Virtual Server environment as well.

This tutorial shows you how to deploy Red Hat OpenShift on IBM Power Systems Virtual Servers using Terraform code to build the required infrastructure and then use that infrastructure to build the Red Hat OpenShift environment using Ansible® playbooks.

For general information about using IBM Power Systems Virtual Servers on IBM Cloud®, refer to the following resources:

Deployment topology

The basic deployment of Red Hat OpenShift Container Platform consists of a minimum of seven Power Systems Virtual Server instances:

  • One bastion (helper)
  • One bootstrap
  • Three controllers (masters)
  • Two workers

You can delete the bootstrap instance after OpenShift Container Platform has been successfully deployed.

The minimum configuration for bastion, bootstrap, and controller instances is as follows:

  • One vCPU
  • 16 GB RAM
  • 120 GB (tier 3) storage

The minimum configuration for worker instances is as follows:

  • One vCPU
  • 32 GB RAM
  • 120 GB (tier 3) storage

Additionally, the bastion instance has an extra 300 GB of tier 3 storage for Network File System (NFS).

Bastion (helper)
The bastion instance hosts the following required services for OpenShift Container Platform:

  • Dynamic Host Configuration Protocol (DHCP) service for OpenShift Container Platform nodes
  • Domain Name System (DNS) service for the OpenShift Container Platform domain
  • HTTP file server to host ignition config files
  • HAProxy to load-balance traffic to the OpenShift Container Platform controllers and ingress router (an illustrative configuration fragment appears below)
  • Squid proxy for OpenShift Container Platform nodes to access the internet
  • NFS for persistent storage to containers

Note that the bastion instance is not highly available (HA).
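
For illustration, a trimmed haproxy.cfg fragment of the kind configured on the bastion is shown below. This is a sketch only, not the file the Ansible playbooks actually generate; the generated configuration also balances the machine config server (port 22623) and ingress HTTP (port 80), and the IP addresses here are taken from the sample Terraform output later in this tutorial.

# Illustrative fragment only; the playbooks generate the real configuration
frontend ocp4-api
    bind *:6443
    mode tcp
    default_backend ocp4-api
backend ocp4-api
    mode tcp
    balance roundrobin
    server master-0 192.168.25.147:6443 check
    server master-1 192.168.25.176:6443 check
frontend ocp4-ingress-https
    bind *:443
    mode tcp
    default_backend ocp4-ingress-https
backend ocp4-ingress-https
    mode tcp
    balance roundrobin
    server worker-0 192.168.25.220:443 check
    server worker-1 192.168.25.134:443 check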

Figure 1 shows a logical view of the OpenShift topology.

Figure 1. OpenShift deployment topology on Power Systems Virtual Servers


Automation host prerequisites

This is the system from which the deployment automation will be triggered.

The automation needs to run from a system with internet access. This could be your laptop or a virtual machine (VM) with public internet connectivity. The automation code has been tested on the following operating systems:

  • Mac OS X (Darwin)
  • Linux® (x86_64)
  • Microsoft® Windows® 10

Install the following packages on the automation host (example installation commands follow this list).

  • Terraform 0.13.4:

    1. Download the Terraform 0.13.4 release binary for your operating system from the following link.
    2. Extract the package and move it to a directory included in your system’s PATH.
    3. Run the terraform version command after installation to validate that you are using version 0.13.4.
  • Power Systems Virtual Server CLI: Download and install the CLI by referring to the instructions. Alternatively, you can use IBM Cloud Shell directly from the browser.

  • Git (optional): Refer to these instructions for installing Git.
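
For example, on a Linux (x86_64) automation host the tools can be installed roughly as follows. The commands are a sketch for a typical setup (standard HashiCorp and IBM Cloud CLI download locations, /usr/local/bin on the PATH); adjust them for your operating system.

# Terraform 0.13.4
curl -LO https://releases.hashicorp.com/terraform/0.13.4/terraform_0.13.4_linux_amd64.zip
unzip terraform_0.13.4_linux_amd64.zip
sudo mv terraform /usr/local/bin/
terraform version                    # should report Terraform v0.13.4

# IBM Cloud CLI plus the Power Systems Virtual Server (power-iaas) plug-in
curl -fsSL https://clis.cloud.ibm.com/install/linux | sh
ibmcloud plugin install power-iaas

# Git (optional)
sudo yum install -y git              # or the equivalent for your distribution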

Power Systems Virtual Server prerequisites

Make sure you’ve completed the following prerequisites for accessing Power Systems Virtual Servers on IBM Cloud and getting it ready for installing OpenShift Container Platform:

  • Create an IBM Cloud account.

    If you don’t already have one, you’ll need a paid IBM Cloud account before you can create your Power Systems Virtual Server instance. To create an account, go to: cloud.ibm.com

  • Create a Power Systems Virtual Server service.

    After you have an active IBM Cloud account, you can create a Power Systems Virtual Server service. To do so, perform the following steps (a CLI alternative is sketched after these steps):

    1. Log in to the IBM Cloud dashboard and search for Power in the catalog.

      Figure 2

    2. Click Power Systems Virtual Server, and then provide the required details for the service.

      Figure 3

    3. Provide a meaningful name for your instance in the Service name field.

    4. Select an appropriate resource group. You can find more details about resource groups at https://cloud.ibm.com/docs/account?topic=account-rgs

      Figure 4

    5. Click Create to create the service.
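
    If you prefer the command line, you can also create the service with the IBM Cloud CLI. The following is a hedged sketch: the plan name, region, and resource group shown are assumptions, so list the available plans first and substitute your own values.

    # Show the Power Systems Virtual Server offering and its plans
    ibmcloud catalog service power-iaas

    # Create the service instance (name, plan, region, and resource group are examples)
    ibmcloud resource service-instance-create my-powervs-service power-iaas power-virtual-server-group us-south -g Default
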
  • Create a private network.
    A private network is required for your OpenShift Container Platform cluster. Perform the following steps to create a private network.

    1. Select the previously created service and create a private subnet by clicking Subnets and providing the required input.

      Note: If you see a screen displaying CRN and GUID, then click View full details to access the Subnet creation page.

      Figure 5

      Figure 6

      Figure 7

    2. You can create multiple OpenShift Container Platform clusters in the same service using the same private network. If required, you can also create multiple private networks. (A CLI alternative for creating the subnet is sketched below.)
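
    The subnet can also be created from the CLI after targeting the service instance. Treat the following as a sketch; flag names can differ between power-iaas plug-in releases, so confirm them with ibmcloud pi network-create-private --help.

    # Target your Power Systems Virtual Server service instance using its CRN
    # (the CRN is shown on the service details page)
    ibmcloud pi service-target <your-service-crn>

    # Create a private network (the name and CIDR are examples)
    ibmcloud pi network-create-private ocp-net --cidr 192.168.25.0/24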

  • Raise a service request.

    To enable the virtual instances within the service to communicate within the private subnet, you’ll need to create a service request. Perform the following steps to raise the service request.

    1. Click Support at the top of the page, scroll down to the Contact Support section, and then click Create a case.

      Figure 8

    2. Select the Power Systems Virtual Server tile, then complete the details by pasting the following subject and description into the appropriate fields on the Create a case page:

      [Subject:] Enable communication between PowerVS instances on private network

      [Description:]

      
      Please enable IP communication between PowerVS instances for the following private network:
      Name: <your-subnet-name-from-above>
      Type: Private
      CIDR: <your-ip-subnet-from-above>
      VLAN ID: <your-vlan-id> (listed in your subnet details post-creation)
      Location: <your-location> (listed in your subnet details post-creation)
      Service Instance: <your-service-name>
      

      Figure 9

    3. Click Continue to accept the agreements and then click Submit case.

  • Import RHCOS and RHEL 8.2 images.

    The Red Hat Enterprise Linux (RHEL) 8.2 image is used by the bastion and Red Hat Enterprise Linux CoreOS (RHCOS) is used on the OpenShift cluster nodes.

    You need to create OVA formatted images for both RHEL and RHCOS, upload them to IBM Cloud Object Storage, and then import these images as boot images in your Power Systems Virtual Server service. The image disk should be a minimum of 120 GB in size. To do so, perform the following steps (illustrative CLI equivalents appear after the figures below):

    1. Create the OVA images.

    2. Upload the images to IBM Cloud Object Storage.

      1. Create the IBM Cloud Object Storage service and bucket.
        Power Systems Virtual Server currently supports import from only us-east, us-south, and eu-de regions. Therefore, ensure that you create the Cloud Object Storage bucket in one of these regions.
      2. Create secret and access keys with Hash-based Message Authentication Code (HMAC).
      3. Upload the OVA image to the Cloud Object Storage bucket. Note that you can also use this Python script if you prefer.
    3. Import the images to the Power Systems Virtual Server.

      To import the RHEL image for the bastion and the RHCOS image for the OpenShift Container Platform cluster, perform the following steps:

      1. Choose the Power Systems Virtual Server service you previously created.
      2. Click View full details and select Boot images.
      3. Click the Importing image option and fill in the required details such as the image name, storage type, and Cloud Object Storage details.

      Refer to the following example screen capture showing the import of the RHEL image that is used for the bastion.

      Figure 10

      Refer to the following example screen capture showing the import of the RHCOS image used for the OpenShift Container Platform cluster nodes.

      Figure 11
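
      If you prefer to script the upload, the IBM Cloud CLI offers rough equivalents; the following sketch assumes the cloud-object-storage plug-in is installed, and the instance, bucket, and object names are placeholders. The import itself can also be scripted with the power-iaas plug-in's image-import command, whose flags differ between plug-in releases; check ibmcloud pi image-import --help before using it.

      # Create HMAC credentials for the Cloud Object Storage instance
      ibmcloud resource service-key-create ocp-cos-hmac Writer --instance-name <your-cos-instance> --parameters '{"HMAC": true}'

      # Upload the OVA images to the bucket (object names are examples)
      ibmcloud cos upload --bucket <your-bucket> --key rhel-820.ova.gz --file ./rhel-820.ova.gz
      ibmcloud cos upload --bucket <your-bucket> --key rhcos-45.ova.gz --file ./rhcos-45.ova.gz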

      You are now ready to install OpenShift Container Platform on your Power Systems Virtual Server service.

Installing OpenShift Container Platform on Power Systems Virtual Server

Perform the following steps to install OpenShift Container Platform on the Power Systems Virtual Server service you created as part of the prerequisite steps:

  1. Download the automation code.

    Go to the release page and download the latest stable release. Extract the release bundle to your system.

    You can also use curl or wget to download the stable release code as shown below.

    Replace 4.5.3 with the latest available release version.

    
    curl -L https://github.com/ocp-power-automation/ocp4-upi-powervs/archive/v4.5.3.zip -o v4.5.3.zip
    unzip v4.5.3.zip
    cd ocp4-upi-powervs-4.5.3
    

    You can also clone the Git repository on your system. Ensure that you check out the release tag when using Git.

    
    git clone https://github.com/ocp-power-automation/ocp4-upi-powervs.git -b v4.5.3 ocp4-upi-powervs-4.5.3
    cd ocp4-upi-powervs-4.5.3
    

    All further instructions assume that you are in the code directory, ocp4-upi-powervs-4.5.3.

    Note that the directory will be different based on the release that you download.

  2. Set up the Terraform variables.

    Update the var.tfvars file based on your environment. Refer to the description of the variables before updating; an illustrative example follows.
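
    For reference, a trimmed var.tfvars might look like the following. The variable names reflect the 4.5.x releases of the automation and are shown for illustration only; rely on the variable descriptions in the repository for the authoritative list and defaults.

    ibmcloud_api_key    = "<your-ibm-cloud-api-key>"
    ibmcloud_region     = "us-south"
    ibmcloud_zone       = "us-south"
    service_instance_id = "<your-power-service-instance-guid>"

    rhel_image_name  = "rhel-82"
    rhcos_image_name = "rhcos-45"
    network_name     = "ocp-net"

    bastion   = {memory = "16", processors = "1"}
    bootstrap = {memory = "16", processors = "0.5", "count" = 1}
    master    = {memory = "16", processors = "0.5", "count" = 3}
    worker    = {memory = "32", processors = "0.5", "count" = 2}

    rhel_subscription_username = "<red-hat-subscription-user>"
    rhel_subscription_password = "<red-hat-subscription-password>"

    cluster_domain    = "mydomain.com"
    cluster_id_prefix = "test-cluster"

    openshift_install_tarball = "https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/stable-4.5/openshift-install-linux.tar.gz"
    pull_secret_file          = "data/pull-secret.txt"
    public_key_file           = "data/id_rsa.pub"
    private_key_file          = "data/id_rsa"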

  3. Start the installation.

    Run the following commands from within the code directory (for example, ocp4-upi-powervs-4.5.3).

    $ terraform init
    $ terraform apply -var-file var.tfvars -parallelism=3

    Note: We have used parallelism to restrict the number of parallel instance creation requests sent through the Power Systems Virtual Server client. This is due to a known issue where the terraform apply command fails randomly on parallel instance create requests. If you still get an error while creating an instance, delete the failed instance from the Power Systems Virtual Server console and then run the terraform apply command again.

    Now wait for the installation to complete. Provisioning may take around 60 minutes.

    After successful installation, the cluster details will be displayed as shown in the following output.

    
    bastion_private_ip = 192.168.25.171
    bastion_public_ip = 16.20.34.5
    bastion_ssh_command = ssh -i data/id_rsa root@16.20.34.5
    bootstrap_ip = 192.168.25.182
    cluster_authentication_details = Cluster authentication details are available in 16.20.34.5 under ~/openstack-upi/auth
    cluster_id = test-cluster-9a4f
    etc_hosts_entries =
    16.20.34.5 api.test-cluster-9a4f.mydomain.com console-openshift-console.apps.test-cluster-9a4f.mydomain.com integrated-oauth-server-openshift-authentication.apps.test-cluster-9a4f.mydomain.com oauth-openshift.apps.test-cluster-9a4f.mydomain.com prometheus-k8s-openshift-monitoring.apps.test-cluster-9a4f.mydomain.com grafana-openshift-monitoring.apps.test-cluster-9a4f.mydomain.com example.apps.test-cluster-9a4f.mydomain.com
    
    install_status = COMPLETED
    master_ips = [
    "192.168.25.147",
    "192.168.25.176",
    ]
    oc_server_url = https://test-cluster-9a4f.mydomain.com:6443
    storageclass_name = nfs-storage-provisioner
    web_console_url = https://console-openshift-console.apps.test-cluster-9a4f.mydomain.com
    worker_ips = [
    "192.168.25.220",
    "192.168.25.134",
     ]
    

    When using a wildcard domain such as nip.io or xip.io, the etc_hosts_entries output is empty, as shown in the following output.

    
    bastion_private_ip = 192.168.25.171
    bastion_public_ip = 16.20.34.5
    bastion_ssh_command = ssh -i data/id_rsa root@16.20.34.5
    bootstrap_ip = 192.168.25.182
    cluster_authentication_details = Cluster authentication details are available in 16.20.34.5 under ~/openstack-upi/auth
    cluster_id = test-cluster-9a4f
    etc_hosts_entries =
    install_status = COMPLETED
    master_ips = [
    "192.168.25.147",
    "192.168.25.176",
    ]
    oc_server_url = https://test-cluster-9a4f.16.20.34.5.nip.io:6443
    storageclass_name = nfs-storage-provisioner
    web_console_url = https://console-openshift-console.apps.test-cluster-9a4f.16.20.34.5.nip.io
    worker_ips = [
    "192.168.25.220",
    "192.168.25.134",
     ]
    

    These details can be retrieved anytime by running the following command from the root folder of the code:

    $ terraform output

    If any errors occur, you’ll have to run the terraform apply command again. Refer to the known issues for more details on potential problems and workarounds.

Post installation

Complete the following post-installation tasks:

  1. Delete the bootstrap node.

    After the deployment is completed successfully, you can safely delete the bootstrap node. This step is optional but recommended to free up the resources used.

    1. Change the count value to 0 in the bootstrap map variable in var.tfvars. For example:

      bootstrap = {memory = "16", processors = "0.5", "count" = 0}

    2. Run the following command:

      terraform apply -var-file var.tfvars

  2. Create the API and ingress DNS records.

    Skip this section if your cluster_domain is one of the online wildcard DNS domains: nip.io, xip.io, or sslip.io. For all other domains, you can use one of the following options.

    1. Add entries to your DNS server.
      The general format is shown below:

      
      api.<cluster_id>.  IN  A  <bastion_public_ip>
      *.apps.<cluster_id>.  IN  A  <bastion_public_ip>
      

      You’ll need bastion_public_ip and cluster_id. These are printed at the end of a successful installation, or you can retrieve them anytime by running terraform output from the install directory. For example, if bastion_public_ip = 16.20.34.5 and cluster_id = test-cluster-9a4f, then the following DNS records need to be added:

      
      api.test-cluster-9a4f.  IN  A  16.20.34.5
      *.apps.test-cluster-9a4f.  IN  A  16.20.34.5
      
    2. Add entries to your client system hosts file.
      For Linux and Mac hosts, the file is located at /etc/hosts and for Windows it is located at c:\Windows\System32\Drivers\etc\hosts. The general format is shown below:

      
      <bastion_public_ip> api.<cluster_id>
      <bastion_public_ip> console-openshift-console.apps.<cluster_id>
      <bastion_public_ip> integrated-oauth-server-openshift-authentication.apps.<cluster_id>
      <bastion_public_ip> oauth-openshift.apps.<cluster_id>
      <bastion_public_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_id>
      <bastion_public_ip> grafana-openshift-monitoring.apps.<cluster_id>
      <bastion_public_ip> <app name>.apps.<cluster_id>
      

      You’ll need etc_hosts_entries. This is printed at the end of a successful installation. Alternatively, you can retrieve it anytime by running the terraform output command from the install directory. As an example, for the following etc_hosts_entries:

      
      etc_hosts_entries =
      16.20.34.5 api.test-cluster-9a4f.mydomain.com console-openshift-console.apps.test-cluster-9a4f.mydomain.com integrated-oauth-server-openshift-authentication.apps.test-cluster-9a4f.mydomain.com oauth-openshift.apps.test-cluster-9a4f.mydomain.com prometheus-k8s-openshift-monitoring.apps.test-cluster-9a4f.mydomain.com grafana-openshift-monitoring.apps.test-cluster-9a4f.mydomain.com example.apps.test-cluster-9a4f.mydomain.com
      
      Add the following entry to the hosts file:

      
      [existing entries in hosts file]
      
      16.20.34.5 api.test-cluster-9a4f.mydomain.com console-openshift-console.apps.test-cluster-9a4f.mydomain.com integrated-oauth-server-openshift-authentication.apps.test-cluster-9a4f.mydomain.com oauth-openshift.apps.test-cluster-9a4f.mydomain.com prometheus-k8s-openshift-monitoring.apps.test-cluster-9a4f.mydomain.com grafana-openshift-monitoring.apps.test-cluster-9a4f.mydomain.com example.apps.test-cluster-9a4f.mydomain.com
      

Accessing the cluster

The OpenShift login credentials are on the bastion host, and their location is printed at the end of a successful installation. Alternatively, you can retrieve it anytime by running the terraform output command from the code directory (for example, ocp4-upi-powervs-4.5.3).

[...]
bastion_public_ip = 16.20.34.5
bastion_ssh_command = ssh -i data/id_rsa root@16.20.34.5
cluster_authentication_details = Cluster authentication details are available in 16.20.34.5 under ~/openstack-upi/auth
[...]

There are two files under ~/openstack-upi/auth:

  • kubeconfig: This file can be used for CLI access.
  • kubeadmin-password: This file provides the password for the kubeadmin user, which can be used for CLI and UI access.

Note: Ensure you securely store the OpenShift cluster access credentials. If required, delete the access details from the bastion node after securely storing them.

You can copy the access details to your local system:
$ scp -r -i data/id_rsa root@16.20.34.5:~/openstack-upi/auth/\* .

Using the CLI

The OpenShift CLI can be downloaded from the following links. Use the option specific to your client system architecture.

Download the specific file, extract it, and place the binary in a directory that is on your PATH. Refer to the Getting started with the CLI documentation for more details.

The CLI login URL oc_server_url will be printed at the end of a successful installation. Alternatively, you can retrieve it anytime by running the terraform output command from the install directory.

[...]

oc_server_url = https://test-cluster-9a4f.mydomain.com:6443
[...]

To log in to the cluster, use the following command:

oc login <oc_server_url> -u kubeadmin -p <kubeadmin-password>

Example:

$ oc login https://test-cluster-9a4f.mydomain.com:6443 -u kubeadmin -p $(cat kubeadmin-password)

You can also use the kubeconfig file.


$ export KUBECONFIG=$(pwd)/kubeconfig
$ oc cluster-info
Kubernetes master is running at https://test-cluster-9a4f.mydomain.com:6443



$ oc get nodes
NAME       STATUS   ROLES    AGE   VERSION
master-0   Ready    master   13h   v1.18.3+b74c5ed
master-1   Ready    master   13h   v1.18.3+b74c5ed
master-2   Ready    master   13h   v1.18.3+b74c5ed
worker-0   Ready    worker   13h   v1.18.3+b74c5ed
worker-1   Ready    worker   13h   v1.18.3+b74c5ed

Note: The OpenShift command-line client oc is already configured on the bastion node with kubeconfig placed at ~/.kube/config.

Using the web console

The web console URL will be printed at the end of a successful installation. Alternatively, you can retrieve it anytime by running the terraform output command from the install directory.

[...]
web_console_url = https://console-openshift-console.apps.test-cluster-9a4f.mydomain.com
[...]

Open this URL in your browser and log in with the username kubeadmin and the password from the kubeadmin-password file.

Clean up

To destroy the cluster when you no longer need it, run the terraform destroy -var-file var.tfvars -parallelism=3 command to make sure that all the resources are properly cleaned up. Do not manually clean up your environment unless both of the following conditions are true:

  • You know what you are doing.
  • Something went wrong with an automated deletion.

Summary

After you have an OpenShift cluster running, you can start building and deploying your applications. Refer to the other tutorials in this learning path for more details.