Change worker node count on a deployed Red Hat OpenShift Container Platform 4.x cluster on IBM Power Systems Virtual Servers

Introduction

This tutorial describes the steps to increase or decrease cluster capacity by resizing the worker pool of a Red Hat OpenShift cluster deployed on IBM Power Systems Virtual Servers using the user-provisioned infrastructure (UPI) method.

Prerequisites

Before proceeding with resizing the worker pool, make sure that you have a cluster installed using the automation steps as described in Installing Red Hat OpenShift Container Platform 4.x on IBM Power Systems Virtual Servers.

Estimate time

Resizing the worker pool on a deployed Red Hat OpenShift cluster on a Power Virtual Server using the UPI method can take around 20 minutes.

Topology of a deployed test cluster

A test cluster, tstocp, was deployed using the steps described in Installing Red Hat OpenShift Container Platform 4.x on IBM Power Systems Virtual Servers. The deployed cluster consists of a minimum of seven Power Virtual Server instances:

  • One bastion (helper)
  • One bootstrap
  • Three controllers (masters)
  • Two workers

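To cross-check these instances in the Power Virtual Server workspace, you can list them with the IBM Cloud CLI (a sketch; it assumes the power-iaas plugin is installed and the workspace is targeted, and subcommand names can vary across plugin versions):

# Show the available Power Virtual Server workspaces, then target one with `ibmcloud pi service-target <CRN>`
ibmcloud pi service-list
# List the virtual server instances in the targeted workspace (bastion, bootstrap, masters, and workers)
ibmcloud pi instances
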
The cluster was deployed using the openshift-install-powervs automation script with the Terraform variables defined as follows:

openshift-install-power $ cat var.tfvars
ibmcloud_region = "tor"
ibmcloud_zone = "tor01"
service_instance_id = "xxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxxx"
rhel_image_name = "rhel-83-11242020"
rhcos_image_name = "rhcos-46-09182020"
network_name = "ocp-net"
openshift_install_tarball = "https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/stable-4.6/openshift-install-linux.tar.gz"
openshift_client_tarball = "https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/stable-4.6/openshift-client-linux.tar.gz"
cluster_id_prefix = "tstocp"
cluster_domain = "ibm.com"
storage_type = "none"
bastion = {memory = "16", processors = "1", "count" = 1}
bootstrap = {memory = "32", processors = "0.5", "count" = 1}
master = {memory = "32", processors = "0.5", "count" = 3}
worker = {memory = "32", processors = "0.5", "count" = 2}
rhel_subscription_username = "XXXX.XXXX"
pull_secret_file = "$HOME/openshift-install-power/automation/data/pull-secret.txt"
private_key_file = "$HOME/.ssh/id_rsa"
public_key_file = "$HOME/.ssh/id_rsa.pub"

The output of the deployed cluster is as follows:

openshift-install-power $ openshift-install-powervs output
bastion_private_ip = 192.168.25.7
bastion_public_ip = 169.48.23.115
bastion_ssh_command = ssh root@169.48.23.115
bootstrap_ip =
cluster_authentication_details = Cluster authentication details are available in 169.48.23.115 under ~/openstack-upi/auth
cluster_id = tstocp-5368
dns_entries =
api.tstocp-5368.ibm.com.  IN  A  169.48.23.115
*.apps.tstocp-5368.ibm.com.  IN  A  169.48.23.115
etc_hosts_entries =
169.48.23.115 api.tstocp-5368.ibm.com console-openshift-console.apps.tstocp-5368.ibm.com integrated-oauth-server-openshift-authentication.apps.tstocp-5368.ibm.com oauth-openshift.apps.tstocp-5368.ibm.com prometheus-k8s-openshift-monitoring.apps.tstocp-5368.ibm.com grafana-openshift-monitoring.apps.tstocp-5368.ibm.com example.apps.tstocp-5368.ibm.com
install_status = COMPLETED
master_ips = [
  "192.168.25.167",
  "192.168.25.111",
  "192.168.25.87",
]
oc_server_url = https://api.tstocp-5368.ibm.com:6443
storageclass_name = nfs-storage-provisioner
web_console_url = https://console-openshift-console.apps.tstocp-5368.ibm.com
worker_ips = [
  "192.168.25.37",
  "192.168.25.241",
]
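
To verify the topology from the cluster itself, you can log in to the bastion and list the nodes (a sketch; it assumes the kubeconfig that the installer writes under ~/openstack-upi/auth on the bastion):

ssh root@169.48.23.115
export KUBECONFIG=~/openstack-upi/auth/kubeconfig
# The three master and two worker nodes should all report a Ready status
oc get nodes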

Steps

Perform the following steps to resize the number of worker nodes on the deployed cluster:

  1. Make sure that the IBMCLOUD_API_KEY and RHEL_SUBS_PASSWORD environment variables are set and that the OpenShift pull secret file, pull-secret.txt, is in the installation directory.
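
    A quick way to check this is shown below (a sketch; the pull secret path is the pull_secret_file value from var.tfvars and may differ in your setup):

    # Confirm the variables are exported, without printing their values
    [ -n "$IBMCLOUD_API_KEY" ] && echo "IBMCLOUD_API_KEY is set"
    [ -n "$RHEL_SUBS_PASSWORD" ] && echo "RHEL_SUBS_PASSWORD is set"
    # Confirm the pull secret file is in place
    ls -l "$HOME/openshift-install-power/automation/data/pull-secret.txt"
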
  2. Update the number of worker nodes to 4 in the var.tfvars file.

    openshift-install-power $ cat var.tfvars
    ibmcloud_region = "tor"
    ibmcloud_zone = "tor01"
    service_instance_id = "xxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxxxxx"
    rhel_image_name = "rhel-83-11242020"
    rhcos_image_name = "rhcos-46-09182020"
    network_name = "ocp-net"
    openshift_install_tarball = "https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/stable-4.6/openshift-install-linux.tar.gz"
    openshift_client_tarball = "https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/stable-4.6/openshift-client-linux.tar.gz"
    cluster_id_prefix = "tstocp"
    cluster_domain = "ibm.com"
    storage_type = "none"
    bastion = {memory = "16", processors = "1", "count" = 1}
    bootstrap = {memory = "32", processors = "0.5", "count" = 1}
    master = {memory = "32", processors = "0.5", "count" = 3}
    worker = {memory = "32", processors = "0.5", "count" = 4}
    rhel_subscription_username = "XXXX.XXXX"
    pull_secret_file = "$HOME/openshift-install-power/automation/data/pull-secret.txt"
    private_key_file = "$HOME/.ssh/id_rsa"
    public_key_file = "$HOME/.ssh/id_rsa.pub"
    
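    Only the worker stanza changes; before rerunning the automation, you can confirm the edit (a hypothetical check, not part of the original flow):

    grep '^worker' var.tfvars
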
  3. Run the following commands to export the API key and RHEL subscription password as environment variables and begin installation:

    set +o history
    export IBMCLOUD_API_KEY="<YOUR_IBM_CLOUD_API_KEY>" 
    export RHEL_SUBS_PASSWORD="<YOUR_RHEL_SUBSCRIPTION_PASSWORD>" 
    set -o history
    
    openshift-install-powervs create -var-file var.tfvars
    
  4. After successful installation, run the following command to view the cluster output:

    openshift-install-power $ openshift-install-powervs output
    bastion_private_ip = 192.168.25.7
    bastion_public_ip = 169.48.23.115
    bastion_ssh_command = ssh root@169.48.23.115
    bootstrap_ip =
    cluster_authentication_details = Cluster authentication details are available in 169.48.23.115 under ~/openstack-upi/auth
    cluster_id = tstocp-5368
    dns_entries =
    api.tstocp-5368.ibm.com.  IN  A  169.48.23.115
    *.apps.tstocp-5368.ibm.com.  IN  A  169.48.23.115
    
    etc_hosts_entries =
    169.48.23.115 api.tstocp-5368.ibm.com console-openshift-console.apps.tstocp-5368.ibm.com integrated-oauth-server-openshift-authentication.apps.tstocp-5368.ibm.com oauth-openshift.apps.tstocp-5368.ibm.com prometheus-k8s-openshift-monitoring.apps.tstocp-5368.ibm.com grafana-openshift-monitoring.apps.tstocp-5368.ibm.com example.apps.tstocp-5368.ibm.com
    
    install_status = COMPLETED
    master_ips = [
      "192.168.25.167",
      "192.168.25.111",
      "192.168.25.87",
    ]
    oc_server_url = https://api.tstocp-5368.ibm.com:6443
    storageclass_name = nfs-storage-provisioner
    web_console_url = https://console-openshift-console.apps.tstocp-5368.ibm.com
    worker_ips = [
      "192.168.25.37",
      "192.168.25.241",
      "192.168.25.49",
      "192.168.25.245",
    ]
    
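    To confirm that the two new workers joined the cluster, check the node status from the bastion (a sketch; it assumes the kubeconfig under ~/openstack-upi/auth on the bastion, as before):

    ssh root@169.48.23.115
    export KUBECONFIG=~/openstack-upi/auth/kubeconfig
    # All four worker nodes should report a Ready status
    oc get nodes -l node-role.kubernetes.io/worker
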
  5. If a run fails, refer to the known issues for details about potential problems and workarounds. The Terraform console logs for each run are saved in the logs directory.
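
    For example, to inspect the latest log (a sketch; the log file names depend on the run):

    ls -lt logs/
    # Follow the newest log file while a run is in progress
    tail -f "$(ls -t logs/* | head -1)"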

  6. You can increase the worker node count from 2 to as many nodes as you need by following the same steps. Similarly, you can decrease the worker node count. When the worker node count is decreased in the UPI installation, the scripts take care of cordoning (marking the extra nodes as unschedulable) and draining them before deleting them.
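
    For example, to scale back down to two workers, the same flow applies (a sketch reusing the values from this tutorial; run the oc command from the bastion with KUBECONFIG set as shown earlier):

    # Edit var.tfvars so that the worker stanza reads:
    #   worker = {memory = "32", processors = "0.5", "count" = 2}
    openshift-install-powervs create -var-file var.tfvars
    # After the run completes, only two worker nodes should remain
    oc get nodes -l node-role.kubernetes.io/worker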

Summary

This tutorial provided the steps to scale the compute capacity of a UPI-deployed cluster up or down on demand. Refer to the other tutorials in this learning path for other activities.