Introduction
This tutorial explains how to install Red Hat OpenShift 4.X on IBM Cloud classic infrastructure or on IBM Cloud for Government (IC4G) virtual server instances (VSIs).
By following this tutorial, you gain the advantages of inheriting infrastructure as a service (IaaS) Federal Risk and Authorization Management Program (FedRAMP) High controls from the IC4G infrastructure layer and of using OpenShift to deploy container workloads on an enterprise-grade cloud. Your enterprise keeps complete control of the OpenShift management and data planes, so you can manage the environment entirely as you have done on premises. If your organization has invested in IBM Cloud classic infrastructure, you can reuse the same techniques and knowledge to deploy OpenShift 4.X. The cost of standing up OpenShift can be billed hourly or monthly, and your organization can quickly scale OpenShift to meet your development, test, and production environments. You also have the option to update OpenShift on an independent schedule that fits your project timeline.
Note: This tutorial uses IBM Cloud Virtual Servers and you may incur costs while using a VSI for this tutorial. The IBM Cloud Cost Estimator can generate a cost estimate for you based on your projected usage.
Architecture diagram
The following architecture diagram depicts the deployment of three control plane (master) nodes and three worker nodes of OpenShift using Ansible automation scripts. The OpenShift infrastructure is created behind an IBM Cloud private virtual local area network (VLAN) and protected by a Vyatta firewall.
Tasks performed by the Ansible playbook
The Ansible playbook deploys OpenShift through an organized set of tasks that order and configure the VSIs; the entire process is managed by the Ansible automation tool. The playbook performs the following tasks (a sketch of one such task follows this list):
- Uses the SoftLayer API Python Client command line (slcli) to order the CentOS VSI.
- Forces the VSI to boot using a compiled Linux kernel.
- Assigns an IP address and reboots the VSI to:
  - Pull a CoreOS image.
  - Create an Ignition file.
  - Install the OpenShift nodes.
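To make the flow concrete, here is a minimal sketch of what one ordering task in such a playbook might look like. The task name, variables, and sizing are illustrative assumptions, not the playbook's actual code, though the slcli flags shown are standard SoftLayer CLI options:

# Hypothetical Ansible task: order a CentOS VSI with the SoftLayer CLI (slcli).
# Variable names mirror vars.yaml; CPU and memory values are examples only.
- name: Order a CentOS VSI on the private VLAN
  command: >
    slcli -y vs create
    --hostname "{{ sl_ocp_host_prefix }}-helper"
    --domain "{{ base_domain }}"
    --datacenter "{{ sl_datacenter }}"
    --vlan-private "{{ sl_private_vlan }}"
    --os CENTOS_7_64
    --cpu 4 --memory 8192
  register: vsi_order   # later tasks would poll until the VSI is ready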
Before you begin
The automation scripts use underlying Ansible technologies to deploy OpenShift. Before you begin, be aware that the Ansible scripts use the slcli command line to deploy the VSIs. You also need the git command line to download the repository, plus the IBM Cloud classic infrastructure access information mentioned below.
Note: Contact the primary user of your infrastructure or the account owner to get the following permissions:
- VLAN ID for a private VLAN that is NAT (Network Address Translation) masqueraded to a public internet uplink through Vyatta (see the lookup command after this list).
- API key.
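If the SoftLayer CLI is already configured on your workstation, you can also look up candidate VLAN IDs yourself; slcli vlan list is a standard command of that CLI:

$ slcli vlan list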
Prerequisites
This tutorial requires:
- An Apple MacBook (macOS) or a Linux desktop (CentOS or Red Hat Enterprise Linux (RHEL)) to kick off the Ansible playbook. Ensure Ansible is installed by executing the following command:
$ ansible --version
If you are using CentOS or RHEL, make sure the following packages are installed:
$ yum install git gcc gcc-c++ python-devel cairo-devel gobject-introspection-devel cairo-gobject-devel ansible
If you are using macOS, make sure the following packages are installed:
$ brew install pygobject3 gtk+3 libffi python cairo pkg-config gobject-introspection atk glib
- pip, the package installer for Python. Ensure pip is installed by executing the following command:
$ pip --version
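If pip is missing, one common way to bootstrap it with your existing Python installation (your distribution may instead package it separately) is:

$ python -m ensurepip --upgrade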
- Motion Pro virtual private network (VPN) client to access IBM Cloud or IC4G.
- Git to clone the source code.
- Private VLAN routed through Vyatta using NAT masquerade for public internet access, which is needed to pull down OpenShift binaries and updates (a quick connectivity check follows this list).
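To confirm that the NAT masquerade actually provides outbound access, a quick check from a host on the private VLAN might look like the following; mirror.openshift.com is one endpoint the installer typically reaches, but any public HTTPS URL works for this test:

$ curl -sI https://mirror.openshift.com | head -1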
Estimated time
The tasks in this tutorial should take you about one hour to complete. The VSI provisioning and OpenShift installation can take up to 45 minutes of that time.
Steps
Clone the Ansible playbook repo:
$ git clone https://github.com/IBM/OCP_4.X_VSI.git
Run the following commands to install the required Python packages:
$ cd OCP_4.X_VSI
For macOS, run the following command:
$ export PKG_CONFIG_PATH="/usr/local/opt/libffi/lib/pkgconfig"
$ while read p; do pip install --ignore-installed ${p}; done <artifacts/pip-req.txt
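The loop installs each pinned package individually, so a failure points at the specific package. Assuming your pip version supports combining a requirements file with --ignore-installed, a roughly equivalent single command is:

$ pip install --ignore-installed -r artifacts/pip-req.txt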
Copy the variable file:
$ cp vars.yaml.template vars.yaml
Update the variables in the vars.yaml file:
$ vi vars.yaml
IBM Cloud classic infrastructure or IC4G information:
- sl_username: <infrastructure_username>
- sl_api_key: <infrastructure_apikey>
- sl_endpoint_url: <infrastructure_services_url>
IBM Cloud classic infrastructure or IC4G resource information:
- sl_datacenter: <infrastructure_datacenter_code>
- sl_private_vlan: <infrastructure_private_vlan_number>
- sl_vlan_info:
  - vlan_first_ip: <vlan_first_ip_address>
  - vlan_last_ip: <vlan_last_ip_address>
  - netmask_cidr: /<vlan_netmask>
OpenShift information:
- pullsecret: <openshift_install_key>
OpenShift host information:
- base_domain: <your_base_domain>
- base_domain_prefix: <your_base_domain_geolocation>
- sl_ocp_host_prefix: <your_environment_name_identifier>
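For illustration, a completed vars.yaml might look like the following. Every value here is a made-up placeholder, and the endpoint URL shown is an assumption of the classic API endpoint; substitute your own values:

sl_username: myuser@example.com
sl_api_key: 0123456789abcdef            # example only; never commit a real API key
sl_endpoint_url: https://api.softlayer.com/xmlrpc/v3.1/
sl_datacenter: wdc01
sl_private_vlan: 1234567
sl_vlan_info:
  vlan_first_ip: 10.50.90.10
  vlan_last_ip: 10.50.90.40
  netmask_cidr: /26
pullsecret: '<paste_your_openshift_pull_secret_here>'
base_domain: ibm.com
base_domain_prefix: wdc01
sl_ocp_host_prefix: dev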
Hint: Make sure the status of your <infrastructure_private_vlan_number> is Route Through.
Execute the following bash scripts to run the Ansible playbook:
$ chmod +x create_ocp_instance.sh delete_ocp_instance.sh
$ ./create_ocp_instance.sh
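When you are finished with the environment (for example, after a short-lived test cluster), the companion delete script tears the instances back down, which stops further VSI charges:

$ ./delete_ocp_instance.sh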
Verify OpenShift installation
In this section, you verify the OpenShift installation to ensure it was created successfully.
Verify virtual machine
- Log in to IBM Cloud or IC4G.
- On the menu, click Infrastructure to view the list of virtual server devices.
- Click Devices -> Device List to find the server that was created. You should see your server device listed.
Verify OpenShift
The last task of the Ansible playbook prints out a message with the userid and password, such as the following:

TASK [validate_ic4g_ocp_servers : debug] **********************************************************
ok: [10.5X.9X.4X] => {
    "openshiftcomplete.stderr_lines": [
        "level=info msg=\"Waiting up to 30m0s for the cluster at https://api.wdc01.ibm.com:6443 to initialize...\"",
        "level=info msg=\"Waiting up to 10m0s for the openshift-console route to be created...\"",
        "level=info msg=\"Install complete!\"",
        "level=info msg=\"To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/opt/ocp4/auth/kubeconfig'\"",
        "level=info msg=\"Access the OpenShift web-console here: https://console-openshift-console.apps.wdc01.ibm.com\"",
        "level=info msg=\"Login to the console with user: kubeadmin, password: eXh7T-YkjJ6-VCXji-DfZXV\""
    ]
}
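With the kubeconfig path from the log output, you can verify the cluster directly from the helper node. These are standard oc commands; the path and host names come from the example output above, so your values will differ:

$ export KUBECONFIG=/opt/ocp4/auth/kubeconfig
$ oc whoami
$ oc get nodes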
Add the helper node IP address to your DNS or /etc/hosts file. For example:
<helpernode_ip_address> console-openshift-console.apps.<base_domain_prefix>.<base_domain> oauth-openshift.apps.<base_domain_prefix>.<base_domain>
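Before opening the browser, you can confirm that the console host name now resolves from your workstation (ping consults /etc/hosts on both macOS and Linux):

$ ping -c 1 console-openshift-console.apps.<base_domain_prefix>.<base_domain>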
Open a web browser and type in the following URL:
https://console-openshift-console.apps.<base_domain_prefix>.<base_domain>
Log in with the OpenShift user ID (such as kubeadmin, which is used in this example).
Summary
Hopefully, you found this tutorial helpful and educational for deploying OpenShift 4.X on the classic infrastructure virtual server layer. Once the OpenShift Container Platform 4.X solution is deployed, it inherits the advantages of IBM Cloud, such as elasticity of CPU, memory, and disk. The Ansible playbook supports any number of worker nodes in the cluster from the day-one installation, and the deployment inherits the security features provided by CoreOS.