Introduction
The recently launched IBM Cloud Pak System can help accelerate your implementation of on-premises Kubernetes platforms. It comes with support for automated deployment and configuration of both Red Hat OpenShift Container Platform (RHOCP) and IBM Cloud Private. This tutorial focuses on Red Hat OpenShift Container Platform, but insights about using IBM Cloud Private on Cloud Pak System can be found in the tutorial Getting started with the IBM Cloud Private Pattern for IBM PureApplication.
For Red Hat OpenShift, it’s important to know that there are several different offerings available:
OpenShift Online
A fully managed public cloud offering for quickly deploying applications.
OpenShift Dedicated
A fully managed private cloud OpenShift cluster hosted on Amazon Web Services (AWS).
OpenShift Container Platform (RHOCP)
An enterprise OpenShift cluster deployed on your own on-premises infrastructure. (RHOCP was previously called OpenShift Enterprise, but the name was changed with the release of version 3.3.)
A more detailed comparison of these offerings can be found on the OpenShift website. Since Cloud Pak System is an on-premises appliance, it only provides support for the RHOCP offering. In this tutorial, you'll learn how to deploy RHOCP on a Cloud Pak System. We wrote these steps assuming that the Cloud Pak System does not have direct access to the internet, and we used Cloud Pak System 2.3.0.1 firmware.
Estimated time
There are many factors that influence how quickly this tutorial can be completed. The majority of time is typically spent in the Prerequisites section. Once this is done, working through the remaining steps should take one to two hours.
Prerequisites
Before you can deploy an OpenShift cluster on Cloud Pak System, a number of prerequisites need to be in place. IBM Knowledge Center provides a good starting point for those prerequisites:
- IBM Cloud Pak System 2.3.0.0 or higher (W3500/W3550), or IBM PureApplication Platform 2.2.6.0 (W1500/W2500/W3500/W3550). Support for IBM PureApplication System W3700 is planned for a future release.
- IBM OS image for Red Hat Linux Systems (Red Hat Base OS Image) Version 7.5 or your own OS image with Red Hat Enterprise Linux (RHEL) 7.5 or higher.
- Red Hat Satellite 6 service connected to an external Red Hat Satellite Server (RHSS) or an internal RHSS.
- Subscriptions for RHOCP and RHEL Version 7.0 enabled and synchronized in RHSS. (Note: Cloud Pak System comes with subscriptions for RHEL and RHSS. This is different from the subscription for RHOCP, which is not included with Cloud Pak System.)
The first three prerequisites are fairly obvious, but we will take you through the details of exactly what is needed for RHSS in the Prepare Red Hat Satellite Server section. Since we assume that you are working in an environment without direct internet access, the Create a Private Docker Registry section shows how to create your own private Docker registry to support the offline installation of OpenShift.
Prepare Red Hat Satellite Server
Most companies choose to integrate their (Intel-based) Cloud Pak System and PureApplication Platform client virtual machines (VMs) with RHSS 6. For VMs deployed with RHEL 6 or 7, RHSS provides a straightforward process for performing RHEL OS maintenance, such as installing security patches on a regular basis. It also greatly simplifies the installation of new RPM packages and their dependencies; for example, you can simply run `yum install <package-name>` from a shell. You can either deploy RHSS 6 on Cloud Pak System itself, or integrate with an existing RHSS that is already in place. IBM recommends using RHSS 6.4 or higher with Cloud Pak System, and IBM Support details how to set it up.
Assuming you have RHSS 6.4 or higher in place, some additional steps are required to deploy RHOCP on Cloud Pak System:
Step 1. Activate the OpenShift subscription codes.
Log on to the Red Hat Customer Portal, enter your subscription activation number in the corresponding field, and click Next, as shown in Figure 1.
Figure 1: Activate your subscription in Red Hat Customer Portal
Select Start a new subscription, as shown in Figure 2, click Next, and complete the rest of the subscription activation process. (Note: Activation of your subscription can take between 30 minutes and 48 hours. Contact Red Hat support if your subscription is not activated within that time frame.)
Figure 2: Start a new subscription in Red Hat Customer Portal
To update the existing Red Hat Satellite manifest, go to Subscriptions and Allocations on the Red Hat Customer Portal and search for your existing Red Hat Satellite manifest name, as shown in Figure 3.
Note: Refer to the *How do I create RedHat manifest file and deploy RedHat Satellite Server 6.4_V6_6* guide provided by IBM Support.
Figure 3: Add your newly activated OpenShift subscription to your Red Hat Satellite manifest.
Select your manifest and add the newly activated RHOCP subscriptions to it.
Now you need to refresh the manifest on your RHSS. To do so, log on to your RHSS as an administrator, go to Content > Subscriptions and click Manage Manifest, as shown in Figure 4.
Figure 4: Manage your Red Hat Satellite manifest from RHSS.
Click Refresh, as shown in Figure 5, to synchronize the subscriptions added to the existing Satellite manifest.
Figure 5: Refresh your Red Hat Satellite manifest from RHSS.
Once the RHOCP subscription is associated with the existing Satellite server manifest in your Red Hat account and synchronized, you should be able to see the RHOCP subscription in RHSS 6.x under Content > Subscriptions, as shown in Figure 6.
Figure 6: Red Hat OpenShift subscriptions in RHSS.
Step 2. Now you will enable several Red Hat repositories in Red Hat Satellite Server. RHOCP 3.10 and 3.11 each require a different set of Red Hat repositories, so you can enable just the ones you need, or simply enable all five listed in the table below.
| Repository name | Repository identifier | Required for RHOCP |
|---|---|---|
| Red Hat Enterprise Linux 7 Server Extra RPMs | `rhel-7-server-extras-rpms` | 3.10 and 3.11 |
| Red Hat OpenShift Container Platform 3.11 (RPMs) | `rhel-7-server-ose-3.11-rpms` | 3.11 |
| Red Hat Ansible Engine 2.6 RPMs for Red Hat Enterprise Linux 7 Server | `rhel-7-server-ansible-2.6-rpms` | 3.11 |
| Red Hat OpenShift Container Platform 3.10 RPMs x86-64 | `rhel-7-server-ose-3.10-rpms` | 3.10 |
| Red Hat Ansible Engine 2.4 RPMs for Red Hat Enterprise Linux 7 Server | `rhel-7-server-ansible-2.4-rpms` | 3.10 |
Go to Content > Red Hat Repositories and search for one of the repositories listed under Available Repositories. When the desired repository shows up, expand it and enable the repository by clicking the plus sign next to it. (Figure 7 shows how to do this for the RHOCP 3.11 (RPMs) repository.) Once enabled, you should see it listed under Enabled Repositories, as shown in Figure 8. Repeat this step for each of the Red Hat repositories you need.
Once these repositories are enabled, RHSS downloads the RPMs they contain.
Figure 7: Enabling additional Red Hat repositories in RHSS.
Figure 8: Viewing enabled Red Hat repositories in RHSS.
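If you prefer to script this step, the same repository sets can be enabled with Satellite's `hammer` CLI. Below is a minimal sketch for two of the RHOCP 3.11 repositories; it assumes the organization name "Default Organization" and that the product and repository set names match your Satellite, so verify them first with `hammer repository-set list`:

```bash
# Sketch: enable the RHOCP 3.11 repository sets from the command line.
# Adjust --organization and the product/name values to match the output
# of "hammer repository-set list" on your Satellite.
hammer repository-set enable \
  --organization "Default Organization" \
  --product "Red Hat Enterprise Linux Server" \
  --basearch "x86_64" \
  --name "Red Hat Enterprise Linux 7 Server - Extras (RPMs)"

hammer repository-set enable \
  --organization "Default Organization" \
  --product "Red Hat OpenShift Container Platform" \
  --basearch "x86_64" \
  --name "Red Hat OpenShift Container Platform 3.11 (RPMs)"
```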
Step 3. Go to Content > Sync Status, as shown in Figure 9, to confirm that the Result column shows a status of `Syncing Complete` for each repository, or that the repositories have been downloaded. You may need to trigger the synchronization process manually, or create a schedule that performs the synchronization automatically at regular intervals.
Figure 9: Confirming synchronization status of repositories in RHSS.
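Instead of waiting for a scheduled synchronization, you can also trigger one from the command line with `hammer`. A sketch, again assuming the organization name "Default Organization":

```bash
# Sketch: synchronize all repositories of the RHOCP product immediately.
hammer product synchronize \
  --organization "Default Organization" \
  --name "Red Hat OpenShift Container Platform"
```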
Step 4. Go to Content > Content Views and find `default_contentview`. This view is associated with any VM that gets deployed and registered with RHSS; for example, it determines which RPMs the VM can "see." This view needs to be updated to include the newly added repositories.
Select the Yum Content tab. Under the Repository Selection section, select Add. You should see all Red Hat repositories you enabled in Step 2. Select all of the repositories and click Add Repositories, as shown in Figure 10. When done, click Publish New Version.
Figure 10: Adding new Red Hat repositories to default_contentview in RHSS.
Step 5. Before you proceed, make sure that the `default_contentview` view you just updated is associated with the activation key you use on Cloud Pak System. (Note: Consider using a separate activation key and content view for RHOCP. This allows your RHOCP subscription to be associated with only a subset of the VMs deployed on Cloud Pak System.)
Confirm the activation key is associated with your content view by navigating to Content > Activation Keys, as shown in Figure 11.
Figure 11: The `osh` activation key is associated with `default_contentview` in RHSS.
Within Cloud Pak System, on the Shared Service Instances page, the deployed `Red Hat Satellite Six Service` instance shows the same activation key, as shown in Figure 12.

Figure 12: The `Red Hat Satellite Six Service` instance is associated with the `osh` activation key.
Step 6. From a deployed VM that has registered with RHSS, confirm that an RPM package from each of the repositories is at your disposal through the `yum` command line tool. The table below lists the repositories and a test RPM for each.
| Repository | RPM |
|---|---|
| `rhel-7-server-ose-3.11-rpms` | `atomic-openshift-hyperkube` |
| `rhel-7-server-ansible-2.6-rpms` | `ansible` |
| `rhel-7-server-ose-3.10-rpms` | `atomic-openshift-hyperkube` |
| `rhel-7-server-ansible-2.4-rpms` | `ansible` |
| `rhel-7-server-extras-rpms` | `docker.x86_64` |
By default, only the repositories `rhel-7-server-rh-common-rpms` and `rhel-7-server-rpms` are automatically enabled when a VM is deployed on Cloud Pak System. When you deploy the RHOCP patterns, the software component of the pattern automatically enables the additional repositories it needs. However, on the deployed VM that we use for test purposes, we have to enable these repositories manually before we can prove that an RPM from each can be installed.
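You can confirm this from the test VM itself; `subscription-manager` lists the repositories that are currently enabled:

```bash
# On a freshly deployed VM, only rhel-7-server-rpms and
# rhel-7-server-rh-common-rpms should show up in this list.
subscription-manager repos --list-enabled 2>/dev/null
```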
Enable the Red Hat repositories on the deployed VM. For example, if you plan to deploy RHOCP 3.11, enable these three:

- `rhel-7-server-extras-rpms`
- `rhel-7-server-ose-3.11-rpms`
- `rhel-7-server-ansible-2.6-rpms`
While logged on as root, run the command below to enable the `rhel-7-server-extras-rpms` repository:

```
-bash-4.2# subscription-manager repos --enable rhel-7-server-extras-rpms 2>/dev/null
Repository ‘rhel-7-server-extras-rpms’ is enabled for this system.
```
(Note: We redirected `stderr` output to `/dev/null` for the `subscription-manager` and `yum` commands. Normally this is not required; when the directory `/var/log/rhsm` is present, no `stderr` output is shown on the command line.)

Run the command `yum info <RPM>` to demonstrate that the RPM is at your disposal, as shown below. Repeat the command for each RPM you need:

- `ansible`
- `atomic-openshift-hyperkube`
- `docker`
```
-bash-4.2# yum info docker
Loaded plugins: package_upload, product-id, search-disabled-repos, subscription-manager
(...output truncated...)
[id:rhel-7-server-extras-rpms Red Hat Enterprise Linux 7 Server - Extras (RPMs)]
Available Packages
Name        : docker
Arch        : x86_64
Epoch       : 2
Version     : 1.13.1
Release     : 103.git7f2769b.el7
Size        : 65 M
Repo        : installed
From repo   : rhel-7-server-extras-rpms
Summary     : Automates deployment of containerized applications
URL         : https://github.com/docker/docker
License     : ASL 2.0
Description : Docker is an open-source engine that automates the deployment of any
            : application as a lightweight, portable, self-sufficient container that will
            : run virtually anywhere.
```
Finally, run the command `yum install <RPM>` to install the RPM. This validates that your RHSS has a local copy of the actual RPM and any dependencies. Again, repeat the command for each of the RPMs:

- `ansible`
- `atomic-openshift-hyperkube`
- `docker`
```
-bash-4.2# yum install docker 2> /dev/null
Loaded plugins: enabled_repos_upload, package_upload, product-id, search-disabled-repos, subscription-manager
rhel-7-server-extras-rpms                                 | 2.0 kB  00:00
rhel-7-server-rpms                                        | 2.0 kB  00:00
rhel-7-server-satellite-tools-6.4-rpms                    | 2.1 kB  00:00
Resolving Dependencies
--> Running transaction check
---> Package docker.x86_64 2:1.13.1-103.git7f2769b.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package   Arch     Version                      Repository                Size
================================================================================
Installing:
 docker    x86_64   2:1.13.1-103.git7f2769b.el7  rhel-7-server-extras-rpms 18 M

Transaction Summary
================================================================================
Install  1 Package

Total download size: 18 M
Installed size: 65 M
Is this ok [y/d/N]: y
Downloading packages:
docker-1.13.1-103.git7f2769b.el7.x86_64.rpm               |  18 MB  00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 2:docker-1.13.1-103.git7f2769b.el7.x86_64                    1/1
Uploading Package Profile
  Verifying  : 2:docker-1.13.1-103.git7f2769b.el7.x86_64                    1/1

Installed:
  docker.x86_64 2:1.13.1-103.git7f2769b.el7

Complete!
```
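If you prefer to run the whole verification in one pass, the commands above can be wrapped in a short loop. A sketch for RHOCP 3.11, run as root on the test VM:

```bash
# Enable the three RHOCP 3.11 repositories, then confirm that each
# test RPM is visible to yum.
for repo in rhel-7-server-extras-rpms \
            rhel-7-server-ose-3.11-rpms \
            rhel-7-server-ansible-2.6-rpms; do
  subscription-manager repos --enable "$repo" 2>/dev/null
done

for rpm in docker ansible atomic-openshift-hyperkube; do
  if yum info "$rpm" >/dev/null 2>&1; then
    echo "$rpm: available"
  else
    echo "$rpm: NOT available"
  fi
done
```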
You have completed the preparation of RHSS for deployment of RHOCP.
Create a Private Docker Registry
The deployment of RHOCP requires access to a Docker registry containing the required Docker images. Red Hat provides access to those through `registry.redhat.io`; however, most Cloud Pak Systems do not allow the deployment of VMs with direct internet access. So you must instead use a private Docker registry, populated with the Docker images for RHOCP.
Deploy a Private Docker Registry on Cloud Pak System
Cloud Pak System has a pattern to simplify the creation of your own Private Docker Registry. If you already have a Private Docker Registry in place, you can skip this step and go to the Populate the Private Docker Registry with OpenShift Docker images section.
Step 1. While logged on to IBM Cloud Pak System, go to Patterns > Virtual System Patterns and look for the pattern called Docker Private Registry, as shown in Figure 13. By default, it will deploy a VM with RHEL 7, Docker version 18.06.1-ce, and the image registry installed, along with 50 GB of local storage under `/var/docker-registrystorage`. This can all be modified, but that is beyond the scope of this tutorial.
Figure 13: Docker Private Registry virtual system pattern in IBM Cloud Pak System.
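If you do not have a Cloud Pak System at hand and just want to experiment with the registry part, a bare-bones registry can be stood up on any Docker host with the community `registry:2` image. This is only a rough sketch; unlike the pattern, it sets up neither TLS nor authentication:

```bash
# Throw-away Docker registry listening on port 5000, storing images
# under /var/docker-registrystorage on the host (no TLS, no auth).
docker run -d --name registry \
  -p 5000:5000 \
  -v /var/docker-registrystorage:/var/lib/registry \
  registry:2
```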
Step 2. Once deployed, log on to the VM to confirm the version of RHEL and Docker.
```
-bash-4.2# docker --version
Docker version 18.06.1-ce, build e68fc7a
```
Also, validate that you can log on to the Docker Private Registry from the command line.
```
-bash-4.2# docker login -u root -p passw0rd ipas-pvm-233-035.purescale.raleigh.ibm.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
```
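As the warnings suggest, you can keep the password out of the command line (and your shell history) by piping it on `stdin`; the password file used here is just an illustration:

```bash
# Log on without exposing the password on the command line.
# /root/registry-password.txt is a hypothetical root-only file
# containing the registry password.
cat /root/registry-password.txt | docker login -u root --password-stdin \
  ipas-pvm-233-035.purescale.raleigh.ibm.com
```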
Populate the Private Docker Registry with OpenShift Docker images
We assume that your Docker Private Registry does not have internet access, so you will need a VM or server to populate your Private Docker Registry with the RHOCP Docker images. This VM/server should meet the following requirements:
- Local installation of Docker
- Internet access to Docker public registries, in particular `registry.redhat.io`
- Network access to your Docker Private Registry
Figure 14 illustrates how this VM/server is used to pull the Docker images from `registry.redhat.io`, tag them, and push them to your Docker Private Registry `registry.my.domain`.
Figure 14: Populating your Docker Private Registry with RHOCP Docker images.
The commands below show how to pull the RHOCP Docker image `registry.redhat.io/openshift3/apb-base:v3.11.104`, tag it, and then push it to your Private Docker Registry `registry.my.domain`.

(Note: RHOCP is installed by Ansible, which references the Docker images both by major version only and by major and minor version. That is why we have to tag each Docker image twice: once with the major version, and another time with the major and minor version.)
```
$ docker pull registry.redhat.io/openshift3/apb-base:v3.11.104
$ docker tag registry.redhat.io/openshift3/apb-base:v3.11.104 registry.my.domain/openshift3/apb-base:v3.11.104
$ docker tag registry.redhat.io/openshift3/apb-base:v3.11.104 registry.my.domain/openshift3/apb-base:v3.11
$ docker push registry.my.domain/openshift3/apb-base:v3.11.104
$ docker push registry.my.domain/openshift3/apb-base:v3.11
```
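Since RHOCP needs a few dozen images, it is worth scripting this pull/tag/push sequence. The sketch below mirrors a list of images in one pass; the three image names are only examples, so derive the complete list for your RHOCP version from the OpenShift documentation:

```bash
SRC=registry.redhat.io
DST=registry.my.domain
FULL=v3.11.104   # major and minor version
MAJOR=v3.11      # major version only

# Example subset; extend with the full RHOCP 3.11 image list.
for image in openshift3/apb-base openshift3/ose-pod openshift3/ose-deployer; do
  docker pull "$SRC/$image:$FULL"
  docker tag  "$SRC/$image:$FULL" "$DST/$image:$FULL"
  docker tag  "$SRC/$image:$FULL" "$DST/$image:$MAJOR"
  docker push "$DST/$image:$FULL"
  docker push "$DST/$image:$MAJOR"
done
```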
Once you have completed these steps for all of the RHOCP Docker images, validate that you can pull one of the images, as shown below. When this works, you can proceed to deploying the RHOCP patterns.
```
-bash-4.2# docker login -u root -p ******** registry.my.domain
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
-bash-4.2# docker pull registry.my.domain/openshift3/apb-base:v3.11.104
v3.11.104: Pulling from openshift3/apb-base
256c6fd715f2: Pull complete
aa58f5fa817d: Pull complete
058abc0fb4a1: Pull complete
f125ddfe3974: Pull complete
Digest: sha256:d6b220d5df55dfbb17a2dda41da73167b4a9220bfa2c3fc4714007694db549fc
Status: Downloaded newer image for registry.my.domain/openshift3/apb-base:v3.11.104
-bash-4.2# docker images
REPOSITORY                               TAG         IMAGE ID       CREATED        SIZE
registry.my.domain/openshift3/apb-base   v3.11.104   da232eb5a517   4 months ago   1.2GB
```
(Note: For more background information, refer to OpenShift documentation.)
Deployment
Available Red Hat OpenShift topologies
Before we deploy a new RHOCP cluster, let's briefly discuss the terminology used to refer to the different kinds of nodes.
Bastion node
This is the node used to trigger the installation of RHOCP. Often this node is used to interact with the RHOCP environment using command line utilities, as it is not recommended to perform any management activities from the master nodes.
Equivalent in IBM Cloud Private: ICP boot node
Master nodes
These are the Kubernetes master nodes that are the brains of the Kubernetes cluster. They run the OpenShift master components, including the API Server and etcd. The master components manage nodes in the Kubernetes cluster and schedule pods to run on nodes. An odd number of master nodes, such as three or five, is required for high availability, as quorum is needed for the etcd distributed key/value store.
Equivalent in IBM Cloud Private: ICP master node
Compute nodes
These are the Kubernetes worker nodes, running the pods that host your applications. These nodes run containers created by the end users of the OpenShift cluster. Application workload is distributed across the compute nodes as determined by the OpenShift scheduler. For high availability, multiple replicas of an application container can be provisioned across the compute nodes. An OpenShift cluster contains a minimum of one compute node, but supports multiple to provide high availability for application containers deployed on the cluster.
Equivalent in IBM Cloud Private: ICP worker node
Infra nodes
These act as proxy nodes, forwarding incoming HTTP traffic to the pods on the application nodes that run your applications. Again, Kubernetes supports multiple infra nodes to provide high availability. These nodes can also be used to run RHOCP components such as the image registry and monitoring, as well as optional components such as metering and logging.
Equivalent in IBM Cloud Private: ICP proxy node
IBM Cloud Pak System includes virtual system patterns that support two RHOCP topologies, as shown in Figure 15.
Figure 15: IBM Cloud Pak System patterns for RHOCP.
OpenShift Container Platform Pattern – GlusterFS
This pattern deploys the RHOCP topology shown in Figure 16. Note that this pattern does not deploy a Bastion node, as the installation is done from the single master node.
Figure 16: Topology corresponding to the `OpenShift Container Platform Pattern - GlusterFS` pattern.
OpenShift Container Platform with HA Pattern – GlusterFS
This pattern deploys the RHOCP topology shown in Figure 17. Note that unlike the other pattern, this one uses a Bastion node to perform the installation on the other nodes. Once installation has completed, this VM is no longer actively used.
Figure 17: Topology corresponding to the `OpenShift Container Platform with HA Pattern - GlusterFS` pattern.
As indicated in Figure 17, GlusterFS storage is optional. As you will see further down in this tutorial, you can select GlusterFS or Custom at deployment time. If you select Custom, GlusterFS does not get configured and OpenShift will be installed without a persistent storage provider. You are then free to install and configure a storage provider of your own choice. For more details, refer to OpenShift documentation.
When opting for GlusterFS, note that it is deployed inside containers on the OpenShift Compute nodes, so it shares those nodes with any containerized applications you may deploy. Also note that GlusterFS requires a minimum of three nodes, due to the need for quorum to ensure consistency. Should you add additional Compute nodes afterwards, GlusterFS remains on the three Compute nodes originally deployed.
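Once the cluster is up, you can verify on which Compute nodes the GlusterFS pods landed. A quick check from a master node; `app-storage` is the namespace used for the GlusterFS/heketi components in this deployment, as the route listing later in this tutorial shows:

```bash
# Show the GlusterFS and heketi pods along with the nodes they run on.
oc get pods -n app-storage -o wide
```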
Deploy Red Hat OpenShift cluster
Step 1. Go to Patterns > Virtual System Patterns and look for the pattern called `OpenShift Container Platform with HA Pattern – GlusterFS`. Click the deploy icon, as shown in Figure 18.
Figure 18: Deploying the `OpenShift Container Platform with HA Pattern – GlusterFS` pattern.
Step 2. Enter or override the following pattern attributes:
- Password (root): password for the root user
- Password (virtuser): password for the virtuser user
- OpenShift version: 3.11
- OpenShift Registry Host Name: fully qualified hostname of your Docker Private Registry
- OpenShift Registry User Name: username for your Docker Private Registry
- OpenShift Registry User Password: password for your Docker Private Registry
- OpenShift Administrator: username for the OpenShift admin user
- OpenShift Administrator Password: password for the OpenShift admin user
Step 3. Accept the default for the rest of the attributes. Make sure you are happy with the values on the panel on the left-hand side for the Name, Environment Profile, Cloud Group, IP Group, and Priority fields. Click Quick Deploy to start the deployment, as shown in Figure 19.
Figure 19: Starting deployment of the `OpenShift Container Platform with HA Pattern – GlusterFS` pattern.
Step 4. Upon completion, your Virtual System Instance page should look as shown in Figure 20. Note within the History section that the final step of deployment takes more than 50 minutes to complete.
Figure 20: Deployed `OpenShift Container Platform with HA Pattern – GlusterFS` pattern instance.
If deployment is not successful, log on to the Bastion node (the `Bastion_Host` virtual machine) and review the `/opt/IBM/OCP/logs/ocp.log` and `/root/openshift-ansible.log` logs. For example, if there are issues pulling the RHOCP Docker images from your private registry, you will see messages like these in `/root/openshift-ansible.log`:
```
2019-09-30 22:43:37,536 p=17368 u=root |  Failure summary:

  1. Hosts:    ipas-pvm-233-012.purescale.raleigh.ibm.com, ipas-pvm-233-013.purescale.raleigh.ibm.com, ipas-pvm-233-014.purescale.raleigh.ibm.com, ipas-pvm-233-015.purescale.raleigh.ibm.com, ipas-pvm-233-016.purescale.raleigh.ibm.com, ipas-pvm-233-019.purescale.raleigh.ibm.com, ipas-pvm-233-021.purescale.raleigh.ibm.com
     Play:     OpenShift Health Checks
     Task:     Run health checks (install) - EL
     Message:  One or more checks failed
     Details:  check "docker_image_availability":
               One or more required container images are not available:
               registry.my.domain/openshift3/ose-deployer:v3.11,
               registry.my.domain/openshift3/ose-docker-registry:v3.11,
               registry.my.domain/openshift3/ose-haproxy-router:v3.11,
               registry.my.domain/openshift3/ose-pod:v3.11,
               registry.my.domain/openshift3/registry-console:v3.11
               Checked with: skopeo inspect [--tls-verify=false] [--creds=<user>:<pass>] docker://<registry>/<image>
```
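You can reproduce this health check by hand from the Bastion node, using the same `skopeo` command the installer reports and substituting your own registry credentials:

```bash
# Manually verify that one of the reported images is reachable in the
# private registry (replace root:passw0rd with your registry credentials).
skopeo inspect --tls-verify=false --creds=root:passw0rd \
  docker://registry.my.domain/openshift3/ose-deployer:v3.11
```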
Step 5. From your newly deployed virtual system instance, expand the `Bastion_Host` virtual machine, as shown in Figure 21.
Figure 21: `Bastion_Host` of your newly deployed RHOCP virtual system instance.
Post deployment steps
Your Red Hat OpenShift Container Platform environment has seven routes to pods defined on its Infra (proxy) nodes. Log on to one of the three master nodes and run the command `oc get routes --all-namespaces` to obtain those routes. You will need this information to ensure that the appropriate DNS hostnames are used when accessing the environment.
```
-bash-4.2# oc version
oc v3.11.98
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://ipas-pvm-233-011.purescale.raleigh.ibm.com:8443
openshift v3.11.98
kubernetes v1.11.0+d4cacc0

-bash-4.2# oc get routes --all-namespaces
NAMESPACE              NAME                HOST/PORT                                                                             PATH   SERVICES           PORT    TERMINATION          WILDCARD
app-storage            heketi-storage      heketi-storage-app-storage.appsipas-pvm-233-011.purescale.raleigh.ibm.com                   heketi-storage     <all>                        None
default                docker-registry     docker-registry-default.appsipas-pvm-233-011.purescale.raleigh.ibm.com                      docker-registry    <all>   passthrough          None
default                registry-console    registry-console-default.appsipas-pvm-233-011.purescale.raleigh.ibm.com                     registry-console   <all>   passthrough          None
openshift-console      console             console.appsipas-pvm-233-011.purescale.raleigh.ibm.com                                      console            https   reencrypt/Redirect   None
openshift-monitoring   alertmanager-main   alertmanager-main-openshift-monitoring.appsipas-pvm-233-011.purescale.raleigh.ibm.com       alertmanager-main  web     reencrypt            None
openshift-monitoring   grafana             grafana-openshift-monitoring.appsipas-pvm-233-011.purescale.raleigh.ibm.com                 grafana            https   reencrypt            None
openshift-monitoring   prometheus-k8s      prometheus-k8s-openshift-monitoring.appsipas-pvm-233-011.purescale.raleigh.ibm.com          prometheus-k8s     web     reencrypt            None
```
Note that each route here contains a Domain Name System (DNS) name that is a concatenation of the following (taking the `docker-registry` route as an example):

- `docker-registry-default` is the unique name for the service running in RHOCP that is being routed.
- `.appsipas-pvm-233-011.purescale.raleigh.ibm.com` is the unique subdomain for the service, derived from the fully qualified domain name (FQDN) of the master VM (`ipas-pvm-233-011.purescale.raleigh.ibm.com`) with `.apps` in front.
In order for the RHOCP cluster to be used, these seven DNS entries must be mapped onto the three proxy nodes of the RHOCP cluster. Remember that the Proxy nodes route the traffic to the pods on the Application nodes.
Typically, the mapping of DNS entries onto proxy nodes would be solved with a virtual IP address that load balances requests across the three proxy nodes. The seven DNS entries would point to the virtual IP address, ensuring a highly available solution. Alternatively, a single wildcard DNS entry `*.appsipas-pvm-233-011.purescale.raleigh.ibm.com` pointing to the virtual IP address could also be considered. For instructions on how to configure wildcard DNS entries, refer to the Red Hat OpenShift Container Platform 3.11 documentation.
For the purpose of this tutorial, add the seven DNS entries to your local hosts file and point them to the first proxy node. Simply replace `<ip_infra1>` with the actual IP address of your first proxy node and include the entries below in your local `/etc/hosts` file (or its equivalent on Windows: `C:\Windows\System32\Drivers\etc\hosts`). Obviously, this makes the first proxy node a single point of failure.
```
<ip_infra1> heketi-storage-app-storage.appsipas-pvm-233-011.purescale.raleigh.ibm.com
<ip_infra1> docker-registry-default.appsipas-pvm-233-011.purescale.raleigh.ibm.com
<ip_infra1> registry-console-default.appsipas-pvm-233-011.purescale.raleigh.ibm.com
<ip_infra1> console.appsipas-pvm-233-011.purescale.raleigh.ibm.com
<ip_infra1> alertmanager-main-openshift-monitoring.appsipas-pvm-233-011.purescale.raleigh.ibm.com
<ip_infra1> grafana-openshift-monitoring.appsipas-pvm-233-011.purescale.raleigh.ibm.com
<ip_infra1> prometheus-k8s-openshift-monitoring.appsipas-pvm-233-011.purescale.raleigh.ibm.com
```
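After updating the hosts file, a quick sanity check confirms that the names resolve and that the router on the first proxy node answers for the console (`-k` skips validation of the self-signed certificate):

```bash
# Confirm that name resolution picks up the hosts file entry...
getent hosts console.appsipas-pvm-233-011.purescale.raleigh.ibm.com
# ...and that the console responds through the first proxy node.
curl -k -I https://console.appsipas-pvm-233-011.purescale.raleigh.ibm.com
```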
Access your RHOCP cluster
Once you ensure that you have the right DNS entries in place, you should be able to access your RHOCP cluster.
Step 1. Click OpenShift Container Platform Console, as shown in Figure 22. This should take you to the RHOCP console on one of the three master nodes. (Note: In Cloud Pak System 2.3.0.0, there is a limitation with the Red Hat OpenShift cluster that gets deployed. The console link only takes you to one of the three master nodes; you cannot access the console from the other ones. This will be addressed in a future release.)
Figure 22: OpenShift Container Platform console link.
Step 2. Accept any TLS certificate warnings as your cluster has been configured with a self-signed certificate. Log on with the credentials you provided at the time of deployment of your RHOCP pattern, as shown in Figure 23.
Figure 23: Logging on to the OpenShift Container Platform console.
Step 3. Once logged on, you should be redirected to the console page shown in Figure 24.
Figure 24: Service Catalog in the Red Hat OpenShift Container Platform console.
Step 4. From the drop-down menu at the top of the page, select Cluster Console, as shown in Figure 24.
Step 5. In the Cluster Console, you can explore the topology of your cluster. Click Administration > Nodes to show all the nodes of the cluster, as shown in Figure 25. Note that these match what was deployed as part of your RHOCP pattern.
Figure 25: List of nodes displayed in the Cluster Console.
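The same list can be retrieved from the command line on one of the master nodes:

```bash
# List all cluster nodes and their roles, matching Figure 25.
oc get nodes
```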
Conclusion
With this tutorial, you should be able to quickly get your RHOCP clusters up and running. We hope this simplifies and accelerates your deployment of software on top of OpenShift, including IBM Cloud Paks and other software that are critical to your business success.
The authors would like to thank the following for their help creating this tutorial: Prasad Ganiga, Shajeer Mohammed, Mallanagouda Patil, and Joe Wigglesworth.