This tutorial is part of the Learning path: Deploying Red Hat OpenShift Container Platform 4.x on IBM Power Systems Virtual Servers.
Advanced scenarios
Topics in “Advanced scenarios” | Type |
---|---|
Securely access IBM Cloud services from Red Hat OpenShift Container Platform deployed on IBM Power Systems Virtual Server | Tutorial |
Securing Red Hat OpenShift Container Platform 4.x clusters and web-based deployments using IBM Power Systems Virtual Server | Tutorial |
Backing up etcd data from a Red Hat OpenShift Container Platform cluster to IBM Cloud Object Storage | Tutorial |
Change worker node count on a deployed Red Hat OpenShift Container Platform 4.x cluster on IBM Power Systems Virtual Servers | Tutorial |
Configure access to a Red Hat OpenShift cluster on a private network in IBM Power Systems Virtual Server | Tutorial |
Introduction
This tutorial shows how to access a Red Hat OpenShift cluster on a private network when IBM Cloud Direct Link is available to connect to the IBM Cloud services.
Prerequisites
To access an OpenShift cluster on a private network, make sure that the following prerequisites are fulfilled:
- A running OpenShift Container Platform 4.x cluster deployed (refer to the Installing Red Hat OpenShift Container Platform 4.x on IBM Power Systems Virtual Servers tutorial for instructions). The cluster should be deployed on a private network with no outbound and inbound connectivity on the bastion node.
- IBM Cloud Direct Link configured.
- A Linux instance deployed in IBM Cloud Classic.
Estimated time
The approximate time to set up the Linux instance running in IBM Cloud Classic and configure the firewall rules is 5-10 minutes.
Steps
For this tutorial, we used a CentOS 7 Linux instance in IBM Cloud Classic. All network traffic is allowed on the private interface of the instance by configuring the appropriate security groups.
The public interface allows only Secure Shell (SSH), Hypertext Transfer Protocol Secure (HTTPS), port 6443, and outbound connectivity.
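In IBM Cloud Classic, this traffic policy is applied through security groups in the cloud console. If you also want to enforce it at the host level with firewalld, a minimal sketch (assuming eth0 is the private interface and eth1 is the public interface) could look like the following:
# Assumption: eth0 = private interface, eth1 = public interface.
# Trust all traffic on the private side; expose only SSH, HTTPS, and 6443 on the public side.
sudo firewall-cmd --permanent --zone=trusted --change-interface=eth0
sudo firewall-cmd --permanent --zone=public --change-interface=eth1
sudo firewall-cmd --permanent --zone=public --add-service=ssh
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
sudo firewall-cmd --reload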
Perform the following steps to access the OpenShift cluster on a private network:
Install and configure the Squid proxy.
- Run the following command to install the Squid proxy:
sudo yum install squid -y
- Run the following command to start the firewall service:
sudo service firewalld start
- Run the following command to add the Squid proxy service to firewall:
sudo firewall-cmd --zone=public --add-service=squid --permanent
- Run the following command to enable the Squid proxy service:
sudo systemctl enable squid
Run the following command to restart the firewall service:
sudo systemctl restart firewalld
You will need the Squid proxy URL in later steps. The URL is typically of the format http://<linux_instance_private_ip>:3128. For example, if the instance private IP is 10.85.142.218, then the Squid proxy URL is: http://10.85.142.218:3128
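Optionally, you can confirm that Squid is running and relaying traffic before moving on. A minimal check, assuming the example IP above and https://cloud.ibm.com as a test endpoint:
# Start Squid on the instance if it is not already running.
sudo systemctl start squid
# From a host that can reach the instance's private IP, send a request through the proxy;
# an HTTP response header indicates that the proxy is forwarding traffic.
curl -x http://10.85.142.218:3128 -I https://cloud.ibm.com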
Configure routes on the IBM Cloud instance.
Run the following command to add network routes to the IBM Power Systems Virtual Server private network used by OpenShift:
sudo route add -net <powervs_private_nw_subnet> gw <ibm_cloud_instance_gw>
Example:
sudo route add -net 192.168.25.0/24 gw 10.85.142.193
Run the following command to list the route entries in the kernel routing table:
$ ip r
default via 158.177.75.81 dev eth1
10.0.0.0/8 via 10.85.142.193 dev eth0
10.85.142.192/26 dev eth0 proto kernel scope link src 10.85.142.218
158.177.75.80/28 dev eth1 proto kernel scope link src 158.177.75.86
161.26.0.0/16 via 10.85.142.193 dev eth0
166.8.0.0/14 via 10.85.142.193 dev eth0
169.254.0.0/16 dev eth0 scope link metric 1002
169.254.0.0/16 dev eth1 scope link metric 1003
192.168.25.0/24 via 10.85.142.193 dev eth0
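Note that a route added with the route command does not persist across reboots. On CentOS 7, one way to persist it is through the interface's route file; a minimal sketch using the example values above:
# Persist the route to the Power Systems Virtual Server private network (example values).
echo "192.168.25.0/24 via 10.85.142.193 dev eth0" | sudo tee /etc/sysconfig/network-scripts/route-eth0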
Configure OpenShift to use Squid proxy running in the Linux instance in IBM Cloud Classic.
- Navigate to the directory in your system where you ran the openshift-install-powervs helper script.
Add the following Terraform variables to the var.tfvars file present in the current working directory. Refer to the description of these variables for more details.
Set the IBM Cloud Direct Link endpoint network CIDR variable in the var.tfvars file. This is the private network subnet of the IBM Cloud instance.
ibm_cloud_dl_endpoint_net_cidr = ""
Example:
ibm_cloud_dl_endpoint_net_cidr = "10.0.0.0/8"
Set the IBM Cloud http/squid proxy URL variable in the var.tfvars file. This is the Squid proxy URL, which is based on the private IP address of the Linux instance running in IBM Cloud Classic.
ibm_cloud_http_proxy = ""
Example:
ibm_cloud_http_proxy = "http://10.85.142.218:3128"
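Taken together, the additions to var.tfvars for the example values used in this tutorial look like this:
# IBM Cloud Direct Link endpoint network CIDR and Squid proxy URL (example values)
ibm_cloud_dl_endpoint_net_cidr = "10.0.0.0/8"
ibm_cloud_http_proxy = "http://10.85.142.218:3128"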
Apply the changes.
openshift-install-powervs create
Verify the configuration details after successfully applying the changes.
Run the following command on bastion or any of the OpenShift nodes to list the route entries in the kernel routing table:
$ ip route
default via 192.168.135.105 dev env2 proto static metric 100
**10.0.0.0/8 via 192.168.25.1 dev env3 proto static metric 101**
192.168.25.0/24 dev env3 proto kernel scope link src 192.168.25.172 metric 101
192.168.135.104/29 dev env2 proto kernel scope link src 192.168.135.106 metric 100
You will notice the IBM Cloud Direct Link endpoint CIDR listed in the output.
Run the following commands to fetch the proxy URLs used by your OpenShift cluster:
$ oc get proxy/cluster -o template --template {{.spec.httpProxy}}
http://10.85.142.218:3128
$ oc get proxy/cluster -o template --template {{.spec.httpsProxy}}
http://10.85.142.218:3128
You will notice that both URLs point to the Squid proxy running on the Linux instance in IBM Cloud Classic.
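As an additional check, you can confirm that the cluster stays healthy after switching to the proxy, for example by verifying that no cluster Operator is degraded:
# All cluster Operators should report Available=True and Degraded=False.
oc get clusteroperators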
Manage your OpenShift cluster using the OpenShift CLI terminal.
We can access the OpenShift cluster through the CLI and web console using the Linux instance running in IBM Cloud Classic in the following manner:
Note: Refer to the Getting started with the OpenShift CLI guide to install the OpenShift CLI.
Manage the cluster through CLI directly on the Linux instance running in IBM Cloud Classic.
You can log in to the OpenShift cluster through the CLI by adding the /etc/hosts entry with the private network IP address of the cluster (which is 192.168.25.147 in this tutorial).
192.168.25.147 api.test-ocp-6f2c.ibm.com console-openshift-console.apps.test-ocp-6f2c.ibm.com integrated-oauth-server-openshift-authentication.apps.test-ocp-6f2c.ibm.com oauth-openshift.apps.test-ocp-6f2c.ibm.com prometheus-k8s-openshift-monitoring.apps.test-ocp-6f2c.ibm.com grafana-openshift-monitoring.apps.test-ocp-6f2c.ibm.com example.apps.test-ocp-6f2c.ibm.com
To log in, run the oc login command and provide the username and password:
$ oc login --server=https://api.test-ocp-6f2c.ibm.com:6443
Authentication required for https://api.test-ocp-6f2c.ibm.com:6443 (openshift)
Username: kubeadmin
Password:
Login successful.
You have access to 59 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
Access the OpenShift cluster web console and CLI from the outside world.
Because the OpenShift cluster runs on a private network, you cannot access the cluster directly from outside the cluster. To access the cluster from an external server, we will make use of the public network available to the Linux instance running in IBM Cloud Classic.
The OpenShift API server listens on port 6443 and the OpenShift web console runs on the HTTPS port. We can now make use of the iptables utility on the Linux instance running in IBM Cloud Classic to set up IP packet filter rules. You can configure iptables so that traffic flowing into its public interface on ports 443 and 6443 is forwarded to the private interface.
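Note that FORWARD rules take effect only if IP forwarding is enabled in the kernel. If it is not already enabled on the Linux instance, you can turn it on as shown below (the sysctl drop-in file name is just an example):
# Enable IPv4 forwarding for the running system and persist it across reboots.
sudo sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ip-forward.conf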
Let us assume the following configuration on the OpenShift Cluster and the Linux instance running in IBM Cloud Classic:
OpenShift cluster:
- Private IP address: 192.168.25.147
Linux instance running in IBM Cloud Classic:
- Private interface: eth0
- Public interface: eth1
- Private IP address: 10.85.142.218
- Public IP address: 158.177.75.86
To accept connections and allow traffic in both directions between the interfaces, run the following commands:
iptables -A FORWARD -i eth1 -o eth0 -p tcp --syn --dport 6443 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -p tcp --syn --dport 443 -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
Network Address Translation (NAT) is a process used to map IP addresses from one group to another and is usually used to route traffic between two or more networks. In this tutorial, we will make use of NAT to correctly route the traffic on the public interface to the private network for port 6443.
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 6443 -j DNAT --to-destination 192.168.25.147
iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 6443 -d 192.168.25.147 -j SNAT --to-source 10.85.142.218
Add similar NAT rules to direct the traffic to the right IP address for the HTTPS port.
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 443 -j DNAT --to-destination 192.168.25.147
iptables -t nat -A POSTROUTING -o eth0 -p tcp --dport 443 -d 192.168.25.147 -j SNAT --to-source 10.85.142.218
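The iptables rules added above live only in memory and are lost on reboot. A minimal way to capture and restore them (the file path is just an example) is:
# Save the current rules to a file ...
sudo iptables-save | sudo tee /etc/iptables-openshift.rules
# ... and restore them after a reboot, for example from rc.local or a systemd unit.
sudo iptables-restore < /etc/iptables-openshift.rules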
To access the cluster, add the host entries on your external server using the public IP address of the Linux instance running in IBM Cloud Classic.
For Linux and Mac hosts, the file is located at /etc/hosts, and for Microsoft Windows hosts, it is located at c:\Windows\System32\Drivers\etc\hosts.
158.177.75.86 api.test-ocp-6f2c.ibm.com console-openshift-console.apps.test-ocp-6f2c.ibm.com integrated-oauth-server-openshift-authentication.apps.test-ocp-6f2c.ibm.com oauth-openshift.apps.test-ocp-6f2c.ibm.com prometheus-k8s-openshift-monitoring.apps.test-ocp-6f2c.ibm.com grafana-openshift-monitoring.apps.test-ocp-6f2c.ibm.com example.apps.test-ocp-6f2c.ibm.com
Now, run the oc login command on the external server:
$ oc login --server=https://api.test-ocp-6f2c.ibm.com:6443
Authentication required for https://api.test-ocp-6f2c.ibm.com:6443 (openshift)
Username: kubeadmin
Password:
Login successful.
You have access to 59 projects, the list has been suppressed. You can list all projects with 'oc projects'
Using project "default".
You should be able to access the web console as well by entering the OpenShift console URL in the browser of your external server.
Summary
This tutorial described how to set up a Squid proxy and configure routes on a Linux instance running in IBM Cloud Classic for your OpenShift cluster. As a final step, we configured the OpenShift cluster to use the Squid proxy and added the firewall and NAT rules needed to access the cluster through the Linux instance running in IBM Cloud Classic.