Securely access IBM Cloud services from Red Hat OpenShift Container Platform deployed on IBM Power Systems Virtual Server

This tutorial is part of the Learning path: Deploying Red Hat OpenShift Container Platform 4.x on IBM Power Systems Virtual Servers.

Introduction

This tutorial shows how to access IBM® Cloud® services using IBM Cloud Direct Link from your applications deployed on Red Hat® OpenShift® in IBM® Power Systems™ Virtual Server.

The IBM Cloud Direct Link service allows access to IBM Cloud resources over a private network from the Power Systems Virtual Server instance. The Power Systems Virtual Server offering includes a highly available 5 Gbps connection to IBM Cloud services at no cost, one per customer per data center.

You can read more about the service at: https://cloud.ibm.com/docs/power-iaas?topic=power-iaas-ordering-direct-link-connect

The following figure describes the reference deployment:

[Figure: Reference deployment architecture]

Tutorial prerequisites

Make sure that the following prerequisites are fulfilled before performing the steps in this tutorial:

  • An IBM Cloud Direct Link connection ordered for your Power Systems Virtual Server instance (see the link in the Introduction)
  • A Red Hat OpenShift Container Platform 4.x cluster deployed on IBM Power Systems Virtual Server using the openshift-install-powervs helper script
  • A Linux instance in IBM Cloud Classic with interfaces on both the private and the public networks
  • The oc command-line client configured to access the cluster

Estimated time

The approximate time to deploy the sample applications is 5 to 10 minutes.

Steps

First, we’ll configure the Squid proxy and routes on the IBM Cloud Classic Linux instance. This instance acts as the gateway for OpenShift applications deployed on IBM Power Systems Virtual Server to access IBM Cloud services.

For this tutorial, we used a CentOS 7 Linux instance in IBM Cloud Classic. Security groups are configured so that all network traffic is allowed on the private interface of the instance, while the public interface allows only Secure Shell (SSH), Hypertext Transfer Protocol Secure (HTTPS), and outbound connectivity.

The following screen capture shows the security group settings for the IBM Cloud Linux instance used for this tutorial.

[Screen capture: Security group settings for the IBM Cloud Linux instance]

  1. Install and configure Squid proxy.

    1. Run the following command to install Squid proxy:

      sudo yum install squid -y
      
    2. Run the following command to start the firewall service:

      sudo service firewalld start
      
    3. Run the following command to add the Squid proxy service to firewall:

      sudo firewall-cmd --zone=public --add-service=squid --permanent
      
    4. Run the following command to enable the Squid proxy service:

      sudo systemctl enable squid
      
    5. Run the following command to restart the firewall service:

      sudo systemctl restart firewalld
      

    You will need the Squid proxy URL in later steps. The URL is typically of the format http://<Private-IP>:3128. For example, if the instance private IP address is 10.85.142.218, then the Squid proxy URL is: http://10.85.142.218:3128

    If you are configuring high availability (HA) for the Squid proxy using keepalived, ensure that you enable Virtual Router Redundancy Protocol (VRRP) in the security group and on the Linux instances:

    firewall-cmd --add-rich-rule='rule protocol value="vrrp" accept' --permanent
    firewall-cmd --reload
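
    No extra Squid configuration was required in our setup: the default /etc/squid/squid.conf shipped with CentOS 7 already allows requests from the RFC 1918 private address ranges, which cover the Power Systems Virtual Server subnet used in this tutorial. The relevant default lines are shown below; if your subnet is not covered, add a matching acl line and restart Squid with sudo systemctl restart squid:

      # /etc/squid/squid.conf (relevant default lines)
      acl localnet src 10.0.0.0/8
      acl localnet src 172.16.0.0/12
      acl localnet src 192.168.0.0/16
      http_access allow localnet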
    
  2. Configure routes on the IBM Cloud instance.

    1. Run the following command to add network routes to the Power Systems Virtual Server private network used by OpenShift:

      sudo route add -net <powervs_private_nw_subnet> gw <ibm_cloud_instance_gw>
      

      Example:

      sudo route add -net 192.168.25.0/24 gw 10.85.142.193
      
    2. Run the following command to list the route entries in the kernel routing table and verify:

      $ ip r
      
      default via 158.177.75.81 dev eth1
      10.0.0.0/8 via 10.85.142.193 dev eth0
      10.85.142.192/26 dev eth0 proto kernel scope link src 10.85.142.218
      158.177.75.80/28 dev eth1 proto kernel scope link src 158.177.75.86
      161.26.0.0/16 via 10.85.142.193 dev eth0
      166.8.0.0/14 via 10.85.142.193 dev eth0
      169.254.0.0/16 dev eth0 scope link metric 1002
      169.254.0.0/16 dev eth1 scope link metric 1003
      192.168.25.0/24 via 10.85.142.193 dev eth0
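
    Note that a route added with the route command does not persist across reboots. On CentOS 7, you can make it persistent with a static route file for the private interface. The following is a minimal sketch using this tutorial's example values and assuming that eth0 is the private interface:

      # /etc/sysconfig/network-scripts/route-eth0
      192.168.25.0/24 via 10.85.142.193 dev eth0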
      
  3. Configure OpenShift to use Squid proxy running in the IBM Cloud Linux instance.

    1. Navigate to the directory on your system where you ran the openshift-install-powervs helper script.
    2. Add the following Terraform variables to the var.tfvars file in the current working directory. Refer to the description of these variables for more details.

      Set the IBM Cloud Direct Link endpoint network CIDR variable in the var.tfvars file. This is the private network subnet of the IBM Cloud instance.

      ibm_cloud_dl_endpoint_net_cidr = ""
      

      Example:

      ibm_cloud_dl_endpoint_net_cidr = "10.0.0.0/8"
      

      Set the IBM Cloud HTTP/Squid proxy URL variable in the var.tfvars file. This is the Squid proxy URL, which contains the private IP address of the IBM Cloud Linux instance.

      ibm_cloud_http_proxy = ""
      

      Example:

      ibm_cloud_http_proxy = "http://10.85.142.218:3128"
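
      Putting both settings together, the relevant excerpt of var.tfvars for this tutorial's example environment looks like this:

      # var.tfvars (excerpt)
      ibm_cloud_dl_endpoint_net_cidr = "10.0.0.0/8"
      ibm_cloud_http_proxy           = "http://10.85.142.218:3128"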
      
    3. Apply the changes.

       openshift-install-powervs create
      

      Verify the configuration details after successfully applying the changes.

      Run the following command on the bastion node or on any of the OpenShift nodes to list the route entries in the kernel routing table:

        $ ip route
        default via 192.168.135.105 dev env2 proto static metric 100
        10.0.0.0/8 via 192.168.25.1 dev env3 proto static metric 101
        192.168.25.0/24 dev env3 proto kernel scope link src 192.168.25.172 metric 101
        192.168.135.104/29 dev env2 proto kernel scope link src 192.168.135.106 metric 100
      

      Notice that the IBM Cloud Direct Link endpoint CIDR (10.0.0.0/8) is listed in the output.

      Run the following commands to fetch the proxy URLs used by your OpenShift cluster:

       $ oc get proxy/cluster -o template --template {{.spec.httpProxy}}
      
       http://10.85.142.218:3128
      
       $ oc get proxy/cluster -o template --template {{.spec.httpsProxy}}
      
       http://10.85.142.218:3128
      

      Notice that both URLs point to the Squid proxy that is running on the IBM Cloud Linux instance.
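
      As an additional sanity check, you can verify from the bastion node that the Cloud Object Storage private endpoint is reachable through the proxy. The following sketch uses this tutorial's example proxy URL and endpoint; any HTTP status code in the response (even 403 without credentials) confirms connectivity:

       curl -x http://10.85.142.218:3128 -sS -o /dev/null -w "%{http_code}\n" https://s3.private.eu-de.cloud-object-storage.appdomain.cloud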

  4. Verify that you have access to the IBM Cloud service.

    We’ll use a simple curl application to verify access to IBM Cloud Object Storage from the OpenShift cluster.

    1. Log in to the IBM Cloud dashboard using the URL: https://cloud.ibm.com

    2. In the Resource summary section, click Storage to open the Resource list page. This page lists the storage instances deployed under your account. Click the storage instance that you want to use. For this tutorial, we created the cos-validation-team Cloud Object Storage instance.

      [Screen capture: Resource list showing the Cloud Object Storage instances]

    3. Click Buckets and make a note of the bucket name.

      We’ll be using bucket-validation-team.

      [Screen capture: Buckets in the cos-validation-team instance]

    4. Click Endpoints and make a note of the private endpoint.

      For our tutorial, we used the private endpoint (s3.private.eu-de.cloud-object-storage.appdomain.cloud) from the eu-de region, where the rest of our setup resides.

      [Screen capture: Endpoints for the Cloud Object Storage instance]

    5. Use SSH to connect to the bastion node, or go to the system where you have configured the oc client to access the OpenShift cluster, to perform the next set of steps.

      • Create a YAML file named cos-creds.yaml describing the Secret object.

        Update <IBM_CLOUD_API_KEY> with your base64-encoded API key (for example, the output of echo -n "<your-api-key>" | base64); the data field of a Secret must contain base64-encoded values.

        <cos-creds.yaml>

        apiVersion: v1
        kind: Secret
        metadata:
           name: cos-creds
        type: Opaque
        data:
           apikey: <IBM_CLOUD_API_KEY>
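
        Alternatively, you can let the oc client perform the base64 encoding for you by creating the same secret from a literal value:

        oc create secret generic cos-creds --from-literal=apikey=<IBM_CLOUD_API_KEY>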
        
      • Create a script file named upload-text-file.sh with the following content.

        <upload-text-file.sh>

        #!/bin/sh
        # Create a dummy file to upload
        FILENAME="/tmp/dummy-file"
        echo "This is a dummy file to test connectivity" > $FILENAME

        # Exchange the API key for an IAM access token (jq -r strips the surrounding quotes)
        ACCESSTOKEN=$(curl -X "POST" "https://iam.cloud.ibm.com/oidc/token" -H "Accept: application/json" -H "Content-Type: application/x-www-form-urlencoded" --data-urlencode "apikey=$APIKEY" --data-urlencode "response_type=cloud_iam" --data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" | jq -r .access_token)

        # Upload the file as the object body using the S3 API PUT operation
        OBJKEY="dummy-object"
        curl -X "PUT" "https://$ENDPOINT/$BUCKET/$OBJKEY" -H "Authorization: bearer $ACCESSTOKEN" -T "$FILENAME"
        echo "uploaded $FILENAME to cloud object storage"
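
        If you want to test the script outside the cluster first (for example, from the bastion node), you can export the same environment variables that the job sets and run the script directly. A sketch with this tutorial's example values:

        # <IBM_CLOUD_API_KEY> is your raw (not base64-encoded) API key
        export APIKEY=<IBM_CLOUD_API_KEY>
        export ENDPOINT=s3.private.eu-de.cloud-object-storage.appdomain.cloud
        export BUCKET=bucket-validation-team
        export HTTPS_PROXY=http://10.85.142.218:3128
        sh upload-text-file.sh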
        
      • Create a YAML file named upload-job.yaml describing the job object.

        Make a note of the ENDPOINT, HTTP_PROXY, and HTTPS_PROXY environment variable values, and update them to match your Cloud Object Storage private endpoint and Squid proxy URL.

        <upload-job.yaml>

        apiVersion: batch/v1
        kind: Job
        metadata:
          name: upload
        spec:
          template:
            spec:
              containers:
              - name: upload
                image: quay.io/bpradipt/curl-jq
                args: ["-c", "/upload-script/upload-text-file.sh"]
                env:
                - name: APIKEY
                  valueFrom:
                    secretKeyRef:
                      name: cos-creds
                      key: apikey
                - name: ENDPOINT
                  value: s3.private.eu-de.cloud-object-storage.appdomain.cloud
                - name: BUCKET
                  value: bucket-validation-team
                - name: HTTPS_PROXY
                  value: http://10.85.142.218:3128
                - name: HTTP_PROXY
                  value: http://10.85.142.218:3128
                volumeMounts:
                - mountPath: /upload-script
                  name: upload-script
              restartPolicy: Never
              volumes:
              - name: upload-script
                configMap:
                  name: upload-text-file
                  defaultMode: 0777
          backoffLimit: 2
        
      • Create the objects in the OpenShift cluster.

        oc create -f cos-creds.yaml
        oc create configmap upload-text-file --from-file=./upload-text-file.sh
        oc create -f upload-job.yaml
        
        $ oc get pods
        NAME                        READY   STATUS    RESTARTS   AGE
        upload-cjqlf                1/1     Running   0          5s 
        $  oc logs upload-cjqlf
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                        Dload  Upload   Total   Spent    Left  Speed
        100  2679  100  2544  100   135   7416    393 --:--:-- --:--:-- --:--:--  7810
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                        Dload  Upload   Total   Spent    Left  Speed
        100   245    0     0  100   245      0    356 --:--:-- --:--:-- --:--:--   355
        uploaded /tmp/dummy-file to cloud object storage
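
        Because the pod is created by a Job, you can also wait for completion and read the logs through the job object:

        # Wait up to two minutes for the job to finish, then fetch its logs
        oc wait --for=condition=complete job/upload --timeout=120s
        oc logs job/upload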
        
    6. Verify from the IBM Cloud dashboard that the file has been uploaded to the bucket.

      [Screen capture: The uploaded dummy-object in the bucket]

Summary

This tutorial described how to access IBM Cloud services from an OpenShift cluster on IBM Power Systems Virtual Server with IBM Cloud Direct Link enabled. You can use the same approach for accessing other IBM Cloud services.

In a follow-on tutorial, we’ll describe a similar approach for accessing IBM Watson Machine Learning Accelerator in IBM Cloud Pak® for Data.