Use ssh to connect to your nodes on a managed Kubernetes cluster

Both Kubernetes and OpenShift are excellent choices for developers who want all the advantages of container orchestration. However, if you don’t want to handle the nastier bits, like installing or upgrading your cluster, IBM offers managed solutions.

As fans of managed services, we use them almost exclusively where possible. However, we noticed that the underlying hardware (the virtual machine, or VM) that is visible in the IBM Cloud dashboard is not accessible when you are using Red Hat OpenShift on IBM Cloud or the IBM Cloud Kubernetes Service. We can’t add an SSH key pair to it to start manipulating the master and worker nodes of our cluster. This is probably for a good reason!

The following screen capture shows that the ID of one of our OpenShift cluster workers matches the name of a VM.

Screen capture of the ID of OpenShift cluster workers

You can’t directly use SSH to access the VMs running your cluster. That can be a problem: for example, you might want to use the sysctl tool to change Linux kernel settings. We recently had to change the kernel’s semaphore settings. However, you can do some Kubernetes magic to get access to the VMs running your cluster.
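As a quick illustration (assuming a Linux shell; the exact values vary by distribution), the kernel’s current System V semaphore limits can be read from /proc by any user, while changing them requires root access on the node:

```shell
# Print the current System V semaphore limits, in order:
# SEMMSL (max semaphores per set), SEMMNS (system-wide max semaphores),
# SEMOPM (max operations per semop call), SEMMNI (max semaphore sets)
cat /proc/sys/kernel/sem
```

The rest of this tutorial shows how to reach a root shell on a managed worker node so that settings like these can actually be changed.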

Prerequisites

To follow along with this tutorial, you need an IBM Cloud Kubernetes Service cluster or a Red Hat OpenShift on IBM Cloud cluster.

Estimated time

Assuming a cluster is available, this tutorial should only take about 10 minutes.

Steps

The first thing you need to do is create a file called inspect-node.yaml. Then update the last line of the file, replacing WORKER NODE NAME with the name of the node you want to access.

See the following example inspect-node.yaml file:

apiVersion: v1
kind: Pod
metadata:
  name: inspectnode164121
  labels:
    name: inspectnode
  namespace: kube-system
spec:
  tolerations:
    - operator: "Exists"
  hostNetwork: true
  containers:
    - name: inspectnode164121
      image: nkkashyap/inspectndoe:v001
      volumeMounts:
      - mountPath: /host/root
        name: host-root
      - mountPath: /host/etc
        name: host-etc
      - mountPath: /host/log
        name: host-log
      - mountPath: /host/local
        name: host-local
      - mountPath: /run/systemd
        name: host-systemd

  volumes:
  - name: host-root
    hostPath:
      path: /root/
  - name: host-etc
    hostPath:
      path: /etc
  - name: host-log
    hostPath:
      path: /var/log
  - name: host-local
    hostPath:
      path: /usr/local
  - name: host-systemd
    hostPath:
      path: /run/systemd

  nodeSelector:
    kubernetes.io/hostname: WORKER NODE NAME
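To find a value for WORKER NODE NAME, you can list your cluster’s nodes and substitute one of the names into the manifest (the node name 10.177.1.1 below is purely illustrative; use a NAME from your own output):

```shell
# List the nodes in the cluster; pick a NAME value from this output
kubectl get nodes

# Substitute the chosen name into the manifest
# (10.177.1.1 is an illustrative placeholder, not a real node)
sed -i 's/WORKER NODE NAME/10.177.1.1/' inspect-node.yaml
```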

Now run through the following commands. The examples in this tutorial run the commands on a VM.

  1. If using OpenShift, switch to the project where you want to run this pod:

    $ oc project kube-system
    
  2. Create the pod with kubectl create:

    $ kubectl create -f ./inspect-node.yaml
    
  3. Use kubectl exec to access the container:

    $ kubectl exec -it inspectnode164121 -n kube-system -- /bin/bash
    
  4. Notice in the following example that the prompt has changed from the developer’s terminal to the container. The container has only two files, entrypoint and systemutil. The examples in this tutorial use systemutil:

    root@kube-blnsdved0472n8ongslg-aidacpdtank-default-0000056c:~# ls
    entrypoint  systemutil
    
  5. From inside the pod, enable SSH login for the root user by using sed:

    root@kube-blnsdved0472n8ongslg-aidacpdtank-default-0000056c:~# sed -i 's/PermitRootLogin no/PermitRootLogin yes/g' /host/etc/ssh/sshd_config
    
  6. Restart the SSH daemon by using the provided systemutil script:

    root@kube-blnsdved0472n8ongslg-aidacpdtank-default-0000056c:~# ./systemutil  -service sshd.service
    Selected Option:   restart
    Info: Unit Restarted !!
    
  7. Now, still from within the container, attempt to SSH to the worker node. If you are successful, your bash prompt changes again:

    root@kube-blnsdved0472n8ongslg-aidacpdtank-default-0000056c:~# ssh root@localhost
    ECDSA key fingerprint is SHA256:b7TAiZXYHtTRP0bSC6MeLKMyKiHp2cqYvnh9rNtnpag.
    Are you sure you want to continue connecting (yes/no)? yes
    
  8. Now you can tune or modify any node in your cluster. In this tutorial’s example use case, we ran sysctl commands to change system settings and tinkered with SELinux.
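As an illustrative sketch only (the values and commands below are assumptions, not the exact settings from our use case), tuning the node from the pod’s root shell might look like this:

```shell
# Illustrative values only -- pick limits appropriate for your workload.
# Raise the System V semaphore limits: SEMMSL SEMMNS SEMOPM SEMMNI
sysctl -w kernel.sem="250 32000 100 128"

# Check the current SELinux mode, then switch to permissive until reboot
getenforce
setenforce 0
```

Note that changes made with sysctl -w do not survive a reboot; to make them persistent, add the setting to /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/) on the node.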

Summary

Now that you’ve walked through our examples of how to access the nodes of a managed Kubernetes cluster, go ahead and try it yourself. We hope this tutorial gets you out of a tight spot. Happy hacking!

We send a big thank you to Phil Estes, Richard Theis, and Neeraj Kumar Kashyap for educating us on this topic.

Steve Martinelli
Spencer Krum