Use SSH to connect to your nodes on a managed Kubernetes cluster
Get shell access to a cluster node with a pod on Red Hat OpenShift on IBM Cloud
Both Kubernetes and OpenShift are excellent choices for developers who want all the advantages of container orchestration. However, if you don’t want to handle the nastier bits, like installing or upgrading your cluster, IBM offers managed solutions.
As fans of managed services, we use them almost exclusively where possible. However, we noticed that the underlying hardware (the virtual machine, or VM) that is visible in the IBM Cloud dashboard is not accessible when you are using Red Hat OpenShift on IBM Cloud or the IBM Cloud Kubernetes Service. We can’t add an SSH key pair to it to start manipulating the master and worker nodes of our cluster. This is probably for a good reason!
The following screen capture shows that the ID of one of our OpenShift cluster workers matches the name of a VM.
You can’t directly use SSH to access the VMs running your cluster, but sometimes you need to. For example, you might want to use the sysctl tool to change Linux kernel settings; we recently had to change the kernel’s semaphore settings. With some Kubernetes magic, however, you can get access to the VMs running your cluster.
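For context, this is the kind of change we needed to make once we had a root shell on the node. The semaphore values below are only illustrative placeholders; use settings that match your own workload:

$ sysctl kernel.sem
$ sysctl -w kernel.sem="250 32000 100 1024"

The first command prints the current SEMMSL, SEMMNS, SEMOPM, and SEMMNI limits; the second applies new values to the running kernel.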
Assuming a cluster is available, this tutorial should only take about 10 minutes.
The first thing you need to do is create a file called inspect-node.yaml. Then update the last line of the file, replacing WORKER NODE NAME with the name of the node you want to access. See the following example:
apiVersion: v1
kind: Pod
metadata:
  name: inspectnode164121
  labels:
    name: inspectnode
  namespace: kube-system
spec:
  tolerations:
  - operator: "Exists"
  hostNetwork: true
  containers:
  - name: inspectnode164121
    image: nkkashyap/inspectndoe:v001
    volumeMounts:
    - mountPath: /host/root
      name: host-root
    - mountPath: /host/etc
      name: host-etc
    - mountPath: /host/log
      name: host-log
    - mountPath: /host/local
      name: host-local
    - mountPath: /run/systemd
      name: host-systemd
  volumes:
  - name: host-root
    hostPath:
      path: /root/
  - name: host-etc
    hostPath:
      path: /etc
  - name: host-log
    hostPath:
      path: /var/log
  - name: host-local
    hostPath:
      path: /usr/local
  - name: host-systemd
    hostPath:
      path: /run/systemd
  nodeSelector:
    kubernetes.io/hostname: WORKER NODE NAME
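If you are not sure of the exact node name, you can list the workers in your cluster and copy the value from the NAME column:

$ kubectl get nodes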
Now run through the following commands. In this tutorial, the examples run the commands from a VM that acts as the client workstation.
If you are using OpenShift, switch to the project where you want to run this pod:
$ oc project kube-system
Create the pod with kubectl create:
$ kubectl create -f ./inspect-node.yaml
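Before moving on, it can help to confirm that the pod is running and was scheduled onto the node you selected (the pod name and namespace here match the ones in the example YAML):

$ kubectl get pod inspectnode164121 -n kube-system -o wide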
Use kubectl exec to access the container:
$ kubectl exec -it inspectnode164121 -n kube-system -- /bin/bash
Notice in the following example that the prompt has changed from the developer’s terminal to the container. The container has only two files, entrypoint and systemutil, as you can see by listing the directory:
root@kube-blnsdved0472n8ongslg-aidacpdtank-default-0000056c:~# ls
entrypoint  systemutil
From inside the pod, enable SSH login for the root user by editing the host’s sshd_config with sed:
root@kube-blnsdved0472n8ongslg-aidacpdtank-default-0000056c:~# sed -i 's/PermitRootLogin no/PermitRootLogin yes/g' /host/etc/ssh/sshd_config
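To double-check that the edit took effect before restarting anything, you can grep the file on the host mount; you should see PermitRootLogin yes in the output:

root@kube-blnsdved0472n8ongslg-aidacpdtank-default-0000056c:~# grep PermitRootLogin /host/etc/ssh/sshd_config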
Restart the SSH daemon by using the provided systemutil utility:
root@kube-blnsdved0472n8ongslg-aidacpdtank-default-0000056c:~# ./systemutil -service sshd.service
Selected Option: restart
Info: Unit Restarted !!
Now, still from within the container, attempt to SSH to the worker node. Because the pod uses hostNetwork, localhost here is the worker node itself. If you are successful, your bash prompt changes again:
root@kube-blnsdved0472n8ongslg-aidacpdtank-default-0000056c:~# ssh root@localhost
ECDSA key fingerprint is SHA256:b7TAiZXYHtTRP0bSC6MeLKMyKiHp2cqYvnh9rNtnpag.
Are you sure you want to continue connecting (yes/no)? yes
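Once your maintenance is done, you may want to undo the changes so that root logins stay disabled. A minimal sketch, reusing the same tools from inside the container:

root@kube-blnsdved0472n8ongslg-aidacpdtank-default-0000056c:~# sed -i 's/PermitRootLogin yes/PermitRootLogin no/g' /host/etc/ssh/sshd_config
root@kube-blnsdved0472n8ongslg-aidacpdtank-default-0000056c:~# ./systemutil -service sshd.service

You can then delete the pod from your client machine with kubectl delete -f ./inspect-node.yaml.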
Now that you have walked through our examples of how to get access to nodes on a managed Kubernetes cluster, go ahead and try it yourself. We hope this tutorial gets you out of a tight spot. Happy hacking!