Overview

Skill Level: Advanced

This article documents the steps required to quickly install Kubernetes with end-to-end encryption. Its purpose is learning: to give an idea of the components that constitute Kubernetes, how they interact, and how they are installed.

Ingredients

The installation requires three VMs on the same network, used to configure a master, a worker, and an Etcd node. Docker must be installed and running on the worker node; Docker version 17.12.1-ce was used for this installation. The machines ran Red Hat Enterprise Linux Server release 7.5 with a hardware configuration of 8 CPUs and 16 GB RAM.

Binaries of Etcd (etcd-v3.3.12-linux-amd64.tar.gz), Kubernetes version 1.10.0 (kubernetes-server-linux-amd64.tar.gz, kubernetes-client-darwin-amd64.tar.gz) and the Flannel binary (https://github.com/coreos/flannel/releases/download/v0.7.0/flanneld-amd64) have to be downloaded from the internet.

A CA cert-key pair must be created that will be used to sign the rest of the cluster certificates.
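A minimal sketch of creating such a CA pair with openssl; the file names (ca.pem, ca-key.pem) match those used later in this article, while the subject CN is illustrative:

```shell
# Generate the CA private key and a self-signed CA certificate.
# All other cluster certificates will be signed with this pair.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 7200 \
  -subj "/CN=kubernetes-ca" -out ca.pem
```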

This article does not explain Kubernetes architecture or concepts and assumes basic knowledge about PKI.

Step-by-step

  1. Setting up Etcd

    Etcd is an open-source distributed key-value store that serves as the backbone of distributed systems by providing a canonical hub for cluster coordination and state management. Kubernetes components such as cluster services, add-ons and network plugins use Etcd to store their configuration.

    Extract etcd-v3.3.12-linux-amd64.tar.gz to obtain the Etcd binaries. Generate a key-cert pair for the Etcd server and configure it to run as a systemd service as shown below.
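One way to generate the Etcd server pair is sketched below. The throwaway CA is generated inline only so the sketch runs standalone; in the real setup, sign with the cluster CA created earlier. The IP and DNS name in the subjectAltName are illustrative:

```shell
# Throwaway CA so this sketch is self-contained; reuse the cluster CA in practice.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 7200 -subj "/CN=demo-ca" -out ca.pem

# Etcd server key and CSR; the SAN must list every IP/DNS name clients will use.
openssl genrsa -out etcd.key 2048
openssl req -new -key etcd.key -subj "/CN=etcd" -out etcd.csr
printf "subjectAltName=IP:127.0.0.1,DNS:etcd.example.local\n" > san.cnf
openssl x509 -req -in etcd.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out etcd.crt -days 7200 -extfile san.cnf
```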

    cat /etc/systemd/system/etcd.service

    [Unit]
    Description=etcd

    [Service]
    ExecStart=/usr/local/bin/etcd \
      --cert-file=/opt/etcddir/etcd.crt \
      --key-file=/opt/etcddir/etcd.key \
      --trusted-ca-file=/opt/etcddir/ca.pem \
      --client-cert-auth \
      --listen-client-urls=https://x.xx.xxx.xxx:2379,https://127.0.0.1:2379,https://x.xx.xxx.xxx:4001 \
      --advertise-client-urls=https://x.xx.xxx.xxx:2379,https://x.xx.xxx.xxx:4001

    [Install]
    WantedBy=multi-user.target
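After reloading systemd, the service can be started and checked. The etcdctl invocation below uses the v2 client that ships in the same tarball; the certificate paths and endpoint IP are placeholders for this setup's values:

```shell
# Load the new unit, start etcd, and confirm it answers over TLS.
systemctl daemon-reload
systemctl enable --now etcd
./etcdctl --ca-file ca.pem --cert-file etcd.crt --key-file etcd.key \
  --endpoints https://x.xx.xxx.xxx:2379 cluster-health
```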

     

  2. Setting up Flannel overlay network

    The cluster overlay network is implemented using Flannel. An overlay network hides the underlying network architecture from the pod network by encapsulating traffic (for example, in VXLAN). We start by creating the FlannelD configuration in Etcd, which FlannelD uses as its backend store.

    ./etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16", "SubnetLen": 24, "Backend": {"Type": "vxlan"} }'

    Download the Flannel binary from https://github.com/coreos/flannel/releases/download/v0.7.0/flanneld-amd64 to any directory on both the master and the worker node. Create the directory /var/lib/flanneld/networks.

    The following network configuration was used:

    network_cidr: 10.1.0.0/16
    service_cluster_ip_range: 10.0.0.1/24
    kube_dns: 10.0.0.10

    Run the Flannel service on both the master and worker nodes from the directory it was downloaded to, as shown below. Note that the Etcd certificates are used here so that Flannel can communicate securely with Etcd.

    ./flanneld-amd64 -etcd-certfile etcd.crt -etcd-keyfile etcd.key -etcd-endpoints https://x.xx.xxx.xxx:2379 \
      -etcd-cafile ca.pem -ip-masq=true -subnet-dir=/var/lib/flanneld/networks -subnet-file=/var/lib/flanneld/subnet.env
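FlannelD writes the subnet it leases for the host into subnet.env; those values are what tie Docker's bridge to the overlay. A standalone sketch with an illustrative subnet.env (the real values are leased by flanneld at startup):

```shell
# Illustrative subnet.env as flanneld would write it; actual values come from flanneld.
mkdir -p /tmp/flanneld-demo
cat > /tmp/flanneld-demo/subnet.env <<'EOF'
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.34.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

# Source the file and derive the Docker daemon flags that put containers on the overlay.
. /tmp/flanneld-demo/subnet.env
echo "dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}"
```

On the real nodes, these flags are passed to the Docker daemon so each host's containers get addresses from that host's Flannel subnet.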


  3. Configuring the Master Node

    A master node controls and manages a set of worker nodes (the workload runtime) and, together with them, forms a Kubernetes cluster. A master node runs the following components to help manage worker nodes:

    Kube-APIServer, which acts as the frontend to the cluster. All external communication to the cluster is via the API-Server.
    Kube-Controller-Manager, which runs a set of controllers for the running cluster. The controller-manager implements governance across the cluster.
    Etcd, the cluster state database. This component has been set up separately. 
    Kube-Scheduler, which schedules work to the worker nodes based on events occurring in Etcd. It also tracks each node's resources to determine the proper action for a triggered event; for example, the scheduler decides which worker node will host a newly created Pod.

    Extract kubernetes-server-linux-amd64.tar.gz to get the components of the master node. The executables were copied to /usr/local/bin for configuring the systemd services.

    • Setting up Kube-APIServer

    Generate the key-cert pair for the API server. Set the IP and all possible DNS names of the API server in the certificate, using the subjectAltName parameter in the configuration when generating the certificate request. In this setup the Kube-APIServer runs as a systemd service; the details of the service are shown below.

    cat /etc/systemd/system/kube-apiserver.service

    [Unit]
    Description=Kubernetes API Server

    [Service]
    ExecStart=/usr/local/bin/kube-apiserver \
    --etcd-servers=https://x.xx.xxx.xxx:2379 \
    --service-cluster-ip-range=10.0.0.1/24 \
    --allow-privileged=true \
    --authorization-mode=RBAC,Node \
    --client-ca-file=/opt/Kuber/srv/kubernetes/ca.pem \
    --tls-cert-file=/opt/Kuber/srv/kubernetes/apiserver.pem \
    --tls-private-key-file=/opt/Kuber/srv/kubernetes/apiserver-key.pem \
    --etcd-cafile=/opt/Kuber/Flannel/ca.pem \
    --etcd-keyfile=/opt/Kuber/Flannel/etcd.key \
    --etcd-certfile=/opt/Kuber/Flannel/etcd.crt \
    --service-account-key-file=/opt/Kuber/srv/kubernetes/service-account.pem \
    --kubelet-https=true \
    --kubelet-client-certificate=/opt/Kuber/srv/kubernetes/complex3.pem \
    --kubelet-client-key=/opt/Kuber/srv/kubernetes/complex3-key.pem \
    --disable-admission-plugins=ServiceAccount \
    --v=2

    [Install]
    WantedBy=multi-user.target

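Once the service is started, the secure endpoint can be checked with curl using a client certificate pair signed by the cluster CA (for example the admin pair generated in step 5); paths and the endpoint IP are placeholders:

```shell
# Verify the API server answers over TLS; client cert auth is enforced by --client-ca-file.
curl --cacert /opt/Kuber/srv/kubernetes/ca.pem \
  --cert /opt/Kuber/srv/kubernetes/admin.pem \
  --key /opt/Kuber/srv/kubernetes/admin-key.pem \
  https://x.xx.xxx.xxx:6443/healthz
```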
    • Setting up Kube-Controller-Manager

    Generate a key-cert pair for the controller manager. The Kubernetes controller manager uses a key pair to generate and sign service account tokens, so a key pair must also be generated for service accounts; it is referenced by both the controller manager and the API server. Generate the kubeconfig file, which the controller uses to communicate with the API server.

    The script below may be used to generate the kubeconfig file.

    for user in kube-controller-manager
    do
    TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
    /opt/Kuber/kubernetes/server/bin/kubectl config set-cluster kubernetes.default \
      --certificate-authority=/opt/Kuber/srv/kubernetes/ca.pem --embed-certs=true \
      --server=https://xxx.xx.xxx.xxx:6443 --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config set-credentials system:kube-controller-manager \
      --client-certificate=/opt/Kuber/srv/kubernetes/${user}.pem --client-key=/opt/Kuber/srv/kubernetes/${user}-key.pem \
      --embed-certs=true --token=$TOKEN --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config set-context kubernetes.default --cluster=kubernetes.default \
      --user=system:kube-controller-manager --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config use-context kubernetes.default \
      --kubeconfig=/var/lib/${user}/kubeconfig
    done

    Run the controller as a systemd service.

    cat /etc/systemd/system/kube-controller-manager.service

    [Unit]
    Description=Kubernetes Controller Manager

    [Service]
    ExecStart=/usr/local/bin/kube-controller-manager \
      --kubeconfig=/var/lib/kube-controller-manager/kubeconfig \
      --service-account-private-key-file=/opt/Kuber/srv/kubernetes/service-account-key.pem \
      --cluster-signing-cert-file=/opt/Kuber/srv/kubernetes/ca.pem \
      --cluster-signing-key-file=/opt/Kuber/srv/kubernetes/ca-key.pem \
      --use-service-account-credentials=true \
      --root-ca-file=/opt/Kuber/srv/kubernetes/ca.pem \
      --service-cluster-ip-range=10.0.0.1/24 \
      --cluster-cidr=10.1.0.0/16 \
      --v=2

    [Install]
    WantedBy=multi-user.target

    • Setting up Kube-Scheduler

    Run the Kube-Scheduler as a systemd service. Generate the key-cert pair and kubeconfig file for the scheduler.

    cat /etc/systemd/system/kube-scheduler.service

    [Unit]
    Description=Kubernetes Scheduler

    [Service]
    ExecStart=/usr/local/bin/kube-scheduler \
      --config=/opt/Kuber/kubernetes/kube-scheduler.yaml \
      --v=2

    [Install]
    WantedBy=multi-user.target

    -------- Contents of the kube-scheduler.yaml file --------

    apiVersion: componentconfig/v1alpha1
    kind: KubeSchedulerConfiguration
    clientConnection:
      kubeconfig: "/var/lib/kube-scheduler/kubeconfig"
    leaderElection:
      leaderElect: true

    The kubeconfig file can be generated using this script:

    for user in kube-scheduler
    do
    TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
    /opt/Kuber/kubernetes/server/bin/kubectl config set-cluster kubernetes.default \
      --certificate-authority=/opt/Kuber/srv/kubernetes/ca.pem --embed-certs=true \
      --server=https://xxx.xx.xxx.xxx:6443 --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config set-credentials system:kube-scheduler \
      --client-certificate=/opt/Kuber/srv/kubernetes/${user}.pem --client-key=/opt/Kuber/srv/kubernetes/${user}-key.pem \
      --embed-certs=true --token=$TOKEN --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config set-context kubernetes.default --cluster=kubernetes.default \
      --user=system:kube-scheduler --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config use-context kubernetes.default \
      --kubeconfig=/var/lib/${user}/kubeconfig
    done

     

  4. Configuring the Worker Node

    The worker node, also called a minion, runs the pods that are the components of the application. The services on a node include the container runtime, the Kubelet, and Kube-Proxy.

    Extract kubernetes-server-linux-amd64.tar.gz to get the components of the worker node. The executables were copied to /usr/local/bin for configuring the systemd services.

    • The Docker container runtime has been installed as a prerequisite.
    • Setting up Kubelet

    Generate the kubelet client certificates. Kubernetes uses a special-purpose authorization mode called Node Authorizer, which specifically authorizes API requests made by Kubelets. In order to be authorized by the Node Authorizer, Kubelets must use a credential that identifies them as members of the system:nodes group, with a username of system:node:<nodeName>. So the certificate subject should be set up as /CN=system:node:<nodeName>/O=system:nodes.
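A standalone sketch of generating such a client certificate with openssl; the node name is illustrative, and the throwaway CA is generated inline only so the sketch runs on its own (sign with the cluster CA in practice):

```shell
# Throwaway CA so the sketch is self-contained; use the cluster CA in the real setup.
openssl genrsa -out ca-key.pem 2048
openssl req -x509 -new -nodes -key ca-key.pem -days 7200 -subj "/CN=demo-ca" -out ca.pem

# Kubelet client cert: CN carries the node identity, O places it in system:nodes.
NODE=worker1.example.local
openssl genrsa -out ${NODE}-key.pem 2048
openssl req -new -key ${NODE}-key.pem \
  -subj "/CN=system:node:${NODE}/O=system:nodes" -out ${NODE}.csr
openssl x509 -req -in ${NODE}.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out ${NODE}.pem -days 7200
```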

    The kubeconfig file can be generated using the script below.

    for user in kubelet
    do
    TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
    /opt/Kuber/kubernetes/server/bin/kubectl config set-cluster kubernetes.default \
      --certificate-authority=/opt/Kuber/srv/kubernetes/ca.pem --embed-certs=true --server=https://172.16.209.215:6443 \
      --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config set-credentials system:node:complex3.fyre.ibm.com \
      --client-certificate=/opt/Kuber/kubernetes/worker-certs/complex3.pem \
      --client-key=/opt/Kuber/kubernetes/worker-certs/complex3-key.pem --embed-certs=true \
      --token=$TOKEN --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config set-context kubernetes.default \
      --cluster=kubernetes.default --user=system:node:complex3.fyre.ibm.com --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config use-context kubernetes.default \
      --kubeconfig=/var/lib/${user}/kubeconfig
    done

    Run the Kubelet as a systemd service.

    cat /etc/systemd/system/kubelet.service

    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service

    [Service]
    ExecStart=/usr/local/bin/kubelet \
      --config=/opt/Kuber/kubernetes/kubelet-config.yaml \
      --container-runtime=docker \
      --container-runtime-endpoint=unix:///var/run/docker/containerd/docker-containerd.sock \
      --image-pull-progress-deadline=2m \
      --kubeconfig=/var/lib/kubelet/kubeconfig \
      --register-node=true \
      --fail-swap-on=false \
      --cluster-dns=10.0.0.10 \
      --v=2

    [Install]
    WantedBy=multi-user.target

    -------- Contents of the kubelet-config.yaml --------

    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
      x509:
        clientCAFile: "/opt/Kuber/srv/kubernetes/ca.pem"
    authorization:
      mode: Webhook
    clusterDomain: "kubernetes.default"
    runtimeRequestTimeout: "15m"
    tlsCertFile: "/opt/Kuber/kubernetes/worker-certs/complex3.pem"
    tlsPrivateKeyFile: "/opt/Kuber/kubernetes/worker-certs/complex3-key.pem"

    • Setting up Kube-Proxy

    Generate the key-cert pair for Kube-Proxy and run it as a systemd service.

    cat /etc/systemd/system/kube-proxy.service

    [Unit]
    Description=Kubernetes Kube Proxy

    [Service]
    ExecStart=/usr/local/bin/kube-proxy \
      --config=/opt/Kuber/kubernetes/kube-proxy-config.yaml

    [Install]
    WantedBy=multi-user.target

    -------- Contents of the kube-proxy-config.yaml --------

    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    clientConnection:
      kubeconfig: "/var/lib/kube-proxy/kubeconfig"
    mode: "iptables"

    Generate the kubeconfig file using the script below.

    for user in kube-proxy
    do
    TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
    /opt/Kuber/kubernetes/server/bin/kubectl config set-cluster kubernetes.default \
      --certificate-authority=/opt/Kuber/srv/kubernetes/ca.pem --embed-certs=true \
      --server=https://xxx.xx.xxx.xxx:6443 --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config set-credentials system:kube-proxy \
      --client-certificate=/opt/Kuber/srv/kubernetes/${user}.pem --client-key=/opt/Kuber/srv/kubernetes/${user}-key.pem \
      --embed-certs=true --token=$TOKEN --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config set-context kubernetes.default \
      --cluster=kubernetes.default --user=system:kube-proxy --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config use-context kubernetes.default \
      --kubeconfig=/var/lib/${user}/kubeconfig
    done

     

  5. Configuring the Kubectl client and testing the cluster

    To set up Kubectl, extract kubernetes-client-darwin-amd64.tar.gz and obtain the kubectl binary.

    Generate the key-cert pair for the administrator user. For admin privileges, make sure the user is part of the system:masters group, as shown below.

    openssl genrsa -out admin-key.pem 2048
    openssl req -new -key admin-key.pem -out admin.csr -subj "/CN=admin/O=system:masters"
    openssl x509 -req -in admin.csr -CA /opt/Kuber/srv/kubernetes/ca.pem -CAkey /opt/Kuber/srv/kubernetes/ca-key.pem \
      -CAcreateserial -out admin.pem -days 7200

     

    Generate the kubeconfig file using the script below.

    for user in admin
    do
    TOKEN=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null)
    /opt/Kuber/kubernetes/server/bin/kubectl config set-cluster kubernetes.default \
      --certificate-authority=/opt/Kuber/srv/kubernetes/ca.pem --embed-certs=true \
      --server=https://xxx.xx.xxx.xxx:6443 --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config set-credentials ${user} \
      --client-certificate=/opt/Kuber/srv/kubernetes/${user}.pem --client-key=/opt/Kuber/srv/kubernetes/${user}-key.pem \
      --embed-certs=true --token=$TOKEN --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config set-context kubernetes.default \
      --cluster=kubernetes.default --user=${user} --kubeconfig=/var/lib/${user}/kubeconfig
    /opt/Kuber/kubernetes/server/bin/kubectl config use-context kubernetes.default \
      --kubeconfig=/var/lib/${user}/kubeconfig
    done

    Test the cluster. 

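A quick smoke test using the admin kubeconfig generated above; the output should list the registered worker node, and the test pod should be scheduled onto it:

```shell
# Point kubectl at the admin kubeconfig and confirm the worker registered.
export KUBECONFIG=/var/lib/admin/kubeconfig
kubectl get componentstatuses
kubectl get nodes

# Launch a test pod and check where it was scheduled.
kubectl run nginx --image=nginx --restart=Never
kubectl get pods -o wide
```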

  6. References

    https://github.com/kelseyhightower/kubernetes-the-hard-way

    https://icicimov.github.io/blog/kubernetes/Kubernetes-cluster-step-by-step/

     
