Overview

Skill Level: Any Skill Level

An understanding of Docker and Kubernetes fundamentals and networking is assumed.

This article discusses the details involved in ICP installation, architecture, and basic administration. It takes a basic use case with a shared boot, proxy, master, and management node and separate worker nodes.

Step-by-step

  1. Introduction

    IBM® Cloud Private is an application platform for developing and managing on-premises, containerized applications. It is based on standard Kubernetes architecture and topology, with a few additions such as an optional vulnerability assessment node, a management node, and a catalog of IBM Middleware and other products.

    It is an integrated environment for managing containers that includes the container orchestrator Kubernetes, a private image registry, a management console, and monitoring frameworks.

    IBM Cloud Private delivers a customer-managed container solution for enterprises. It is also available in a community edition, IBM® Cloud Private-CE, which provides a limited offering at no charge and is ideal for test environments.

    IBM Cloud Private is installed across multiple Docker hosts, with each host assigned different responsibilities such as Master, Worker, and Management. The Master host (node) is responsible for managing the other nodes in terms of resource allocation, state maintenance, scheduling, and monitoring. One can also dedicate a separate Docker host to management responsibilities such as monitoring, metering, and logging; by default these are handled by the Master node.

    In this blog I have tried to capture the information I gathered while learning ICP. The installation is done on machines with local disks that meet the storage prerequisites of ICP. No SAN mounts have been used.

     

  2. High Level Architecture

    The ICP topology discussed in this blog is one of the basic topologies and is primarily meant for development and testing environments. Fig 2 shows the cluster topology, wherein each node is a Docker host, that is, a physical machine with Docker installed on it. Every host constituting an ICP infrastructure, whether it is associated with management or with running application workloads, is a Docker host.

     

    Docker_Host

    Fig1: Docker Host

     

    All the worker nodes that participate in the cluster must satisfy the Kubernetes networking prerequisites:

    1. All containers can communicate with each other without NAT.
    2. All nodes can communicate with all containers (and vice versa) without NAT.
    3. The IP address a container sees for itself is the same address other containers see for it (in other words, Kubernetes bars any IP masquerading).
    4. Pods can communicate with each other regardless of which node they are scheduled on.

    POD_ICP

    The pause container associated with a POD provides a sharable IP address to all other container(s) running inside that POD.

    The above rules help ensure that:

    1. The scheduler can transparently deploy POD replicas on any node during initial deployment.

    2. Horizontal POD autoscaling works across nodes.

    3. PODs can be rescheduled to other available node(s) in case of a node failure.

     

    The Kubernetes networking model is based on a flat address space. All pods in a cluster (spanning the worker nodes) can directly see each other. Each pod has its own IP address, and there is no need to configure any NAT. In addition, containers in the same pod share their pod's IP address and can communicate with each other through localhost. A running pod is always scheduled on exactly one node, physical or virtual. All containers running on the same node can communicate with each other through a shared file system, IPC, or localhost. kube-proxy helps determine the proper pod IP address and port serving each request, using virtual IPs and iptables.

    Any update to a Service triggers an iptables update from kube-proxy. When a new Service is created, a virtual IP address is chosen and an iptables rule is set to direct its traffic to kube-proxy via a random port. kube-proxy runs on all the nodes participating in the ICP cluster, hence one has cluster-wide resolution for the service virtual IP address; kube-dns records also point to this VIP. The iptables rule only sends traffic to the service entry in kube-proxy. Once kube-proxy receives the traffic for a particular service, it forwards it to a pod in the service's pool of candidates.

    In the case of cross-node traffic, kube-proxy (via iptables) simply passes the traffic on to the correct pod IP for the service, since pod IP addresses are routable. The traffic travels through the network overlay connecting the nodes.
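    These kube-proxy-managed rules can be inspected directly on any cluster node; a minimal sketch (the chain names are the standard kube-proxy ones, output varies per cluster):

    sudo iptables-save | grep KUBE-SVC   # one chain per Service virtual IP
    sudo iptables-save | grep KUBE-SEP   # one chain per Service endpoint (pod)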

    ICP uses Calico for communication between nodes. Calico uses IP-in-IP encapsulation, or tunneling, to flow packets between workloads. At a basic level, this means that the IP packets created by the workload are encapsulated by the kernel inside another packet that has a different destination IP. On VMware infrastructure, ICP supports both Calico and NSX-T.

    ICP_App_Arch-1

    Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet. Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent based network security policy for Kubernetes pods via its distributed firewall.

    Calico relies on an agent running on each node for the purposes of initializing the Calico network on the node, updating routing information, and applying policy. For cross-node traffic, the packet is sent to the IPIP tunnel interface and encapsulated with an IPIP tunnel header.

    The Calico agent is packaged up as a container that starts multiple binaries that work together:

    Felix: Programs routes and network interfaces on the node for connectivity to and from workloads
    BIRD: BGP client that is used to sync routing information between nodes. BIRD picks up routes programmed by Felix, and distributes them via BGP.
    confd: Monitors etcd datastore for configuration changes

    IPIPTunnel

    Fig2: ICP Cluster
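    To verify that Calico routing is in place on a node, commands along these lines can be used (a sketch; calicoctl is installed separately and is not part of the base OS):

    calicoctl node status        # BGP peering status with the other nodes
    ip route | grep tunl0        # pod-network routes programmed over the IPIP tunnel interface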

    IBM Cloud Private uses Calico to manage network traffic. Calico is a scalable network fabric that can provide IP-in-IP tunneling for inter-workload traffic. This is useful when the underlying network fabric performs source/destination address checks and drops traffic whose addresses it does not recognize.

    Calico uses BGP to distribute routes between nodes and an IP-in-IP overlay for cross-host traffic. The overlay creates a tunnel through which containers residing on different hosts can communicate with one another directly. To check the MTU values for each network interface, run this command from the master node of your cluster:

    SC_MTU
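    The screenshot above corresponds to listing the interfaces and their MTU values, for example (a sketch; any equivalent command works):

    ip link show | grep -i mtu   # shows the MTU configured on each interface, including tunl0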

    In Kubernetes, every pod has its own routable IP address. Kubernetes networking with the help of CNI network plug-in like Calico takes care of routing all requests internally between hosts to the appropriate pod. External access is provided through a service, load balancer, or ingress controller, which Kubernetes routes to the appropriate pod.

     Calico_Tun

  3. Installation Architecture and Configuration Settings

    To start the installation, one should first ensure passwordless login from the boot node into each of the participating machines. In my scenario I have one multihomed machine with public and private IP addresses, to which I have assigned the proxy, master, and management responsibilities (master + mgmt). The rest of the machines are single-homed with only private IP addresses and take the role of worker nodes in the cluster. To ensure passwordless login, one needs to copy the SSH keys from the boot machine to the rest of the machines.

    Installation_Step1

    For passwordless login, first generate a public/private key pair as shown below:

    SSH_KeyGen

    Once the keys are generated, the public key can be copied to each node with the below command:

    SSH_KeyExchg

    example:

    SSH_KeyExchgExam
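    The screenshots above correspond to commands along these lines (a sketch; the user and host IPs are placeholders for your environment):

    ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa   # generate the key pair on the boot node
    ssh-copy-id root@<node-ip>                          # repeat for every cluster node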

     

    Copy the SSH private key from the boot node into the cluster directory as /opt/ibm-cloud-private-3.1.x/cluster/ssh_key:

    cp ~/.ssh/id_rsa ./cluster/ssh_key

     

    Note: Manually SSH to each machine using its IP address to confirm that passwordless login works; even after the above steps it may still prompt for host-key confirmation or a password.

    Add entries for each host in the /etc/hosts file on all participating nodes before starting the installation.

    hosts_mapping

    example:

    hosts_mapping_ex
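    For illustration, the entries look roughly like this (IP addresses and host names are placeholders):

    10.0.0.1   host1   # boot + master + proxy + management
    10.0.0.2   host2   # worker
    10.0.0.3   host3   # worker
    10.0.0.4   host4   # worker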

    Note: While the ICP installation process is running, the /etc/hosts file on all of the cluster nodes is automatically updated to include an entry for clusterName.icp. This entry maps to cluster_vip, or to cluster_lb_address if cluster_vip is not set. ICP uses clustername.icp:8500/xx/xxx entries resolved through /etc/hosts to pull the ICP registry Docker images.

    Once passwordless login is established, download the below mentioned software from the PartnerWorld support or Passport Advantage sites:

    a) ibm-cloud-private-x86_64-3.1.1.tar.gz

    b) icp-docker-18.03.1_x86_64.bin

    Installer_Arch

     

    Run the icp-docker-18.03.1_x86_64.bin binary on all the participating hosts, which in our scenario are Host1, Host2, Host3 and Host4. This provisions each machine with the prerequisite Docker environment for the ICP 3.1.1 installation.

    Docker_Install

    Now that the Docker hosts are created, check the installed Docker version to make sure that Docker is installed properly.

    Docker_version
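    The two screenshots above likely correspond to commands along these lines (a sketch; the --install flag is the one documented for the ICP Docker package):

    chmod +x icp-docker-18.03.1_x86_64.bin
    sudo ./icp-docker-18.03.1_x86_64.bin --install   # install Docker on the host
    docker --version                                 # verify the installed Docker version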

    Essential prerequisite: install Python 2 on all the Docker hosts. Run the below command to install Python.

    *********************

    apt install python-minimal

    *******************

    Check the Python version on all the participating hosts:

    Python_version
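    A quick check (the output should report a 2.x version):

    python --version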

    ICP uses a Docker-based installer whose image needs to be loaded into the local Docker image store on the boot node. To complete this step, run the below command.

    Docker_Installer

    example:

    Docker_Installer1
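    The command in the screenshots is typically of this form (a sketch; the archive name matches the download from step a above):

    tar xf ibm-cloud-private-x86_64-3.1.1.tar.gz -O | sudo docker load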

    The above command loads all the necessary images and dependencies used to install ICP.

    Images_Coubt

    Create an installation directory to store the IBM Cloud Private configuration files in and change to that directory. For example, to store the configuration files in /opt/ibm-cloud-private-3.1.1, run the following commands:

    createDir
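    For example (a sketch):

    mkdir -p /opt/ibm-cloud-private-3.1.1
    cd /opt/ibm-cloud-private-3.1.1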

    Check the exact name of the inception image used to install ICP. Run the below command:

    Inception
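    One way to find it (a sketch):

    sudo docker images | grep icp-inception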

    Extract the configuration files from the installer image. Run the below command from inside /opt/ibm-cloud-private-3.1.1 folder

    InceptionCmdRun
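    The documented form of this extraction step is roughly the following (a sketch; adjust the image tag to what the previous command reported):

    sudo docker run -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception-amd64:3.1.1-ee cp -r cluster /data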

    You will see the below content extracted inside ibm-cloud-private-3.1.1 folder

    ExtractedContent

    The key installation files of ICP are config.yaml, hosts and ssh_key, located inside the /opt/ibm-cloud-private-3.1.1/cluster directory, where:

    config.yaml: The configuration settings that are used to install IBM Cloud Private to your cluster.

    hosts: The definition of the nodes in your cluster.

    misc/storage_class: A folder that contains the dynamic storage class definitions for your cluster.

    ssh_key: A placeholder file for the SSH private key that is used to communicate with other nodes in the cluster.

    docker-engine: Contains the IBM Cloud Private Docker packages that can be used to install Docker on your cluster nodes.

    Config.yaml

    Cfg1

    network_cidr defines the pod network IP range that Calico IPAM allocates from; one can verify it against the tunl0 interface address on each node. service_cluster_ip_range defines the IP range from which Kubernetes service cluster IPs (virtual IPs) are allocated.

    Cfg2

    Cfg3
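    For reference, the relevant config.yaml entries look roughly like this (a sketch; the values are placeholders, not recommendations):

    network_cidr: 10.1.0.0/16              # pod network (Calico IPAM)
    service_cluster_ip_range: 10.0.0.0/16  # Kubernetes service virtual IPs
    cluster_lb_address: <master-public-ip> # ingress for kubectl/console traffic
    proxy_lb_address: <proxy-public-ip>    # ingress for application traffic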

    cluster_lb_address is meant for kubectl (API) ingress, while proxy_lb_address is meant for application load ingress. If one gives a private host IP for cluster_lb_address, then kubectl can only be run from within the private network of the ICP installation.

    The hosts file is another key installation file in ICP, as it maps hosts to the ICP node roles – master, proxy, etc.

    hosts_file-1
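    The hosts file uses INI-style sections, one per node role; roughly (a sketch with placeholder IPs matching the topology above):

    [master]
    10.0.0.1

    [proxy]
    10.0.0.1

    [management]
    10.0.0.1

    [worker]
    10.0.0.2
    10.0.0.3
    10.0.0.4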

    Copy the boot node's SSH private key to the /opt/ibm-cloud-private-3.1.1/cluster/ssh_key file:

    SSH_Key_Internal_Cp

    SSH_Key_Internal_Ex

    We are now ready for Installation. Run the below command from within /opt/ibm-cloud-private-3.1.1/cluster directory.

    Installation
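    The documented install invocation is roughly the following (a sketch; adjust the image name/tag to your inception image):

    sudo docker run --net=host -t -e LICENSE=accept -v "$(pwd)":/installer/cluster ibmcom/icp-inception-amd64:3.1.1-ee install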

    Installation logs can be checked in the below location (the cluster/logs directory under the installation folder):

    InstallationLogs

     

    When the installation completes successfully, the installer prints the ICP console URL along with the default credentials.

    Install_Complete

     

    The above configured topology can be confirmed from the ICP console:

    ICP_Topology_Det

  4. Uninstalling and Cleaning ICP Setup

    There are situations where an existing ICP installation has an issue and ICP needs to be re-installed after cleaning up the previous setup. One can use the below steps to re-install ICP.

    Step1:

    docker run -e LICENSE=accept --net=host -t -v "$(pwd)":/installer/cluster ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.1.1-ee uninstall

    Stop and remove all Docker container instances:

    docker stop $(docker ps -a -q)
    docker rm $(docker ps -a -q)

    To delete all the images:

    docker system prune -a

    This removes all unused containers, networks, and images (add --volumes to also remove unused volumes).

    Running the prune command is the best option, as it completely cleans up the local Docker storage and registry.

     

  5. Commandline Utilities For ICP

    ICP provides multiple command line utilities for the benefit of application development and administration as mentioned below:

     

    ICP_CLI._Options

    One can reach the above options through the below menu choice:

    ICP_CLI

     

    Depending on the OS, one has the option to select the appropriate executable:

    ICP_CLI._OSSelections
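    For example, the cloudctl CLI can also be fetched and used from the command line; a hedged sketch (the host address and API port 8443 are the defaults and may differ in your environment):

    curl -kLo cloudctl https://<cluster-host>:8443/api/cli/cloudctl-linux-amd64
    chmod +x cloudctl && sudo mv cloudctl /usr/local/bin/
    cloudctl login -a https://<cluster-host>:8443 --skip-ssl-validation   # then select an account and namespace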

     

     

  6. Application deployment and Access

    Let's explore how an application is deployed using a Helm chart available within ICP and how its vital statistics can be explored using the ICP monitoring tools. Let's select Nginx as the candidate application.

     SC1

    Once the above application is deployed, one can check its status by going to Workloads > Deployments in the menu.

    SC2

    Click on the hyperlink “my-nginx-ibm-nginx-dev-nginx”

    SC3

     

     

    To review Pod details click on “my-nginx-ibm-nginx-dev-nginx-79959b9fcc-bbp68”

    SC4

     

    Click on containers to get details of all containers running inside this Pod.

    SC5

     

    All the logs are shown in the Kibana dashboard. To check logs for a Pod or container, select the ellipsis button on the extreme right and select View Logs.

    SC6

    This will open Kibana dashboard as shown below:

    SC7

    To access the service details needed to consume the Nginx service, go to Network Access > Services.

     

     

    Service1

    The below service detail shows that the Nginx service is exposed on a ClusterIP, which is accessible only to other services from within the cluster. One cannot access this service from clients outside the ICP worker node cluster. To make the service accessible to clients outside the cluster, we need to use a NodePort or an Ingress controller. Let's see how one can expose this service through a NodePort.

    CaptureLabelsAndClusterIP

     

    Capture the Label details from the above page:

    app=ibm-nginx-dev,
    chart=ibm-nginx-dev-1.0.1,
    heritage=Tiller,
    release=nginx-sh
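    For reference, an equivalent NodePort service can also be created from the command line; a hedged sketch (the deployment name is the one shown in the console above, and the port is illustrative):

    kubectl expose deployment my-nginx-ibm-nginx-dev-nginx --type=NodePort --name=nginx-nodeport --port=8080
    kubectl get service nginx-nodeport   # note the allocated NodePort (default range 30000-32767)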

    Now follow the below steps to expose your service on a NodePort:

    FillInDetails_1

     

    FillInDetails_2

    FillInDetails_3

    FillInDetails_4

    FillInDetails_5

     

    kubectlgetservices

     

    Check for external access by clicking on the hyperlink:

    FillInDetails_6

  7. Using Heapster

    The top command displays resource (CPU/Memory/Storage) usage of pods.
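    In ICP this is available through kubectl (metrics are served by the cluster's metrics stack, e.g. Heapster); a quick sketch:

    kubectl top pods --all-namespaces   # per-pod CPU and memory usage
    kubectl top nodes                   # per-node resource usage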

     

    heapster1

     

    heapster2

     

     

  8. References

    a) https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example

    b) https://www.nginx.com/blog/nginx-ingress-controller-ibm-cloud-private/

    c) https://console.bluemix.net/docs/containers/cs_uc_health.html#cs_uc_health

    d) https://neuvector.com/network-security/kubernetes-networking/

    e) https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0.3/troubleshoot/etcd_fails.html

     

     
