IBM® Cloud Private is an application platform for developing and managing on-premises, containerized applications. It is based on standard Kubernetes architecture and topology, with a few additional augmentations such as an optional Vulnerability Advisor node, a management node, and a catalog with IBM Middleware and other products.
It is an integrated environment for managing containers that includes the container orchestrator Kubernetes, a private image registry, a management console, and monitoring frameworks.
IBM Cloud Private delivers a customer-managed container solution for enterprises. It is also available in a community edition, IBM® Cloud Private-CE, which provides a limited offering that is available at no charge and ideal for test environments.
IBM Cloud Private is installed across multiple Docker hosts, with each host assigned different responsibilities such as master, worker, or management. The master node is responsible for managing the other nodes in terms of resource allocation, state maintenance, scheduling, and monitoring. By default these management responsibilities, such as monitoring, metering, and logging, are handled by the master node, though one can designate a separate Docker host as a dedicated management node.
In this blog I have tried to capture information that I gathered while learning ICP. The installation is done on machines with local disks that meet the storage prerequisites of ICP; no SAN mounts have been used.
High Level Architecture
The ICP topology discussed in this blog is one of the basic topologies and is primarily meant for development and testing environments. Fig2 shows a three-node cluster topology in which each node is a Docker host, i.e. a physical machine with Docker installed on it. Every host constituting an ICP infrastructure, whether it is associated with management or with deploying application workloads, is a Docker host.
Fig1: Docker Host
All the worker nodes that participate in the cluster must follow the Kubernetes networking prerequisites:
1. All containers should communicate with each other without NAT.
2. All nodes should communicate with all containers without NAT.
3. The IP as seen by one container is the same as seen by the other container (in other words, Kubernetes bars any IP masquerading).
4. Pods can communicate regardless of which node they are scheduled on.
The pause container associated with each pod provides a sharable IP address to all other containers running inside that pod.
The above rules help ensure that:
1. The scheduler can deploy pod replicas on any of the nodes transparently during initial deployment.
2. Horizontal pod autoscaling works across nodes.
3. Pods can be rescheduled to other available node(s) in case of node failure.
The Kubernetes networking model is based on a flat address space. All pods in a cluster (spread across the worker nodes) can directly see each other. Each pod has its own IP address, and there is no need to configure any NAT. In addition, containers in the same pod share their pod’s IP address and can communicate with each other through localhost. A running pod is always scheduled on one node, physical or virtual. All containers running on the same node can communicate with each other through a shared file system, IPC, or localhost. kube-proxy helps determine the proper pod IP address and port serving each request using virtual IPs and iptables.
Any update to a Service triggers an update to iptables from kube-proxy. When a new Service is created, a virtual IP address is chosen and an iptables rule is set to direct its traffic to kube-proxy via a random port. kube-proxy runs on all the nodes participating in the ICP cluster, hence one has cluster-wide resolution for the service's virtual IP address; kube-dns records also point to this VIP. The iptables rule only sends traffic to the service entry in kube-proxy. Once kube-proxy receives the traffic for a particular service, it must then forward it to a pod in the service’s pool of candidates.
In the case of cross-node traffic, kube-proxy or iptables simply passes traffic on to the correct pod IP for the service, since pod IP addresses are routable. The traffic passes through the network overlay connecting these nodes.
ICP uses Calico for communication between nodes. Calico uses IP-in-IP encapsulation, or tunneling, to flow packets between workloads. At a basic level, this means that the IP packets created by the workload are encapsulated by the kernel with another packet that has a different destination IP. On VMware infrastructure, ICP supports both Calico and NSX-T.
Calico provides a highly scalable networking and network policy solution for connecting Kubernetes pods based on the same IP networking principles as the internet. Calico can be deployed without encapsulation or overlays to provide high-performance, high-scale data center networking. Calico also provides fine-grained, intent based network security policy for Kubernetes pods via its distributed firewall.
Calico relies on an agent running on each node for the purposes of initializing the Calico network on the node, updating routing information, and applying policy. For cross-node traffic, the packet is sent to the IPIP tunnel interface and encapsulated with an IPIP tunnel header.
The Calico agent is packaged up as a container that starts multiple binaries that work together:
Felix: Programs routes and network interfaces on the node for connectivity to and from workloads
BIRD: BGP client that is used to sync routing information between nodes. BIRD picks up routes programmed by Felix, and distributes them via BGP.
confd: Monitors etcd datastore for configuration changes
Fig2: ICP Cluster
IBM Cloud Private uses Calico to manage network traffic. Calico is a scalable network fabric that provides IP-in-IP tunneling for inter-workload traffic. The network fabric performs source/destination address checks and drops traffic when those addresses are not recognized.
Calico uses BGP to distribute routes between nodes, while the overlay traffic itself is carried over IP-in-IP encapsulation. The Calico overlay creates a tunnel through which containers residing on different hosts can communicate with one another directly. To check the MTU values for each network interface, run this command from the master node of your cluster:
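For example, on Linux the per-interface MTU can be listed with ip (a sketch; the interface names will vary by host, and Calico's tunnel typically appears as tunl0):

```shell
# Print each interface name together with its MTU (run on the master node)
ip -o link show | awk '{print $2, $5}'
```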
In Kubernetes, every pod has its own routable IP address. Kubernetes networking with the help of CNI network plug-in like Calico takes care of routing all requests internally between hosts to the appropriate pod. External access is provided through a service, load balancer, or ingress controller, which Kubernetes routes to the appropriate pod.
Installation Architecture and Configuration Settings
To start with the installation, one should first ensure passwordless login into each of the participating machines from the boot node. In my scenario I have one multihomed machine with public and private IP addresses, to which I have assigned the proxy and management responsibilities (master + mgmt). The rest of the machines are single-homed with only private IP addresses; these I have designated as worker nodes participating in the cluster. To ensure passwordless login, one needs to copy the SSH keys from the boot machine to the rest of the machines.
For passwordless login, first generate a public/private key pair as shown below:
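A minimal sketch of the key generation step (the empty passphrase -N "" and the default key path are assumptions; adjust them to your security policy):

```shell
# Generate an RSA key pair with no passphrase on the boot node
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
```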
Once the keys are generated, they can be exchanged using the command below:
Copy the SSH private key from the boot node into /opt/ibm-cloud-private-3.1.x/cluster/ssh_key on the cluster nodes:
cp ~/.ssh/id_rsa ./cluster/ssh_key
Note: Manually ssh to each machine using its IP address to verify that passwordless login works; often, even after the above steps, it still prompts for key exchange.
Add entries for each host in the /etc/hosts file on all participating nodes before starting the installation.
Note: While the ICP installation process is running, the /etc/hosts file on all of the cluster nodes will be automatically updated to include an entry for clusterName.icp. This correlates to cluster_vip, unless cluster_vip is not set, in which case it will correlate to cluster_lb_address. ICP uses the clustername.icp:8500/xx/xxx in the /etc/hosts file to pull the ICP registry Docker images.
Once passwordless login is established, download the software mentioned below from the PartnerWorld support or Passport Advantage sites:
Run the icp-docker-18.03.1_x86_64.bin binary on all the participating hosts, which in our scenario are Host1, Host2, Host3, and Host4. This provides all the machines with the prerequisite Docker environment needed for the ICP 3.1.1 installation.
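A sketch of that step, assuming the binary has been copied to each host (the --install flag follows IBM's documented usage for this bundled installer):

```shell
# Make the bundled Docker installer executable, then run it on each host
chmod +x icp-docker-18.03.1_x86_64.bin
sudo ./icp-docker-18.03.1_x86_64.bin --install
```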
Now that the Docker hosts are created, check the installed Docker version to make sure that Docker is installed properly.
Essential prerequisite: install Python 2 on all the Docker hosts. Run the command below to install Python.
apt install python-minimal
Check for python version on all the participating hosts
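For example:

```shell
# Confirm the interpreter version on each host
python --version
```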
ICP uses a Docker-based installer, for which the image needs to be loaded into the private Docker registry on the boot node. To complete this step, run the command below.
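Per IBM's documented procedure for 3.1.1, the installer tarball is streamed into docker load (the tarball name assumes the x86_64 download):

```shell
# Load the ICP installer images into the local Docker registry on the boot node
tar xf ibm-cloud-private-x86_64-3.1.1.tar.gz -O | sudo docker load
```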
This loads the private Docker registry with all the images and dependencies necessary for installing ICP.
Create an installation directory to store the IBM Cloud Private configuration files in and change to that directory. For example, to store the configuration files in /opt/ibm-cloud-private-3.1.1, run the following commands:
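For example:

```shell
# Create the installation directory and switch into it
mkdir -p /opt/ibm-cloud-private-3.1.1
cd /opt/ibm-cloud-private-3.1.1
```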
Check the exact name of the inception image used to install ICP by running the command below.
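One way to find it, assuming the images were loaded in the previous step:

```shell
# List locally loaded images and filter for the inception installer
docker images | grep inception
```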
Extract the configuration files from the installer image by running the command below from inside the /opt/ibm-cloud-private-3.1.1 folder.
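Per the documented 3.1.1 procedure, the cluster directory is copied out of the inception image (the uname/sed pipeline picks the architecture-specific image tag):

```shell
# Extract the cluster/ configuration directory into the current folder
sudo docker run -v $(pwd):/data -e LICENSE=accept \
  ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.1.1-ee \
  cp -r cluster /data
```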
You will see the content below extracted inside the ibm-cloud-private-3.1.1 folder.
The key installation files of ICP are config.yaml, hosts, and ssh_key, located inside the /opt/ibm-cloud-private-3.1.1/cluster directory, where:
· config.yaml: The configuration settings that are used to install IBM Cloud Private to your cluster.
· hosts: The definition of the nodes in your cluster.
· misc/storage_class: A folder that contains the dynamic storage class definitions for your cluster.
· ssh_key: A placeholder file for the SSH private key that is used to communicate with other nodes in the cluster.
· docker-engine: Contains the IBM Cloud Private Docker packages that can be used to install Docker on your cluster nodes.
network_cidr defines the IP range that Calico IPAM assigns to pods; one can check the tunl0 interface IP address to verify it falls in this range. service_cluster_ip_range defines the virtual IP range for Kubernetes services.
cluster_lb_address is meant for ingress for kubectl commands, while proxy_lb_address is meant for application load ingress. If one sets cluster_lb_address to a private host IP, then kubectl can be run only from the private network of the ICP installation.
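An illustrative config.yaml fragment tying these settings together (all addresses and ranges here are hypothetical examples, not recommendations for your network):

```yaml
# cluster/config.yaml (excerpt)
network_cidr: 10.1.0.0/16             # pod network assigned by Calico IPAM
service_cluster_ip_range: 10.0.0.0/16 # virtual IPs for Kubernetes services
cluster_lb_address: 192.0.2.10        # ingress for kubectl / management APIs
proxy_lb_address: 192.0.2.11          # ingress for application workloads
```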
The hosts file is another key installation file in ICP, as it maps hosts to ICP node roles: master, proxy, etc.
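An illustrative hosts file for the kind of topology described here (all IPs are placeholders; the multihomed machine carries the master, proxy, and management roles):

```ini
[master]
192.0.2.10

[worker]
192.0.2.21
192.0.2.22
192.0.2.23

[proxy]
192.0.2.10

[management]
192.0.2.10
```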
Copy the boot node's SSH private key to the /opt/ibm-cloud-private-3.1.1/cluster/ssh_key file.
We are now ready for installation. Run the command below from within the /opt/ibm-cloud-private-3.1.1/cluster directory.
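A sketch of the install invocation, following IBM's documented pattern for the 3.1.1 inception image:

```shell
# Run the installer from inside /opt/ibm-cloud-private-3.1.1/cluster
sudo docker run --net=host -t -e LICENSE=accept \
  -v "$(pwd)":/installer/cluster \
  ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.1.1-ee install
```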
Installation logs can be checked in the location below.
When the installation completes successfully, one will see the ICP URL with default credentials.
The configured topology can be confirmed from the ICP console:
Uninstalling and Cleaning ICP Setup
There are situations where an existing ICP installation has an issue and ICP needs to be re-installed after cleaning up the previous setup. One can use the steps below to re-install ICP.
docker run -e LICENSE=accept --net=host -t -v "$(pwd)":/installer/cluster ibmcom/icp-inception-$(uname -m | sed 's/x86_64/amd64/g'):3.1.1-ee uninstall
Stop and remove all Docker container instances:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
To remove all unused containers, volumes, networks, and images:
docker system prune -a
The best option is to run the prune command, which completely cleans the registry.
Commandline Utilities For ICP
ICP provides multiple command line utilities for the benefit of application development and administration as mentioned below:
One can reach the above options through the menu choice below:
Depending on the OS, one has the option to select an executable.
Application deployment and Access
Let's explore how an application is deployed using a Helm chart available within ICP, and examine its vital statistics using the ICP monitoring tools. Let's select Nginx as the candidate application.
Once the above application is deployed, one can check its status by going to Workloads > Deployments in the menu.
Click on the hyperlink “my-nginx-ibm-nginx-dev-nginx”
To review Pod details click on “my-nginx-ibm-nginx-dev-nginx-79959b9fcc-bbp68”
Click on containers to get details of all containers running inside this Pod.
All the logs are shown in the Kibana dashboard. To check logs for a pod or container, select the ellipsis button on the extreme right and select View Logs.
This will open Kibana dashboard as shown below:
To access service details to consume the Nginx service, go to Network Access > Services.
The service detail below shows that the Nginx service is exposed on a ClusterIP, which is accessible only to other services from within the cluster. One cannot access this service from clients outside the ICP worker node cluster. To make the service accessible to clients outside the cluster, we need to use a NodePort or an Ingress controller. Let's see how one can expose this service through a NodePort.
Capture the label details from the above page:
Now follow the steps below to expose your service on a NodePort.
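As an alternative to the console steps, the same change can be sketched with kubectl (the service name is taken from the deployment above; the default namespace is an assumption):

```shell
# Switch the service type to NodePort, then read back the assigned port
kubectl -n default patch svc my-nginx-ibm-nginx-dev-nginx \
  -p '{"spec": {"type": "NodePort"}}'
kubectl -n default get svc my-nginx-ibm-nginx-dev-nginx
```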
Check for the external access by clicking on the hyperlink
The kubectl top command displays resource (CPU/memory) usage of pods.