Overview

Skill Level: Any Skill Level

This article is a continuation of the previous two articles on Installation and Networking. The focus of this article is high availability of the master node.

Step-by-step

  1. Introduction

    This article focuses on the high availability aspect of master nodes in IBM Cloud Private (ICP). It is a continuation of the previous two articles on Installation and Networking. High availability plays a key role while setting up production infrastructure, because no component of ICP should be subject to a single point of failure. There are multiple components of ICP with different roles and responsibilities:

    a) Administration

    b) Application Workload

    c) Application Routing

    d) Vulnerability Assessment

    where Administration is handled by the Master node, Application Workload by the Worker nodes, Application Routing by the Proxy node, and Vulnerability Assessment by the Vulnerability Advisor node. The master node has a key role to play, as all the major Kubernetes services, such as the API server and the controller manager, run on this node. All command-line administration requests fired through kubectl pass through the master node. Hence the master is one of the keys to administering the whole Kubernetes cluster.

    To configure high availability for the master nodes, one needs to meet the below requirements as per the ICP Information Center guidelines:

    a) Configure POSIX-compliant shared storage external to the IBM Cloud Private cluster. /var/lib/registry and /var/lib/icp/audit should be mounted on this shared storage. You must set the permissions to 0755 for each directory.

    b) Assign the cluster_vip address to an available master node to let it act as the leading master node. The cluster_vip IP address must be on the NIC that you specify in the vip_iface parameter.

    The use case below was tested on the Ubuntu 18.04 operating system.

  2. Setting up the NFS Server environment

    The first step in configuring high availability for the master node is to have an external NFS server. By external we mean external to the Kubernetes cluster. In most production scenarios the NFS server is already installed in the customer's data center infrastructure or is consumed as a managed service on cloud.

    In this article the NFS server is installed manually by dedicating a physical machine to the NFS role. Once the NFS server is up and running, create a folder to serve as the NFS share for the client machines. An example is given below.

    SC1-1
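
    For reference, the equivalent commands on Ubuntu 18.04 look roughly like the sketch below; the share path /mnt/icp is taken from the folders used later in this article, so adjust it to your environment:

    sudo apt update
    sudo apt install -y nfs-kernel-server   # install the NFS server packages

    sudo mkdir -p /mnt/icp                  # top-level folder that will hold the NFS shares
    sudo chown nobody:nogroup /mnt/icp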

    Along similar lines, create the /mnt/icp/registry and /mnt/icp/auditlogs folders as shown below and restart the NFS server.

    SC4-1
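
    A minimal sketch of the equivalent commands (the paths come from this article, and 0755 is the permission required by the ICP guidelines quoted in the Introduction):

    sudo mkdir -p /mnt/icp/registry /mnt/icp/auditlogs
    sudo chmod 0755 /mnt/icp/registry /mnt/icp/auditlogs
    sudo systemctl restart nfs-kernel-server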

    Add the client machine IP addresses, which in our case are the master nodes, to the /etc/exports file to map the shares to the clients.

    NFS_Server_Exports
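
    As an illustration, the /etc/exports entries for three master nodes could look like the lines below; the master node IP addresses 10.41.13.101-103 and the export options are placeholders, so substitute your own values:

    # /etc/exports on the NFS server: one line per exported directory
    /mnt/icp/registry    10.41.13.101(rw,sync,no_subtree_check) 10.41.13.102(rw,sync,no_subtree_check) 10.41.13.103(rw,sync,no_subtree_check)
    /mnt/icp/auditlogs   10.41.13.101(rw,sync,no_subtree_check) 10.41.13.102(rw,sync,no_subtree_check) 10.41.13.103(rw,sync,no_subtree_check)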

    After making all the above configurations in the host system, now is the time to export the shared directories through the following command:

    exportfs -a

    Finally, in order to make all the configurations take effect, restart the NFS Kernel server. This step is not needed on cloud or for managed NFS servers in customer data centers.

    systemctl restart nfs-kernel-server

    On the client side, install the NFS client packages on all the master nodes.

    nfsclientpackage
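
    On Ubuntu 18.04 this amounts to the commands below (nfs-common is the standard Ubuntu NFS client package):

    sudo apt update
    sudo apt install -y nfs-common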

    Create the folders on the clients, i.e. the master nodes, where the NFS server shares will be mounted. As per the ICP guidelines in the Information Center

    https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.0/installing/high_availability.html

    these should be /var/lib/registry and /var/lib/icp/audit respectively. Create these folders and mount them to the respective NFS server directories as shown below:

    SC7
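
    A minimal sketch of the equivalent commands, run on each master node; <nfs-server-ip> is a placeholder for your NFS server's address:

    sudo mkdir -p /var/lib/registry /var/lib/icp/audit
    sudo mount <nfs-server-ip>:/mnt/icp/registry  /var/lib/registry
    sudo mount <nfs-server-ip>:/mnt/icp/auditlogs /var/lib/icp/audit

    To survive reboots, you would normally also add matching entries to /etc/fstab on each master node.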

     

    Verify that these directories are properly mounted:

    SC71
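
    For example, either of the commands below will list the NFS mounts:

    df -h | grep -E '/var/lib/registry|/var/lib/icp/audit'
    mount | grep nfs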

    If you fail to complete the mounting before starting the installation, you will get the exception shown below:

    mounttingErr

     

    We are now done with the NFS configuration, giving us the architecture below:

    ArchDiag2

     

    where the shared folders in the diagram refer to /var/lib/registry and /var/lib/icp/audit. Now test the functioning of the NFS share before we proceed to install ICP with master HA. To test, create a text file in the mounted folder on any client; this text file should appear on all nodes, including the NFS server.
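
    For example (the file name nfs-test.txt is just an illustrative placeholder):

    # on any one master node
    touch /var/lib/registry/nfs-test.txt

    # on the NFS server the file should be visible under the exported folder
    ls -l /mnt/icp/registry/nfs-test.txt

    # and on the other master nodes under the mount point
    ls -l /var/lib/registry/nfs-test.txt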

  3. Networking in Master Node High Availability

    The cluster_vip address is used by the virtual IP manager, which designates the leading master out of the pool of masters by assigning the cluster_vip to it. The virtual IP manager monitors the health of the cluster's master and proxy nodes. If the leading master or proxy node is no longer available, the virtual IP manager selects an available node and assigns the corresponding virtual IP address to it.

    While IBM Cloud Private manages high availability through the virtual IP manager, one can instead use an external load balancer to distribute the load of the master and proxy nodes and to facilitate external communication. To use a load balancer, specify its IP address as the cluster_lb_address and proxy_lb_address parameter values in the config.yaml file during installation.

    For a high availability environment, you must set up at least one of the following parameters: cluster_vip, cluster_lb_address.

    ArchDiag3

     

    The ucarp service allows a pair of hosts to take over a single virtual IP address. The idea behind ucarp is that a small number of hosts each have their own IP address, but any of them is potentially able to grab a "floating" or "virtual" IP address. This floating address is the one that is then highly available. In brief, ucarp helps configure an IP address that is live on only one host at a time. Having two or more hosts and moving one address between them makes it possible to survive the outage of a single host, which makes the system highly available, though not more scalable.

    The host that is not the owner of the floating IP will not receive any traffic and will sit idle. This is a kind of active-passive scenario.

    As shown in the article on Installation, the three ICP files that one needs to modify to set up master high availability are config.yaml, hosts, and ssh_key, with the details below:

     

    master_ip_addresses
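
    As an illustration, the [master] section of the hosts file for three master nodes looks like the snippet below; the IP addresses are placeholders for your own master node addresses:

    [master]
    10.41.13.101
    10.41.13.102
    10.41.13.103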

    Config.yaml

     https://github.com/sharadc2001/icp/blob/master/config.yaml

    In the config.yaml file, configure the cluster_vip IP address as per the rules given in the ICP Information Center:

    https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.0/installing/custom_install.html#HA

    For the vip_iface parameter value, provide your environment's interface name. For the cluster_vip parameter value, provide an available IP address, preferably one from the same IP range that your cluster nodes use. For the master nodes, the virtual IP has to be in the same subnet. In the case below, the cluster_vip is a private IP address from my network CIDR. You can confirm your vip_iface through the ifconfig or ip a command.

     

    ens_find
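
    For example (the interface name shown in your own output, e.g. ens160, is what goes into vip_iface):

    ip a    # note the interface carrying the node's primary IP address, for example ens160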

     

    and then put these values in the config.yaml file as shown below:

    virtual_ip_addresses
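
    A minimal config.yaml excerpt, assuming the interface name ens160 as a placeholder and the cluster_vip 10.41.13.59 used later in this article:

    # config.yaml (excerpt)
    vip_iface: ens160
    cluster_vip: 10.41.13.59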

    Note: If one decides to take the cluster_vip from a private network, then it should be a non-routable address.

    As you can see below, the request for the ICP console is routed through the cluster_vip address 10.41.13.59.

     

    ICP_Console_New-1

     

     

    The screenshot below shows the three active master nodes proxied through the cluster_vip address.

     Master_Nodes_Console

     

    As seen above, all the master nodes are active and reachable directly as well as through the cluster_vip, which acts as a proxy for the master node cluster. From a command-line perspective, when using kubectl one should use a master's direct IP address.
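
    As a sketch, a standard kubectl client configuration against a single master could look like the commands below; the master IP 10.41.13.101, the cluster/context names, and the <token> value are placeholders, and port 8001 is assumed to be the ICP Kubernetes API server port:

    kubectl config set-cluster icp-cluster --server=https://10.41.13.101:8001 --insecure-skip-tls-verify=true
    kubectl config set-credentials icp-admin --token=<token>
    kubectl config set-context icp-context --cluster=icp-cluster --user=icp-admin
    kubectl config use-context icp-context
    kubectl get nodes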

  4. Testing Master Node High Availability

    The final architecture that we implemented is shown below:

     

    FinalNetworkDiagram

    To test the scenario, shut down the master virtual machines or physical instances one by one and check whether you are still able to access the ICP console through the virtual IP.
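
    For example, after shutting down the leading master you can keep polling the console through the VIP; the address 10.41.13.59 comes from this article, and 8443 is assumed to be the default ICP console port:

    # should keep returning an HTTP status code as long as at least one master is up
    curl -k -s -o /dev/null -w "%{http_code}\n" https://10.41.13.59:8443/console/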

  5. References

    a) https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.1.0/installing/custom_install.html#HA

    b) https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.1/installing/install_containers.html#deploy

    c) https://www.ibm.com/support/knowledgecenter/SSBS6K_3.1.1/installing/custom_install.html#HA

    d) https://vitux.com/install-nfs-server-and-client-on-ubuntu/

     
