Applications deployed on the cloud must keep running even if there is a hardware failure. This article provides the steps to work around certain challenges faced in a specific scenario where we need to provide high availability (HA) with Windows Server 2012 R2 failover clustering across multiple sites (data centers) on IBM Cloud. We have also provided a technical overview for setting up multi-subnet clustering.
Typical Windows Server clustering across multiple data centers
For HA, you must ensure that all components are duplicated so that there is no single point of failure. However, if these components all run in the same region, a disaster recovery problem arises: all the hardware in that region can become unavailable at once, for example due to networking issues. Hence, it is preferable to spread the failover cluster across different data centers. The cluster nodes at the main site provide access to the clustered service or application, and failover normally occurs only between those nodes. If a disaster takes out the nodes at the main site, the nodes at the secondary site start providing the service automatically, or with minimal intervention.
A Windows multi-site failover cluster is essentially a group of cluster nodes distributed across multiple sites. The cluster nodes at each site are connected to local SAN storage in the same site, with replication between the SAN storage at each site.
When hosting the cluster servers across multiple data centers, an IP address is required for administering the failover cluster. Hence, besides the node IP addresses, we need additional private IP addresses from the same subnet for the cluster.
However, SoftLayer provides only one public or private primary IP address per VM from that subnet. It does not provide additional IP addresses from the same primary subnet. Primary IP addresses are bound to each individual server and cannot be moved. Hence, the primary IP address cannot be used for the cluster.
The workaround
Since an additional IP address is required for the cluster, we can use a portable IP address from SoftLayer as a workaround. Primary subnets are for use by SoftLayer when placing new devices, while portable subnets are for customer use. Simply put, if we want to assign an IP to something ourselves, for example a cluster, we need a portable subnet available to us. A SoftLayer portable IP block can serve as the VIPs for clustering and HA because those addresses can be quickly and easily moved around.
In this article, we will discuss a sample scenario, where we will set up a Microsoft SQL Server cluster. There are two Windows 2012 R2 servers (Server A and Server B) in two different data centers (Site A and Site B). Site A has the primary AD/DNS server and the secondary AD/DNS server is in Site B.
SoftLayer provides one primary private IP on each cluster server. However, this is not sufficient for setting up the Windows cluster. Hence, we need to order an additional portable IP subnet for each site. The subnet size depends on the number of IPs required; if there are multiple SQL Server instances to be added to the cluster, we would require multiple portable IPs.
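To gauge how large a portable subnet to order, it helps to count the assignable addresses. A minimal sketch using Python's standard ipaddress module (the subnet value is illustrative, not a SoftLayer-assigned range):

```python
import ipaddress

# Illustrative /26 portable subnet; the actual range is assigned by SoftLayer.
subnet = ipaddress.ip_network("10.0.0.0/26")
total = subnet.num_addresses  # a /26 contains 64 addresses
# The provider reserves the network, gateway, and broadcast addresses,
# leaving roughly 61 addresses assignable as portable IPs.
assignable = total - 3
print(total, assignable)  # 64 61
```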
The following diagram illustrates this sample scenario, where we are using a /26 subnet.
The following table describes the terms and names used in this article.
| Term | Description |
|------|-------------|
| Site A | Data center 1 |
| Site B | Data center 2 |
| Server A | Cluster node 1 in Site A |
| Server B | Cluster node 2 in Site B |
| Primary IP | Default private IP assigned by SoftLayer on the Ethernet adapter "PrivateNetwork-A" |
| Portable IP | Additional IP range ordered from SoftLayer, used for the cluster on both Server A and Server B |
| AD/DNS1 | Active Directory and DNS server in Site A |
| AD/DNS2 | Active Directory and DNS server in Site B |
| mssqlclust | Cluster name used for testing |
| VIP | Virtual IP used for the Windows cluster |
| OR rule | Dependency rule created in the cluster for multi-subnet failover |
| RDP | Remote Desktop Protocol |
Creating Windows multi-site multi-subnet cluster
Once the servers are ready with the above specifications and the essential network is set up, we can proceed with cluster preparation. On these SoftLayer environment servers, the following custom configurations are needed before cluster setup:
- Add portable IP on Server A and Server B.
- Disable IPV6 on Server A and Server B.
The portable IP needs to be added to the server as an alias IP on the private network interface (Ethernet adapter PrivateNetwork-A).
Use the following command to assign an alias IP:
On Server A:
netsh int ipv4 add address "Ethernet adapter PrivateNetwork-A" 10.0.0.10/26
(10.0.0.10/26 is the portable IP of Server A in this example)
On Server B:
netsh int ipv4 add address "Ethernet adapter PrivateNetwork-A" 10.0.1.10/26
(10.0.1.10/26 is the portable IP of Server B in this example)
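Before running netsh, it can help to sanity-check that each alias address actually belongs to that site's portable subnet. A small Python sketch (all addresses are illustrative):

```python
import ipaddress

def in_subnet(addr: str, subnet: str) -> bool:
    """Check whether an alias address falls inside the portable subnet."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network(subnet)

# Illustrative values: Site A's portable subnet and two candidate alias IPs.
print(in_subnet("10.0.0.10", "10.0.0.0/26"))  # True: valid alias for Site A
print(in_subnet("10.0.1.10", "10.0.0.0/26"))  # False: belongs to Site B's range
```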
You should disable IPv6 on the cluster nodes before setting up the cluster. IPv6 can be disabled at the interface level; to disable it permanently on all interfaces, make a registry change as follows:
- On the Windows Start menu, click Run.
- Enter regedit
- Navigate to HKEY_LOCAL_MACHINE → SYSTEM → CurrentControlSet → Services → Tcpip6 → Parameters.
- Right-click Parameters and add a new DWORD (32-bit) value.
- Name the value DisabledComponents.
- Set DisabledComponents to 0xFF.
- Reboot the server.
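The manual registry steps above can also be scripted. A sketch using the built-in reg.exe from an elevated command prompt (0xFF, that is 255, disables all IPv6 components):

```
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters" /v DisabledComponents /t REG_DWORD /d 0xFF /f
```

A reboot is still required for the change to take effect.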
By default, the servers resolve to their primary IP. For the cluster, however, each host must resolve to its portable IP. To switch this default behavior, execute the following steps on both nodes to make the portable IP the host resolution IP.
On Server A (192.168.0.3 is the primary IP of Server A in this example):
- RDP to the server using its portable IP (10.0.0.10 in this example).
- In an elevated command prompt, remove the primary IP:
netsh interface ip delete address name="Ethernet adapter PrivateNetwork-A" addr=192.168.0.3
- Add the primary IP back as an alias IP with SkipAsSource enabled, so it is no longer preferred as the source address:
netsh int ipv4 add address "Ethernet adapter PrivateNetwork-A" addr=192.168.0.3 SkipAsSource=true
On Server B (192.168.1.3 is the primary IP of Server B in this example):
- RDP to the server using its portable IP (10.0.1.10 in this example).
- In an elevated command prompt, remove the primary IP:
netsh interface ip delete address name="Ethernet adapter PrivateNetwork-A" addr=192.168.1.3
- Add the primary IP back as an alias IP with SkipAsSource enabled:
netsh int ipv4 add address "Ethernet adapter PrivateNetwork-A" addr=192.168.1.3 SkipAsSource=true
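To verify which addresses Windows will now use as a source, you can inspect the SkipAsSource flag. A sketch using the built-in NetTCPIP cmdlets (the interface alias is the example name used above and may differ on your servers):

```
PS C:\> Get-NetIPAddress -InterfaceAlias "PrivateNetwork-A" | Select-Object IPAddress, SkipAsSource
```

The primary IP should show SkipAsSource as True, and the portable IP as False.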
As a prerequisite for clustering, the servers must be part of the domain, so add Server A and Server B to the domain. Also, ensure that the DNS server is updated with the portable IP of both servers for name resolution.
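You can quickly confirm that name resolution now returns the portable IPs. A sketch using Resolve-DnsName (available on Windows Server 2012 and later; host names are the example names used in this article):

```
PS C:\> Resolve-DnsName ServerA -Type A
PS C:\> Resolve-DnsName ServerB -Type A
```

Each query should return the node's portable IP rather than its primary IP.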
Configuring the cluster
After you complete the custom configurations, you can proceed to set up Windows clustering between Server A and Server B. You can complete the installation and configuration using PowerShell commands.
- Install the Failover Clustering feature on both nodes. You can use the following PowerShell command for the feature installation:
PS C:\> Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
- To use the PowerShell FailoverClusters module, import it on both nodes:
PS C:\> Import-Module FailoverClusters
- Validate the hosts before creating the cluster (run on Server A):
PS C:\> Test-Cluster -Node ServerA, ServerB
- After the validation completes successfully, create the cluster with the following command, using the virtual IPs (the next available portable IP from each site; 10.0.0.11 and 10.0.1.11 in this example):
PS C:\> New-Cluster -Name mssqlclust -Node ServerA, ServerB -StaticAddress 10.0.0.11, 10.0.1.11
This command also sets up the dependency (OR) rule for mssqlclust to handle multi-subnet failover. Of the two virtual IPs, only one is online at a time (10.0.0.11 on Server A). In a failover scenario, if Server A fails, the cluster comes online with the second virtual IP (10.0.1.11) on Server B.
- The final step is to register both VIPs in DNS. Execute the following commands on Server A:
PS C:\> Get-ClusterResource "Cluster Name" | Set-ClusterParameter RegisterAllProvidersIP 1
PS C:\> Get-ClusterResource "Cluster Name" | Set-ClusterParameter HostRecordTTL 300
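Because RegisterAllProvidersIP registers both VIPs in DNS, SQL Server clients should opt in to trying all registered addresses in parallel. A sketch of a client connection string (the database name is illustrative):

```
Server=mssqlclust;Database=AppDB;Integrated Security=True;MultiSubnetFailover=True
```

The MultiSubnetFailover=True keyword (supported by SQL Server Native Client 11.0 and ADO.NET 4.5 or later) makes the client attempt connections to all IPs registered for the cluster name, so a failover to the other subnet's VIP is picked up quickly without waiting for DNS changes to propagate.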
Manual failover testing
To make sure that the VIPs move between Server A and Server B, use the following PowerShell command:
PS C:\> Move-ClusterGroup "Cluster Group" -node ServerB
Name OwnerNode State
---- --------- -----
Cluster Group ServerB Online
You can run the same test by forcefully powering off Server A. If Server A goes down, the Cluster Group should automatically come online on Server B.
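After the test, you can fail the group back with the same cmdlet:

```
PS C:\> Move-ClusterGroup "Cluster Group" -Node ServerA
```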
This article was co-authored by Prasad H Ganiga, Narayana Nanjundappa and Venkatesh Bhat.