Design Overview of Big SQL HA
The Big SQL High Availability (Big SQL HA) solution provides high availability for the metadata of the BIGSQL database (the Big SQL Metastore) and for the Big SQL Scheduler component. The BIGSQL database is the database that Big SQL applications use to access Hadoop tables. High availability for the Hadoop NameNode and for the Hive catalog is not part of this solution.
The Big SQL HA solution became Generally Available in version 188.8.131.52 of Big SQL and this document relates to that version specifically.
At its core, the Big SQL HA solution uses two key enabling technologies: DB2 High Availability and Disaster Recovery (DB2 HADR) and Tivoli System Automation for Multiplatforms (TSAMP).
DB2 HADR is used to ship the transaction logs of the main Big SQL database to a live standby system (the Secondary Head). The DB2 HADR components on the Secondary Head constantly replay the transaction logs shipped to them from the Primary Head in order to keep a real-time, up-to-date copy of all the catalog information of the Big SQL database on the Primary Head node.
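Because HADR underlies the replication, the standby's replay state can be inspected with DB2's standard monitoring tools. A minimal sketch, assuming the instance user and database are both named bigsql (names inferred from this document, not confirmed here):

```shell
# Run as the bigsql instance user on either head node.
# Reports the HADR role (PRIMARY or STANDBY), connection state,
# and log positions, which show how far the standby has replayed.
db2pd -db bigsql -hadr
```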
TSAMP is used to create a complete resource model for the Big SQL resources on the Primary Head and on the Secondary Head. This resource model contains resource groups for the Big SQL processes related to the Big SQL database and for the Big SQL Scheduler. TSAMP constantly monitors the processes in these resource groups, can start and stop any of them, and ensures that they are highly available: if a process is killed for any reason, it is restarted automatically. If the Primary Head becomes unavailable for any reason (a power outage, for example), the Secondary Head assumes the role of the Primary Head. Client applications that are connected to the Primary Head and are set up to take advantage of Automatic Client Reroute (ACR) are automatically re-routed to the Secondary Head. This event is called a failover. After a failover, the original Secondary Head continues to act as the Primary Head and the original Primary Head acts as the Secondary Head. If the acting Primary (original Secondary) is rebooted, for example, the acting Secondary (original Primary) re-assumes the role of the Primary Head and the acting Primary (original Secondary) re-assumes the role of the Secondary Head. This event is called a failback.
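The TSAMP resource model described above can be inspected from the command line with TSAMP's standard status utility. A sketch, assuming TSAMP is installed on the head nodes (resource-group names vary by installation):

```shell
# Run as root on the Primary or Secondary Head.
# Lists all TSAMP resource groups and the operational state
# (Online/Offline) of each member resource.
lssam
```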
The Big SQL Scheduler runs on both the Primary Head and the Secondary Head at all times, but only the Scheduler on the Primary Head actively participates in processing user queries. The Scheduler instance on the Secondary Head runs in standby mode only. When a failover or failback occurs, the Scheduler instance on the new Primary becomes the active instance and the other instance continues running in standby mode.
The worker nodes in the Big SQL cluster are simply processing units for the user workload. In the event of a failover or failback, worker nodes automatically detect the failure of the "old" primary head node, determine its "new" location, and re-connect to the "new" head node, all while user applications are failing over. Note that Hadoop data sits on HDFS (or GPFS) and, as such, is already highly available via native and implicit replication on HDFS. As a result, if a worker node goes down, the data can be accessed via any other available worker node that holds a replica of the data.
Only the Primary Head can communicate with the worker nodes for processing queries; the Secondary Head is not allowed to communicate with the workers. The Secondary Head's only role is to constantly replay the transaction logs sent to it from the Primary Head and to be ready to take over the role of the Primary during a failover. When a failover occurs, the new Primary Head informs the worker nodes that it is now the acting Primary, and communication begins between the new Primary Head and the workers. All communication between the old Primary and the worker nodes is terminated (this avoids potential "split-brain" scenarios).
Before Enabling High Availability (HA)
Ensure that the "net-tools" package is installed on the primary and secondary nodes, because utilities in that package are needed by the EnableHA process. To check whether the package exists, execute the following command in a terminal on each machine:
rpm -q net-tools
If this is not found, you can use yum to install the package as the root user:
yum install net-tools
Big SQL High Availability (HA) Install Process in Ambari
HA for Big SQL is enabled via the Secondary Head component in Ambari. When the component is first installed, HA is in an inactive state. To activate it, select the "EnableHA" option in the Big SQL service actions:
To enable HA, the Secondary Head component must be located on a dedicated host that does not contain the Head node or any worker nodes. If the Secondary Head component is located on a host that contains a worker node or the Head node, it must first be relocated to its own dedicated host. To relocate the component, delete it through Ambari's hosts view (the component must be stopped to do this) and then add it to a different host.
As part of the EnableHA process, the user is logged out of Ambari. After logging back in, the message "Big SQL HA has been set up successfully on Secondary host" is displayed in the operation output, indicating a successful setup.
The Big SQL HA Functional Interface in Ambari
Just like the head and worker nodes, the Secondary Head node can be started and stopped through Ambari. When you stop the component, you no longer have high availability for the Big SQL database metastore. When you restart it, the Secondary Head node starts replicating logs once again and eventually catches up with any logs missed while it was deactivated. The Big SQL database metastore is then once again in HA mode.
Ambari displays a status message for the Secondary Head node. "Started" means that the Big SQL database metastore is highly available. "Stopped" means that your Secondary Head node is not running, or that it is unable to communicate with the active Primary Head node.
In the case of a Primary Head node failure, the Secondary Head node takes over and becomes the new "acting" Big SQL metastore. Once the failed node is working again, it works as a standby database. This is reflected when you try to start or stop either head node, with an informational message saying "Failover in effect!"
User-Triggered Failover or Failback
The user can trigger a failover or failback for maintenance activities by running the following command as the instance user (bigsql):
bigsql-HA-trigger-failover-failback.sh -U <instance-user>
Disabling Big SQL HA
You can disable high availability for Big SQL by choosing "DisableHA" in the service actions menu. This returns the Secondary Head component to an inactive state.
If you are in a failover state, and attempt to disable HA, you will see an error message asking you to run the following command:
bigsql-HA-trigger-failover-failback.sh -U <instance-user>
on the active Head node. This command triggers a failback so that the original Primary Head is once again the active Primary. Note that both the Primary Head and the Secondary Head must be running before you run the above command.
Once this command completes successfully, run the "DisableHA" option again.
Example Big SQL HA Usage Scenario
- Install and configure a Big SQL cluster in Ambari
- Enable High Availability
  - Ensure that the Secondary Head component is on its own dedicated host. A dedicated host means there is no Big SQL Head node or Worker node on this host.
  - If it isn't, delete it from the current host and then add it to a new one
  - On the Big SQL page, choose "EnableHA" in the Service Actions menu
- Outage on the primary head node: the Secondary Head takes over the Primary role. The old Primary Head node comes back and begins working as the Secondary Head.
- Disable High Availability
  - Choose the "DisableHA" option in Big SQL service actions.
  - If still in a failover state, you will get an error message with instructions to trigger a failback.
  - Follow the instructions to trigger a failback by running the bigsql-HA-trigger-failover-failback.sh script.
  - Once failback has completed, retry disabling HA.
Configuring Clients to Work with High Availability in Big SQL
To configure clients to automatically reroute to the new Primary Head node in the case of a failover, some settings are recommended on the client side. In this example we use an IBM JDBC client to connect to the Primary Head node. The Primary Head node and the Secondary Head already configure an alternate-server value that is sent to the client in case of failover; this helps applications that are already connected to reconnect to the new Primary if a failover occurs while the application is running. However, for applications that establish a new connection after a failover has occurred, client reroute values must be set on the client side. We recommend the following data source properties in JDBC on the client side when working with high availability in Big SQL.
Let's assume we have a primary head node (firstnode.ibm.com, a hypothetical hostname for this example) and a secondary head node, secondnode.ibm.com. Let's also assume that the Big SQL server port is 32051 and that this is the same on both the Primary Head and the Secondary Head.
We want to configure the client to try to connect to the secondary head node in case a failover has occurred. We use the following data source property to set this on the client:
clientRerouteAlternateServerName = secondnode.ibm.com
We also need to tell the client which port to try to connect to in the event of an automatic client reroute with the following property:
clientRerouteAlternatePortNumber = 32051
The client may still throw errors in the event of a failover, so if you do not want to see -4498 errors, set the client to enable seamless failover:
enableSeamlessFailover = yes
IBM recommends that you use the IBM JDK for all client connections to Big SQL. However, if you have to use the OpenJDK for business reasons, you will also need to set the following property to force the client to reroute after a set amount of time:
blockingReadConnectionTimeout = 1200
When setting this parameter, be sure to align the value with the time your longest-running application SQL statement needs to execute; otherwise the statement will time out.
Using our example hosts above, this is how we would set these values in a JDBC connection URL:
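A sketch of such a URL, assuming a hypothetical primary hostname firstnode.ibm.com, the secondary hostname secondnode.ibm.com, port 32051, and a database named bigsql (the hostname and database name are illustrative assumptions, and the property values simply mirror the settings listed earlier; check the IBM Data Server Driver for JDBC and SQLJ documentation for the exact value syntax your driver version accepts):

```
jdbc:db2://firstnode.ibm.com:32051/bigsql:clientRerouteAlternateServerName=secondnode.ibm.com;clientRerouteAlternatePortNumber=32051;enableSeamlessFailover=yes;blockingReadConnectionTimeout=1200;
```

Each property appears after the database name, introduced by a colon and terminated by a semicolon, so the same connection string can be used unchanged by any application that creates new connections after a failover.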