In the event that a Big SQL cluster goes down due to a disaster, applications can switch to a secondary cluster where the meta-data and distributed filesystem data are replicated. The IBM Big SQL and Big Replicate solution automates DR for both meta-data and distributed filesystem data, replicating the data files as well as the Big SQL and Hive meta-data to the DR cluster. This blog outlines the steps to set up a Big SQL DR cluster without Big Replicate. Other components such as Hive, HDFS and HBase also need to be considered for DR. This blog will focus on meta-data replication but will give some guidelines for data replication.
Important - This blog assumes that the two clusters have an identical number of nodes and identical Big SQL configuration. If not, fix these differences before continuing.
For DR, online backups of the head node (catalog) are required on the primary cluster (Cluster1); these need to be shipped periodically to the DR cluster (Cluster2) and restored there. The frequency with which the backups are taken and shipped to the DR site is up to the administrator. DR from a cluster with HA to a cluster without HA is possible; in this case the database configuration on the DR site needs to be updated to stop HADR after the restore operation completes. If HA is to be enabled on the DR site, this has to be done after failover has occurred.
There is a Big SQL Backup and Restore utility, available from Big SQL 5.0.2, that can be used for Big SQL standalone disaster recovery. You can use this utility to implement a disaster recovery solution for the Big SQL database from the primary cluster (Cluster1) to the standby cluster (Cluster2). The backups of other components also need to be included when using this utility; the order of operations is:
1. Create backup directories:
   mkdir /bigsql_backup/backup_offline
   mkdir /bigsql_backup/backup_online
2. Take an initial offline backup of the head node on Cluster1 and turn on log archiving:
   bar.py -m backup --offline --logarchmeth -d /backup/bigsql_dbbackup/backup_offline
   Note that by default logarchmeth is set to LOGRETAIN. To use an actual log archive path, the --logarchmeth parameter can be combined with the -w parameter.
3. Take an online backup of the head node, ship it to the DR site and clean up unnecessary logs:
   bar.py -m backup --prune -d /backup/bigsql_dbbackup/backup_online -r 'Head node of Cluster 2'
4. Take a backup of the Hive metastore (section 2.3 below)
5. Restore the backup on the DR head node and rollforward on Cluster2:
   bar.py -m restore -d /backup/bigsql_dbbackup/backup_online
6. Restore the Hive metastore (section 4.8 below)
7. Transfer the Hive warehouse directory from Cluster1 to Cluster2 (section 5 below)
8. Transfer data files to Cluster2 (section 6 below)
9. Repeat steps 3 to 8 as often as desired based on the predetermined backup frequency
10. Client re-route/configure HA on the DR site (section 8 below)
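The recurring part of the cycle (steps 3 to 6) can be sketched as a dry-run script that prints, rather than executes, the commands for one pass; the backup path is the one used above, and the restore and mysql load run on the DR side:

```shell
# Dry-run sketch of one backup/ship/restore pass (steps 3 to 6 above).
# Commands are printed, not executed; run the real ones on the hosts noted.
backup_dir=/backup/bigsql_dbbackup/backup_online
dr_host="Head node of Cluster 2"     # placeholder target, as in step 3
echo "bar.py -m backup --prune -d ${backup_dir} -r '${dr_host}'"   # step 3, primary head node
echo "mysqldump hive > /hive_backup/hive.sql"                      # step 4, primary metastore host
echo "bar.py -m restore -d ${backup_dir}"                          # step 5, DR head node
echo "mysql hive < /hive_backup/hive.sql"                          # step 6, DR metastore host
```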
Or, if you don't use this utility, you can implement a standalone disaster recovery solution using the steps below:
1.1 Create backup and log archive directories
When taking a backup, first choose a path that is different from the bigsql_db_path or bigsql_data_directories and that is also not in the Big SQL home directory (/home/bigsql/). Use the mkdir command to create backup and archive log directories on the head node of the primary cluster as the bigsql user id:
mkdir -p /bigsql_backup/backup/offline
mkdir -p /bigsql_log/active
mkdir -p /bigsql_log/archive
1.2 Update Big SQL Configuration (optional)
The default log path is in /var, and often this path is very small. The log path can be altered by the bigsql database administrator. Using the bigsql id on the head node of Cluster1 and Cluster2, issue the following command:
db2 "update db cfg for bigsql using NEWLOGPATH '/bigsql_log/active'"
1.3 Decide on the recovery window
Determine the frequency with which backups will be shipped to the DR cluster. In the case of an outage of the primary cluster, the timestamp of the last backup image determines what meta-data is available on the DR cluster. For example, if the chosen recovery window is one day, then the DR cluster will always be up to 1 day behind the primary cluster.
1.4 Initial offline Big SQL catalog backup
After turning on archive logging, an offline backup of the catalog node is required, so schedule this first backup during off-peak hours. Issue the following commands, which require DB2 administrator (DBADM) privileges, as the bigsql user from Cluster1's Big SQL head node:
db2 "update db cfg for bigsql member 0 using logarchmeth1 'DISK:/bigsql_log/archive'"
db2_all "db2 force applications all"
db2 terminate
db2 deactivate db bigsql
db2 "backup db bigsql on dbpartitionnums (0) to /bigsql_backup/backup/offline"
db2 activate db bigsql
2. Backup the meta-data on the primary site
2.1 Online Big SQL catalog backups
Subsequent Big SQL backups after the initial backup are online. Since the backup is online, there can be inflight transactions while the backup is being taken. For example, there can be tables created while the backup image is being created. The logs for these inflight transactions are stored in the backup image. Use the include logs option to store the log files in the backup image. On the Big SQL head node on the primary cluster issue the following command to take an online backup:
db2 "backup db bigsql on dbpartitionnums (0) online to /bigsql_backup/backup include logs"
2.2 Prune log files on the primary cluster
By default, Big SQL uses circular logging, which means that after a specified number of log files have been written, log files get overwritten. When archive logging is enabled, the log files are not overwritten; one side effect is that these log files can fill up the log path. The prune history command can be used on the primary head node to prune the log files when the delete option is used. Running the prune command is not necessary after each online backup image is taken, but it can help remove unneeded log files that fill up the log path. If the log path has become full, take a new online backup using the command in the previous step and run the prune command with a timestamp prior to the backup image.
Note that if automatic backup and log maintenance is enabled via the AUTO_DEL_REC_OBJ Big SQL db cfg parameter, then the backup image itself can be deleted if the timestamp specified by the prune history command is the same as the timestamp of the backup image. Therefore, as a best practice, choose a timestamp for the prune history command prior to the online backup image. For example, if the backup image timestamp is 20171015161107, then the prune command can be issued with the 2017101516 timestamp:
db2 prune history 2017101516 and delete
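The safe prune timestamp can be derived mechanically by truncating the image's 14-digit timestamp to the hour, which guarantees the prune point falls before the image itself; a minimal sketch:

```shell
# Derive a prune timestamp strictly before the backup image by keeping
# only the YYYYMMDDHH portion of the 14-digit backup timestamp.
backup_ts=20171015161107                       # timestamp of the latest online backup image
prune_ts=$(echo "$backup_ts" | cut -c1-10)     # 2017101516
echo "db2 prune history ${prune_ts} and delete"
```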
If HA is configured on the primary site, automatic pruning of the logs can be enabled from Ambari from version 4.2.5 onwards. If automatic pruning is enabled, then there is no need to issue the prune command.
2.3 Hive meta-store backups
More information on Hive backup and restore can be found in the HDP documentation on backup. For convenience, some commands are given for the default hive MySQL database.
First, login to the Ambari UI, go to Hive -> Config -> Advanced, check the "Database Type" property (make sure it is MySQL) and get the database name, e.g. HIVE. Login via shell to the meta-store host (e.g. styes4.fyre.ibm.com) as the root user and issue the following commands:
mkdir /hive_backup
mysqldump hive > /hive_backup/hive.sql
ls -ltr /hive_backup/hive.sql
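If you want to keep more than one metastore dump per recovery window, a timestamped file name avoids overwriting the previous copy. The naming scheme below is an assumption, not part of the documented procedure, and a temp dir stands in for /hive_backup so the sketch is self-contained:

```shell
# Sketch: build a timestamped dump file name instead of reusing hive.sql.
# The hive_YYYYMMDDHHMMSS.sql naming is an assumption for illustration.
backup_dir=$(mktemp -d)                  # stands in for /hive_backup
ts=$(date +%Y%m%d%H%M%S)
dump_file="${backup_dir}/hive_${ts}.sql"
echo "mysqldump hive > ${dump_file}"     # dry-run: print the command to run
```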
3. Ship backups to the DR cluster
3.1 Ship the Big SQL backup image to the DR cluster
Use the mkdir command to create backup directories on the head node of the DR cluster as the bigsql user and use scp to copy the backup image from the catalog node of the primary cluster to the catalog node of the DR cluster as the bigsql user.
From the DR head node:
mkdir -p /bigsql_backup/backup
From the primary head node:
scp /home/bigsql/backup/bigsql_dbbackup/BIGSQL.0.bigsql.DBPART000.20170927071532.001 firstname.lastname@example.org:/home/bigsql/backup/bigsql_backup/BIGSQL.0.bigsql.DBPART000.20170927071532.001
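Since the 14-digit timestamp in the image name sorts lexicographically, the newest image can be picked automatically so the scp step always ships the latest backup. A sketch, using a temp dir with sample file names in place of the real backup directory (the DR host name is an assumption):

```shell
# Sketch: select the newest Big SQL backup image by its embedded timestamp.
# A temp dir with sample names stands in for the real backup directory.
backup_dir=$(mktemp -d)
touch "$backup_dir"/BIGSQL.0.bigsql.DBPART000.20170927071532.001
touch "$backup_dir"/BIGSQL.0.bigsql.DBPART000.20171001120000.001
latest=$(ls "$backup_dir"/BIGSQL.0.bigsql.DBPART000.*.001 | sort | tail -1)
echo "scp $latest bigsql@drhead.example.com:/bigsql_backup/backup/"   # hypothetical DR host
```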
3.2 Ship the Hive backup image to the DR cluster
On Cluster2, first login to the Ambari UI, go to Hive -> Config -> Advanced, check the "Database Type" property (make sure it is MySQL) and get the database name, e.g. HIVE. Note the hive meta-store host (Cluster2's hive meta-store host). Copy the hive meta-store dump from Cluster1 to Cluster2's hive meta-store host into the directory created, using the scp command. Login via shell to the hive meta-store host as the root user and issue the following commands:
From the hive meta-store host on the DR site:
mkdir /hive_backup
From the hive meta-store host on the primary site:
scp /hive_backup/hive.sql email@example.com:/hive_backup/hive.sql
where bogus4.fyre.ibm.com is the hive meta-store host on the DR site
4. Restore the backups on the DR head node
A restore operation needs to be done on the DR site after the new online backups have been shipped there. After a restore operation is done, a rollforward to end of logs is required. After the rollforward operation is completed, restore the hive meta-store backup. Important - no active clients should be doing work on the database on the DR site while the restore is in progress.
4.1 Disable automatic maintenance utilities
The administrative task scheduler (ATS) should be disabled prior to doing the restore and rollforward operations on the DR site. Big SQL uses ATS for automatic statistics collection and automatic syncing of the Big SQL catalog. After turning off ATS, restart the Big SQL service. The database must be deactivated prior to doing the restore, as the restore is an offline operation. On the head node as the bigsql user issue the following commands and restart the Big SQL service:
db2set DB2_ATS_ENABLE=NO
# restart the Big SQL service
db2_all "db2 force applications all"
db2 terminate
db2 deactivate db bigsql
4.2 Cleanup logs from the DR site
The database must be deactivated before moving the log files (i.e. stick with the instructions laid out in this blog in the exact order). Make sure that the active log directory on the DR head node is empty. Use the following commands on the DR cluster's head node to determine the active log path(s) and then use the mv command to move the logs to alternate path(s).
mkdir -p /home/bigsql/bigsql_log/active_copy
mkdir -p /home/bigsql/bigsql_log/archive_copy

db2 get db cfg for bigsql | grep -i "Path to log files"
Path to log files = /var/ibm/bigsql/database/bigdb/bigsql/NODE0000/SQL00001/LOGSTREAM0000/
db2 get db cfg for bigsql | grep LOGARCHMETH1
First log archive method (LOGARCHMETH1) = DISK:/bigsql_log/archive
db2 get db cfg for bigsql | grep -i OVERFLOWLOGPATH
Overflow log path (OVERFLOWLOGPATH) =

mv /var/ibm/bigsql/database/bigdb/bigsql/NODE0000/SQL00001/LOGSTREAM0000/* /home/bigsql/bigsql_log/active_copy/
mv /bigsql_log/archive/* /home/bigsql/bigsql_log/archive_copy

ls -ltr /var/ibm/bigsql/database/bigdb/bigsql/NODE0000/SQL00001/LOGSTREAM0000/
total 0
ls -ltr /bigsql_log/archive/
total 0
Note that these logs should not be needed anymore, so once the restore and rollforward are complete and there is a successful connection to the database, these log files can be deleted.
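The active log path can also be extracted programmatically from the db cfg output rather than read by eye; a small sketch, where a sample output line stands in for live db2 output:

```shell
# Sketch: parse the active log path out of a `db2 get db cfg` output line
# so the mv step can be scripted. The cfg_line below is a sample, not
# live output from db2.
cfg_line="Path to log files = /var/ibm/bigsql/database/bigdb/bigsql/NODE0000/SQL00001/LOGSTREAM0000/"
log_path=$(echo "$cfg_line" | sed 's/.*= *//')
echo "mv ${log_path}* /home/bigsql/bigsql_log/active_copy/"
```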
4.3 Create and clean up log target directory
Create a log target directory on the head node of the DR site, and prior to each bigsql restore operation ensure that this directory is empty.
mkdir -p /bigsql_log/logtarget
For subsequent restores use the rm -rf command from this directory to remove ALL of the log files in this path.
Failure to remove these log files will result in the following error during the restore operation: SQL2581N Restore is unable to extract log files or restore a log directory from the backup image to the specified path. Reason code "3".
4.4 Restore the Big SQL catalog
When the online backup is taken, a file with a timestamp is created on the Big SQL head node. Example of a valid timestamp is 20170927071532. Go to the backup directory and determine the timestamp of the latest backup. Restore the Big SQL catalog with this timestamp using the following commands as the bigsql user on the Big SQL head node:
db2 restore db bigsql from /home/bigsql/backup/backup_online_dr taken at 20171020084653 logtarget /home/bigsql/bigsql_log/logtarget replace existing without prompting
Messages such as "SQL2540W Restore is successful, however a warning "2539" was encountered during Database Restore while processing in No Interrupt mode" are normal.
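The "taken at" timestamp can be pulled out of the image file name automatically (it is the field before the trailing sequence number), so the restore command can be built without copying it by hand; a sketch:

```shell
# Sketch: extract the 14-digit timestamp from a backup image file name
# and print the restore command built from it.
image=BIGSQL.0.bigsql.DBPART000.20171020084653.001
ts=$(echo "$image" | awk -F. '{print $(NF-1)}')
echo "db2 restore db bigsql from /home/bigsql/backup/backup_online_dr taken at ${ts} logtarget /home/bigsql/bigsql_log/logtarget replace existing without prompting"
```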
4.5 Rollforward to end of logs
A rollforward operation through to the end of the logs is required after the restore is completed. Any client connection attempt will fail with SQL1117N while the database is in "rollforward pending" state; only after the rollforward command completes will the database be accessible. Note that since all of the log files were moved out of the active log path prior to the restore operation, the rollforward reads the logs extracted from the backup image into the logtarget (overflow) directory. Issue the following command as the bigsql user on the Big SQL head node on the DR cluster:
db2 "rollforward database bigsql to end of logs on dbpartitionnum (0) and stop overflow log path ('/home/bigsql/bigsql_log/logtarget')";
4.6 Disable HA on the DR site (if HA configured on the primary site)
When HA is configured on the primary cluster, the HA configuration is stored in the backup image. After restoring the database on the DR site, as a safety precaution, stop HADR if it is enabled and also disable log archiving prior to activating the database, using these commands:
db2 stop hadr on database bigsql
db2 update db cfg for bigsql using LOGARCHMETH1 OFF
4.7 Re-enable automatic maintenance utilities
Re-enable ATS after the restore is done on the head node of the DR site:
db2set DB2_ATS_ENABLE=YES
# restart the Big SQL service
4.8 Restore the Hive meta-data
When the Big SQL catalog is restored, the Hive meta-store should also be restored on the DR cluster so that the Big SQL and Hive catalogs are in sync. The next set of steps should be done from Cluster2's hive meta-store host as the root user.
Edit the hive.sql file copied over to Cluster2. Since the backup was taken on Cluster1, all hdfs path references to Cluster1 need to be replaced with Cluster2 hdfs paths: replace ALL the hdfs:// entries for Cluster1 (high) with entries for Cluster2 (bogus). More information on hive restore can be found in the HDP documentation on restore. For convenience, some commands are given for restoration of the Hive meta-store for the default hive MySQL database:
After editing the hive.sql file, one changed entry would be:
hdfs://high4.fyre.ibm.com:8020/apps/hive/warehouse -> hdfs://bogus4.fyre.ibm.com:8020/apps/hive/warehouse
mysql hive < /hive_backup/hive.sql
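Instead of editing the dump by hand, the URI rewrite can be done with sed. A sketch using the example host names from this blog, with a temp file containing one sample LOCATION entry standing in for the real hive.sql:

```shell
# Sketch: rewrite Cluster1 HDFS URIs to Cluster2 in the Hive dump with sed.
# A temp file with a sample entry stands in for /hive_backup/hive.sql.
old="hdfs://high4.fyre.ibm.com:8020"
new="hdfs://bogus4.fyre.ibm.com:8020"
dump=$(mktemp)                                  # stands in for /hive_backup/hive.sql
echo "LOCATION '${old}/apps/hive/warehouse'" > "$dump"
sed -i "s|${old}|${new}|g" "$dump"
cat "$dump"                                     # then: mysql hive < /hive_backup/hive.sql
```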
5. Transfer the Hive warehouse directory from Cluster1 to Cluster2
Initially, copy the hive warehouse directory from Cluster1 to Cluster2; on subsequent passes, transfer the delta only. In Big SQL, multiple schemas can be created, and each schema corresponds to a Hive database. The /apps/hive/warehouse directories should be similar on both clusters. Use the hadoop distcp command to copy the hive warehouse directory from Cluster1 to Cluster2. Note that the target directory does not have 'warehouse' specified because we want to replace the contents of the warehouse directory on Cluster2.
For example, where high is the 1st cluster and bogus is the 2nd cluster:
hadoop distcp hdfs://high4.fyre.ibm.com:8020/apps/hive/warehouse hdfs://bogus4.fyre.ibm.com:8020/apps/hive/
6. Transfer data files to Cluster2
There are several existing methods to replicate the data to the DR cluster. External tools such as IBM Big Replicate or Falcon can be used; home-grown tools using HDFS snapshots and the hadoop distcp tool can also be used. It is left up to the hadoop administrator to determine how the data will be replicated to the DR cluster. One strategy is to initially copy all the data in the distributed file system to the 2nd cluster and, on subsequent passes, copy the delta only. For example, from any machine in Cluster1, use hadoop distcp commands to copy the files from Cluster1 to Cluster2.
For example, where high4 corresponds to Cluster1 and bogus4 corresponds to Cluster2:
hadoop distcp hdfs://high4.fyre.ibm.com:8020/user/bigsql/hadoopds1g/call_center/call_center.dat hdfs://bogus4.fyre.ibm.com:8020/user/bigsql/hadoopds1g/call_center
7. Repeat Steps 2 to 6
After the recovery window has elapsed (e.g. after 1 day), repeat steps 2 to 6: back up on the head node of the primary cluster, ship the backups to the head node of the DR cluster, and restore them on the DR cluster.
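One way to automate the repetition is a cron entry on the primary head node that invokes a wrapper script implementing steps 2 to 6; the schedule, script path and log file below are assumptions for illustration:

```
# Hypothetical crontab entry for the bigsql user: run the DR cycle daily at 2 AM
0 2 * * * /home/bigsql/scripts/bigsql_dr_cycle.sh >> /home/bigsql/bigsql_dr.log 2>&1
```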
8. Client Re-route/Configure HA on DR site
After a disaster has occurred, applications can be redirected to the DR site using Automatic Client Reroute or by switching at the DNS level. Then configure HA on the DR site if required.
Big SQL disaster recovery from one cluster to a DR cluster was discussed in this blog. The DR strategy used here involves taking periodic online backups of the Big SQL catalog and Hive meta-store and restoring them on the DR site. Data replication can be done using hadoop distcp commands or other tools.
Thanks to Hebert Pererya for technical contributions to this blog.