Lab 2: Exploring your Big SQL service through Ambari

Administrators can monitor, launch, and inspect aspects of their Big SQL service through Ambari, an open source Web-based tool for managing Hadoop clusters.  In this section, you will learn how to

  • Launch Ambari.
  • Inspect the overall status of your cluster.
  • Inspect the configuration and status of your Big SQL service.
  • Identify the software repository (build source) of your Big SQL service.

Allow 30 minutes to complete this section. To work through these exercises, you must have a working BigInsights environment, as described in the initial lab in this series.
Please post questions or comments about this lab to the forum on Hadoop Dev.

2.1. Inspecting your cluster status

In this exercise, you will launch Apache Ambari and verify that a minimal set of services are running so that you can begin working with Big SQL. You will also learn how to stop and start a service.
__1.          Launch a Web browser.
__2.          Enter the URL for your Ambari service, which was configured at installation.  For example, if the Ambari server was installed at its default port of 8080, you would enter a URL of the form http://<ambari-server-host>:8080
__3.          When prompted, enter the Ambari administrator ID and password.  (By default, this is admin/admin).

Note: If your Web browser returns an error instead of a sign-in screen, verify that the Ambari server has been started and that you have the correct URL for it.  If needed, launch Ambari manually: log into the node containing the Ambari server as root and issue this command:  ambari-server start
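The steps above can be sketched at the command line. A minimal sketch, assuming a hypothetical host name (bigdata.example.com) and the default port mentioned in step 2; substitute your own Ambari server's host:

```shell
# Hypothetical host name -- replace with your Ambari server's actual host.
AMBARI_HOST="bigdata.example.com"
AMBARI_PORT=8080                                   # default Ambari port
AMBARI_URL="http://${AMBARI_HOST}:${AMBARI_PORT}"
echo "$AMBARI_URL"

# If the sign-in page does not load, check and start the server on the
# Ambari node (as root), as described in the note above:
#   ambari-server status
#   ambari-server start
```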


__4.          Verify that the Ambari console appears similar to this:
__5.          If necessary, click the Dashboard tab at the top of the screen and inspect the overall status of services that have been installed.  The previous screen capture was taken from a system in which all open source components provided in the IBM Open Platform for Apache Hadoop had been started.  The Big SQL service was also started.
__6.          Click on the Flume service in the panel at left.
__7.          Note that detailed information about the selected service appears.
__8.          Since Flume isn't required for this lab, stop the service.  To do so, expand the Service Actions drop-down menu in the upper right corner and select Stop.
__9.          When prompted, click Confirm Stop.
__10.       Monitor the operation's status in the pop-up window that appears.
__11.       When the operation completes, click the OK button at the bottom of the pop-up window to close it.
__12.       Return to the Ambari display and verify that the Flume service has stopped, as indicated by the red triangle next to the service.
__13.       Optionally, start the Flume service.  From the Service Actions menu, select Start.  Click OK when prompted and allow the process to complete.  Click OK again.
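The stop/start actions in the steps above can also be scripted through Ambari's REST API. A minimal sketch, assuming the default admin/admin credentials and a hypothetical cluster name ("mycluster"); the curl calls are commented out because they require a live cluster:

```shell
# Assumed Ambari endpoint and cluster name -- adjust for your environment.
AMBARI="http://ambari-host:8080"
CLUSTER="mycluster"

# Ambari represents a stopped-but-installed service as state "INSTALLED".
STOP_BODY='{"RequestInfo":{"context":"Stop Flume via REST"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}'
echo "$STOP_BODY"

# Stop Flume (uncomment to run against a live cluster):
# curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
#      -d "$STOP_BODY" "$AMBARI/api/v1/clusters/$CLUSTER/services/FLUME"

# To start the service again, send state "STARTED" instead of "INSTALLED".
```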
__14.       Confirm that the following services are operational on your cluster before continuing with the remaining exercises:  HDFS, MapReduce2, Nagios, Ganglia, Hive, Knox, BigInsights - Big SQL.
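The same check can be made from the command line through Ambari's REST API. A sketch, again assuming admin/admin credentials and the hypothetical cluster name "mycluster"; the all-caps service names below follow Ambari's usual convention but are assumptions to verify against your installation:

```shell
# Service names as Ambari typically registers them (assumed spellings).
REQUIRED_SERVICES="HDFS MAPREDUCE2 NAGIOS GANGLIA HIVE KNOX BIGSQL"

# Query every service's state (uncomment to run against a live cluster);
# each required service should report "state" : "STARTED".
# curl -s -u admin:admin -H 'X-Requested-By: ambari' \
#   "http://ambari-host:8080/api/v1/clusters/mycluster/services?fields=ServiceInfo/state"

for s in $REQUIRED_SERVICES; do
  echo "verify started: $s"
done
```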

2.2. Exploring your Big SQL service

Let's explore the configuration of your Big SQL service, including the nodes on which various Big SQL artifacts have been installed.  This is important for subsequent exercises, as you need to know where the Big SQL client software resides and where the Big SQL Head Node resides.  Big SQL client software includes JSqsh, a command-line interface that you'll use in a subsequent lesson, and SFTP, which you'll use later to LOAD data from a remote server into a Big SQL table.  To connect to Big SQL and issue commands or queries, you need to specify the JDBC URL of the Big SQL Head Node.
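As a concrete sketch of what that JDBC URL looks like: Big SQL connections use the IBM DB2 JDBC driver, and the head node commonly listens on port 32051 with database name bigsql. The host below is hypothetical, and the port and database name are commonly used defaults rather than guarantees; confirm all three in your own environment:

```shell
# Hypothetical head-node host; port 32051 and database name "bigsql" are
# commonly used defaults, not guaranteed for every installation.
BIGSQL_HEAD="head-node.example.com"
JDBC_URL="jdbc:db2://${BIGSQL_HEAD}:32051/bigsql"
echo "$JDBC_URL"
```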
__1.          In Ambari, click on the BigInsights - Big SQL service to display details about it.
__2.          Examine the overview information presented in the Summary tab.  In the previous screen capture, you'll note that there are two Big SQL workers installed and running.  Furthermore, JSqsh and SFTP are installed on 3 nodes.
__3.          Click on the Hosts tab towards the upper right of your screen.  A summary of the nodes in your cluster is displayed.  The image below was taken from a 4-node cluster.
__4.          Expand the components information for each node to inspect the installed services.  In particular, note which node contains the Big SQL Head node, which node(s) contain the Big SQL Workers, and which node(s) contain the Big SQL clients service (JSqsh).
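The host-to-component mapping you just inspected can also be pulled from Ambari's REST API. A sketch, with a hypothetical Ambari host and cluster name, and an assumed component name BIGSQL_HEAD (check the exact component names in your installation):

```shell
AMBARI="http://ambari-host:8080"    # hypothetical Ambari host
CLUSTER="mycluster"                 # hypothetical cluster name
COMPONENT="BIGSQL_HEAD"             # assumed component name -- verify

# Endpoint that lists which host(s) carry the given component.
ENDPOINT="$AMBARI/api/v1/clusters/$CLUSTER/host_components?HostRoles/component_name=$COMPONENT"
echo "$ENDPOINT"

# Run against a live cluster to see the hosting node:
# curl -s -u admin:admin -H 'X-Requested-By: ambari' "$ENDPOINT"
```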
__5.          Optionally, explore the software repository associated with your Big SQL installation.  In the upper right corner, click Admin > Repositories. Scroll to the bottom of the displayed page and note the repositories associated with the software you have installed on your cluster.  This information indicates the build levels for your core services (IBM Open Platform for Apache Hadoop), core utilities, and IBM value packs.
