Lab 1: Overview

In this hands-on lab, you’ll learn how to work with Big SQL, a key component of IBM’s big data platform based on Apache Hadoop.  Big SQL is included with several IBM BigInsights offerings.

Big SQL enables IT professionals to create tables and query data in BigInsights using familiar SQL statements. To do so, programmers use standard SQL syntax and, in some cases, SQL extensions created by IBM to make it easy to exploit certain Hadoop-based technologies. Big SQL shares query compiler technology with DB2 (a relational DBMS) and offers a wide breadth of SQL capabilities.
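To make this concrete, here is a minimal sketch of what a Big SQL table definition and query might look like. The table and column names are hypothetical, and CREATE HADOOP TABLE is one of the IBM extensions mentioned above; it stores the table's data in files on HDFS rather than in a relational database.

```sql
-- Hypothetical example: define a Big SQL table whose data lives in HDFS.
-- CREATE HADOOP TABLE is an IBM extension to standard SQL DDL.
CREATE HADOOP TABLE sales
(
   tx_id     INT,
   product   VARCHAR(30),
   amount    DECIMAL(9,2)
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

-- Queries against the table use standard SQL syntax.
SELECT product, SUM(amount) AS total
FROM sales
GROUP BY product;
```

Because the query syntax is standard, existing SQL-based tools and applications can often issue statements like the SELECT above without modification.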

Organizations interested in Big SQL often have considerable SQL skills in-house, as well as a suite of SQL-based business intelligence applications and query/reporting tools. The idea of being able to leverage existing skills and tools — and perhaps reuse portions of existing applications — can be quite appealing to organizations new to Hadoop. Indeed, some companies with large data warehouses built on relational DBMS systems are looking to Hadoop-based platforms as a potential target for offloading “cold” or infrequently used data in a manner that still allows for query access. In other cases, organizations turn to Hadoop to analyze and filter non-traditional data (such as logs, sensor data, social media posts, etc.), ultimately feeding subsets or aggregations of this information to their relational warehouses to extend their view of products, customers, or services.

1.1. What you’ll learn

After completing all exercises in this lab guide, you’ll know how to

  • Inspect the status of your Big SQL service through Apache Ambari, a Web-based management tool included with the IBM Open Platform for Apache Hadoop.
  • Create a connection to your Big SQL server from a command line environment (JSqsh).
  • Execute Big SQL statements and commands.
  • Create Big SQL tables stored in the Hive warehouse and in user-specified directories of your Hadoop Distributed File System (HDFS).
  • Load data into Big SQL tables.
  • Query big data using Big SQL projections, restrictions, joins, and other operations.
  • Store complex data types in Big SQL tables.
  • Gather statistics about your tables and explore data access plans for your queries.
  • Create and execute SQL-based scalar and table user-defined functions.
  • Work with Big SQL web tooling to explore database metrics and perform other tasks.
  • Work with Big SQL using Squirrel SQL Client, an open source SQL tool, to explore your database, execute queries, and chart query results.
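Several of the tasks listed above reduce to short Big SQL statements. As a rough sketch (the table name, file path, and delimiter are illustrative; consult the LOAD HADOOP and ANALYZE TABLE reference for your release):

```sql
-- Load a delimited file from the distributed file system into an
-- existing Big SQL table (path and delimiter are hypothetical).
LOAD HADOOP USING FILE URL '/tmp/sales.csv'
   WITH SOURCE PROPERTIES ('field.delimiter' = ',')
   INTO TABLE sales OVERWRITE;

-- Gather statistics so the query optimizer can choose better access plans.
ANALYZE TABLE sales COMPUTE STATISTICS FOR ALL COLUMNS;
```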

Allow 6 – 7 hours to complete all sections of this lab. Separate labs are available on using Big SQL with HBase and using Spark to access Big SQL data.

Special thanks to Uttam Jain, Carlos Renteria, and Raanon Reutlinger for their contributions to earlier versions of this lab.  Thanks also to Nailah Bissoon and Daniel Kikuchi for their reviews and to Kevin Hom for his assistance with publication.

1.2. About your environment

This lab requires a BigInsights environment in which Big SQL is installed and running.  Big SQL is part of BigInsights Quick Start Edition, BigInsights Data Analyst, and BigInsights Data Scientist.

Examples in this lab are based on a sample multi-node cluster with the configuration shown in the tables below.  If your environment is different, modify the sample code and instructions as needed to match your configuration.

Account                  User ID   Password
Root account             root      passw0rd
Big SQL administrator    bigsql    bigsql
Ambari administrator     admin     admin
Knox Gateway account     guest     guest-password


Property                         Value
Host name
Ambari port number               8080
Big SQL database name            bigsql
Big SQL port number              51000
HBase installation directory     /usr/iop/
Big SQL installation directory   /usr/ibmpacks/bigsql/4.0
JSqsh installation directory     /usr/ibmpacks/common-utils/jsqsh/2.14
Big SQL samples directory        /usr/ibmpacks/bigsql/4.0/bigsql/samples/data
BigInsights Home



About the screen captures, sample code, and environment configuration

Screen captures in this lab depict examples and results that may vary from what you see when you complete the exercises.  In addition, some code examples may need to be customized to match your environment.

1.3. Getting started

To get started with the lab exercises, you need access to a working Big SQL environment and a secure shell (command window).  A free Quick Start Edition is available for download from Hadoop Dev.


This lab was tested against a native BigInsights installation on a multi-node cluster.  Although the IBM Analytics for Hadoop cloud service on Bluemix includes Big SQL, it does not support SSH access.  As a result, certain examples in this lab, such as those involving JSqsh (the Big SQL command line interface) and DFS commands, cannot be executed using that Bluemix service.  Therefore, if you want to follow all the examples in this lab, you should install and configure BigInsights on your own cluster.  Consult the instructions in the product’s Knowledge Center.

Before continuing with this lab, verify that Big SQL and all of its prerequisite services are running.

If you have any questions or need help getting your environment up and running, visit Hadoop Dev and review the product documentation or post a message to the forum.  You cannot proceed with subsequent lab exercises without access to a working environment.
