
Performance impact of accessing TIMESTAMP fields from Big SQL with Parquet MR files - Hadoop Dev

Technical Blog Post



For optimal Big SQL performance, Parquet is recommended over other file formats, but there are observed performance implications when Big SQL accesses TIMESTAMP fields from Parquet MR files. Keep reading if you are on RHEL 6 or RHEL 7, used Hive or another Map Reduce application (including the Big SQL LOAD HADOOP command) to generate Parquet files containing TIMESTAMP data, and then created Big SQL Hadoop tables with the LOCATION keyword pointing to those files. You may see high kernel CPU time during query execution whenever these TIMESTAMP fields are referenced. This blog describes the issue, offers recommendations on how to avoid it, and outlines ways to work around it.

Description of Parquet MR TIMESTAMP performance issue

Big SQL stores TIMESTAMP values in local time, while Hive stores TIMESTAMPs in UTC. UTC does not change with the seasons, but local time may shift to adjust for daylight saving time. Time zones are expressed as UTC offsets to facilitate conversion from one time zone to another. Big SQL automatically converts UTC TIMESTAMPs to local (system) time, so no external conversion is needed to adjust for daylight saving time or time zones.
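To make the conversion concrete, here is a small illustration of the same instant rendered in UTC and in a daylight-saving local time zone (a hypothetical example using GNU date on RHEL, with America/Toronto standing in for the local zone; the timestamp value itself is illustrative):

    # The same instant shown in UTC and in a local zone that observes
    # daylight saving time (EDT is UTC-4 in summer).
    $ TZ=UTC date -d '2016-08-08 17:03:00 UTC'
    Mon Aug  8 17:03:00 UTC 2016
    $ TZ=America/Toronto date -d '2016-08-08 17:03:00 UTC'
    Mon Aug  8 13:03:00 EDT 2016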

A Parquet MR file is a Parquet file generated by Map Reduce, and it carries a Parquet MR tag in its header to indicate this. For example, when Hive is used to convert a text file to a Parquet file, or when LOAD HADOOP or Streams is used to generate a Parquet file, the resulting file will have a Parquet MR tag in its header because these applications all use Map Reduce.

When a Big SQL table is created with reference to such a file, i.e. when the LOCATION clause of the CREATE HADOOP TABLE statement points to it, Big SQL recognizes that the data was generated by Map Reduce and converts the TIMESTAMPs stored in UTC to local time. This conversion happens every time a TIMESTAMP field is selected or queried from the table; conversely, when writing new TIMESTAMP rows to this external file, Big SQL must convert the local time back to UTC.

During this conversion to and from UTC we have observed high kernel CPU time. We traced this back to a kernel lock performance issue in glibc. A partial fix is available in glibc 2.22, but RHEL 7 still ships glibc 2.17.
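If you want to confirm that a slow query is spending its time in the kernel, one hedged way is to compare user and system CPU for the Big SQL worker process while the query runs. The sketch below assumes the sysstat package is installed and that the process can be located by name (the bigsql process name here is an assumption; adjust it to your installation):

    # Sample CPU usage every 5 seconds; %system dominating %usr during
    # TIMESTAMP-heavy queries is consistent with the glibc lock
    # contention described above.
    pidstat -u -p $(pgrep -f bigsql | head -1) 5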

The first step is to determine whether you are even using Parquet MR files.

Determine whether you are using Parquet MR files

Download the jar in this blog, then use the meta option to determine whether there is a Parquet MR tag in the header of the file. If you see a parquet-mr tag in the creator field, as shown in Example 1, then you are using Parquet MR files.

    Example 1:

    yarn jar parquet-tools-1.6.0-IBM-7.jar meta /user/hadoop/parqueth/000000_0
    file:        hdfs://hotellnxbi05.torolab.ibm.com:8020/user/hadoop/parqueth/000000_0
    creator:     parquet-mr version 1.6.0
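If you only care about the creator field, the meta output can be filtered; this is a small convenience on top of the same command from Example 1:

    # Show only the creator line; a parquet-mr value means the file was
    # written by a Map Reduce application.
    yarn jar parquet-tools-1.6.0-IBM-7.jar meta /user/hadoop/parqueth/000000_0 | grep -i creator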

Approaches to get around the issue

Approach 1

If you are adding Parquet files generated by Map Reduce directly to HDFS and cannot change how these files are generated, then consider avoiding TIMESTAMP fields; this avoids the conversion that causes the high kernel CPU usage. Use VARCHAR data types instead and apply the TIMESTAMP scalar functions to convert those VARCHARs to TIMESTAMPs at the application level when needed. The easiest approach is to create a new Big SQL table, change all TIMESTAMP columns to VARCHAR(29), and use the LOCATION keyword to point at the same Parquet MR files, as sketched below.
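As a minimal sketch of that last step (the table and column names are illustrative, borrowed from the examples later in this blog), the DDL below declares the timestamp column as VARCHAR(29), and the Db2 TIMESTAMP scalar function converts the string only where the application needs a real TIMESTAMP:

    CREATE HADOOP TABLE BParquetV (
      trans_id int, product varchar(50), trans_date varchar(29)
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS PARQUETFILE
    LOCATION '/user/hadoop/parqueth';

    -- Convert to a TIMESTAMP only where the application needs one;
    -- the stored VARCHAR is read as-is everywhere else.
    SELECT trans_id, product, TIMESTAMP(trans_date) AS trans_ts FROM BParquetV;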

Approach 2

If you need to use TIMESTAMP columns, then consider using Big SQL INSERT…SELECT, rather than Hive, to generate the Parquet files from text files. Create a new Big SQL table on a new HDFS location and then use Big SQL INSERT…SELECT to populate the Parquet file. This avoids the problematic Parquet MR header in the file. Example 2 shows the steps that could be taken when generating Parquet files from text files using Hive. Note that using LOAD HADOOP will still write the problematic Parquet MR tag into the header of the file because LOAD uses Map Reduce.

    Example 2:

    hive> describe BText;
    OK
    trans_id                int
    product                 varchar(50)
    trans_date              timestamp
    Time taken: 0.381 seconds, Fetched: 3 row(s)

    hadoop fs -mkdir '/user/hadoop/parqueth'

    hive>
        > CREATE TABLE HParquet (
        > trans_id int, product varchar(50), trans_date timestamp
        > )
        > ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS PARQUETFILE
        > LOCATION '/user/hadoop/parqueth';
    OK
    Time taken: 0.288 seconds

    hive> INSERT INTO HPARQUET SELECT * FROM BTEXT;
    Query ID = nbissoon_20160808130333_beed4a57-0f74-46b2-a166-1dd6af51052b
    Total jobs = 1
    Launching Job 1 out of 1
    Tez session was closed. Reopening...
    Session re-established.

    Status: Running (Executing on YARN cluster with App id application_1465407138775_0590)

    --------------------------------------------------------------------------------
            VERTICES      STATUS  TOTAL  COMPLETED  RUNNING  PENDING  FAILED  KILLED
    --------------------------------------------------------------------------------
    Map 1 ..........   SUCCEEDED      1          1        0        0       0       0
    --------------------------------------------------------------------------------
    VERTICES: 01/01  [==========================>>] 100%  ELAPSED TIME: 8.28 s
    --------------------------------------------------------------------------------
    Loading data to table nbissoon.hparquet
    Table nbissoon.hparquet stats: [numFiles=1, numRows=6, totalSize=633, rawDataSize=18]
    OK
    Time taken: 22.112 seconds

    hadoop fs -ls '/user/hadoop/parqueth'
    Found 1 items
    -rwxr-xr-x   3 nbissoon hdfs        633 2016-08-08 13:03 /user/hadoop/parqueth/000000_0

The metadata for this Parquet file is shown in Example 1; note that the creator is parquet-mr. Example 3 shows the steps that could be taken from Big SQL to generate the Parquet file; in this case the creator of the Parquet file will not be parquet-mr.

    Example 3:

    CREATE HADOOP TABLE BParquet (
      trans_id int, product varchar(50), trans_date timestamp
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS PARQUETFILE
    LOCATION '/user/hadoop/parquetb'
    DB20000I  The SQL command completed successfully.

    INSERT INTO BPARQUET SELECT * FROM BTEXT
    DB20000I  The SQL command completed successfully.

    hadoop fs -ls '/user/hadoop/parquetb'
    Found 2 items
    -rw-r--r--   3 nbissoon hdfs          0 2016-08-08 12:58 /user/hadoop/parquetb/_SUCCESS
    -rw-r--r--   3 nbissoon hdfs        695 2016-08-08 12:58 /user/hadoop/parquetb/i_n_65536_108621383_data.0.parq
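To verify that the Big SQL-generated file carries no Parquet MR tag, you can rerun the meta check from Example 1 against the new file (the file name is taken from the ls output above); the same check applies to the file produced in Example 4:

    yarn jar parquet-tools-1.6.0-IBM-7.jar meta /user/hadoop/parquetb/i_n_65536_108621383_data.0.parq | grep -i creator
    # the creator field should no longer report parquet-mr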

Approach 3

If you have already used Hive to create the Parquet file, then consider using Big SQL INSERT…SELECT to create a new Parquet file from the existing Parquet MR file (note that LOAD HADOOP will still write the Parquet MR tag into the header of the files). The steps shown in Example 4 are very similar to Example 3, but because the source table was created in Hive there are a few extra steps needed to sync the Big SQL and Hive catalogs so that the table can be accessed from Big SQL. After this step you can validate that there is no Parquet MR tag in the header of the file using the steps in Example 1.

    Example 4:

    CREATE HADOOP TABLE BParquetH (
      trans_id int, product varchar(50), trans_date timestamp
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '|' STORED AS PARQUETFILE
    LOCATION '/user/hadoop/parquetbh'
    DB20000I  The SQL command completed successfully.

    INSERT INTO BPARQUETH SELECT * FROM HPARQUET
    DB21034E  The command was processed as an SQL statement because it was not a
    valid Command Line Processor command.  During SQL processing it returned:
    SQL0204N  "NBISSOON.HPARQUET" is an undefined name.  SQLSTATE=42704

    "call SYSHADOOP.HCAT_SYNC_OBJECTS('NBISSOON','HPARQUET','a','REPLACE','CONTINUE')";

      Result set 1
      --------------

      OBJSCHEMA OBJNAME  OBJATTRIB TYPE STATUS DETAILS
      --------- -------- --------- ---- ------ -------
      NBISSOON  HPARQUET -         T    OK     -

      1 record(s) selected.

      Return Status = 0

    describe table HPARQUET

                                    Data type                     Column
    Column name                     schema    Data type name      Length     Scale Nulls
    ------------------------------- --------- ------------------- ---------- ----- ------
    TRANS_ID                        SYSIBM    INTEGER                      4     0 Yes
    PRODUCT                         SYSIBM    VARCHAR                     50     0 Yes
    TRANS_DATE                      SYSIBM    TIMESTAMP                   12     9 Yes

      3 record(s) selected.

    INSERT INTO BPARQUETH SELECT * FROM HPARQUET
    DB20000I  The SQL command completed successfully.

    hadoop fs -ls '/user/hadoop/parquetbh'
    Found 2 items
    -rw-r--r--   3 nbissoon hdfs          0 2016-08-08 13:21 /user/hadoop/parquetbh/_SUCCESS
    -rw-r--r--   3 nbissoon hdfs        695 2016-08-08 13:21 /user/hadoop/parquetbh/i_n_65536_433315127_data.0.parq

[{"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SSCRJT","label":"IBM Db2 Big SQL"},"Component":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"","Edition":"","Line of Business":{"code":"LOB10","label":"Data and AI"}}]
