The objective of this series is to explore performance for BigSQL Federation using practical examples.

This series is divided into two parts.

  • Part 1 will look at function performance with emphasis on pushdown analysis
  • Part 2 will look at performance for queries that join nicknames with local tables and/or other nicknames

You are reading Part 1.

Before you begin

All the commands and plans shown were generated using jsqsh as a command-line interface, on an environment running HDP 2.6.2 and BigSQL v5.0.3.
IP addresses and hostnames were masked using substitution.
We will not cover setting up federation, but the DDL used is shown. For information on how to set up a particular data source, please read the “Federation > Configure data sources” section of the Knowledge Center; this article might also be helpful.


Following is a brief reminder of some terminology you should already be familiar with:

  • Nickname: an entry in the local catalog representing a remote table
  • Remote table: a table stored at a remote instance
  • Server: an object in the local catalog representing a remote instance
  • Wrapper: an object in the local catalog holding the information required to connect to a type of DBMS

      Federation setup

      In the examples for this section we are using a PostgreSQL v 9.2 server as our data source.

      [][bigsql] 1> create server post_1 type postgresql version 9.2 options (host '', port '5432', dbname 'feddb');
      0 rows affected (total: 0.010s)
      [][bigsql] 1> create user mapping for bigsql server post_1 options (remote_authid 'feduser', remote_password 'password');
      0 rows affected (total: 0.010s)
      [][bigsql] 1> create nickname post_low_var for post_1."low_var";
      0 rows affected (total: 0.055s)

      Notice the double quotes around the remote table name when creating the nickname. They are required because PostgreSQL always catalogs table names in lower case. If we hadn’t quoted the remote table name, it would have been folded to upper case as LOW_VAR, which does not exist in the remote catalog.

      The role of the optimizer in federated queries

      As with any other query in BigSQL, the optimizer is used on federated queries. The unique factor for federation is that queries can be partially or completely run at the remote data source.

      For those not very familiar with the BigSQL optimizer, its responsibilities include breaking a query into subsections and generating a series of alternative execution plans, each with a cost. The plan with the lowest cost is then chosen for execution.

      Usually, evaluating filters and aggregate functions at the remote source is more cost effective, but that is not always the case.

      When generating the different subsections for a federated query, and later, when calculating the partial and total costs of the query, several factors come into consideration, listed here in no particular order:

      • Does the remote data source have an equivalent function?
      • How much data is in the table and how much of it will be returned?
      • How fast is the network connection between our two servers?
      • What are the remote server specifications?

      Does the remote data source have an equivalent function?

      Function equivalence has two aspects that need to be considered: the actual functionality, and the arguments/return types.

      As you would expect, a query using a function that does not exist at the data source and has no equivalent there will not be run on the remote server. We refer to these equivalences as function mappings.
      For the data sources listed as supported in the Knowledge Center, default function mappings exist. For example, there is a default function mapping between BigSQL’s POSSTR and Microsoft SQL Server’s CHARINDEX function.

      Generally speaking, if the behavior of the equivalent remote function is different, the query will not be sent. For example, Oracle MySQL uses a case-insensitive collation. In a case-insensitive collation ‘a’ = ‘A’, which means functions built on comparisons, like MAX or COUNT(DISTINCT), behave differently and so will not be sent to the remote server. There are very few documented exceptions to this rule. For example, in MariaDB servers the division of two integer columns results in a real rather than an integer (i.e., in MariaDB 1/2 = 0.5), but if this operator is used in a filter it will still be sent to the remote server.

      To create a function mapping that does not exist by default, you can use the CREATE FUNCTION MAPPING statement. The same statement also allows you to disable a default mapping by setting option DISABLE to ‘Y’.
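      As a sketch, the two variants of the statement look like this; the mapping names and signatures here are illustrative, not taken from this article’s examples:

      [][bigsql] 1> create function mapping my_ucase_map for SYSFUN.UCASE(VARCHAR(200)) server post_1 options (remote_name 'upper');
      [][bigsql] 1> create function mapping no_posstr for SYSIBM.POSSTR(VARCHAR(200), VARCHAR(200)) server post_1 options (disable 'Y');

      The first creates a new mapping to a remote function named upper; the second disables the default POSSTR mapping for this server.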

      How much data is in the table and how much of it will be returned?

      Statistics of the remote table, like row count and max and min values, are very important. This information, when put together with the next two considerations – network speed and remote server specifications – allows for a very accurate calculation of the cost of sending the query versus fetching all the table rows and applying the filter locally.

      Nickname statistics are collected by the NNSTAT procedure in the SYSPROC schema. This procedure can be called manually, but it is also run as part of automatic runstats, which is set to ON by default in the BIGSQL database.

      How fast is the network connection between our two servers?

      Network connection speed is one of the data points used when calculating the cost of a plan. It is quite intuitive, but let’s look at a simplified(*) fictitious example.

      Assume a Netezza server whose network connection has a known line speed of 1MB/s. That will be the maximum speed of communication even if BigSQL is connected to a faster network. The table the select statement runs on has 200 BIGINT columns. The BIGINT type has a 63 bit precision, which means each row is roughly 1.5KB (200*63/8/1024). On such a system the maximum number of rows that can be transferred per second is 682 (1*1024/1.5).

      Assume the selected table has a cardinality of 10 thousand rows.

      On a select * from table, all the rows need to be fetched. Looking just at the cost of transferring the data, it will take ~15 seconds (10000/682) for the data to be returned.

      If we applied a filter, for example select * from table where column1 > 2000, some rows should not be returned. Let’s say this particular filter returns 8000 rows. For simplicity’s sake, let’s assume applying the filter to 10 thousand rows takes the same constant amount of time, represented by the variable ‘p’, on both BigSQL and Netezza. If we don’t send the filter, then the cost of data transfer is 15s (calculated in the last paragraph), adding up to a total cost of 15s + p. If we do send the filter, then the cost is p + 12s (8000/682). In this case, the best performance is achieved by having Netezza apply the filter before sending the data to BigSQL.
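      As a quick sanity check, the arithmetic above can be evaluated directly with a VALUES statement (the expressions simply restate the formulas from the example):

      [][bigsql] 1> values (dec(200*63/8.0/1024, 5, 2), dec(10000/682.0, 5, 2), dec(8000/682.0, 5, 2));

      This returns roughly 1.53 (KB per row), 14.66 and 11.73 (seconds), matching the ~15s and ~12s figures used above.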

      When a server is created, a default network speed of 2MB/s is defined by option COMM_RATE. If we know this value is different, we can change this setting. It is important to remember that just as accurate information about the remote server will improve performance, wrong information might worsen it.
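      For instance, for the fictitious 1MB/s Netezza link from the example above, we could inform the optimizer like this (the server name net_1 is hypothetical):

      [][bigsql] 1> alter server net_1 options (add COMM_RATE '1');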

      (*) simplified because we are not taking into account things like network latency, TCP/IP packaging overhead, shared connections, etc.

      What are the remote server specifications?

      Server specifications like CPU speed and IO speed both affect the processing capabilities of a computer. When calculating the cost of alternative plans for federation, this information is used alongside the network cost to decide where queries should be run.

      By default, when creating a server, we assume the CPU speed and IO speed to be the same as the BigSQL server’s. If that is not the case, we can use options CPU_RATIO and IO_RATIO, respectively, to set the comparative values between the BigSQL server and the remote server. Values greater than 1 mean the BigSQL server is faster than the remote server. For example, setting IO_RATIO to 2 indicates the BigSQL IO is two times faster than the remote server’s. Conversely, values less than 1 indicate the BigSQL server is slower by that magnitude. Just like the network speed setting, incorrect values might cause performance degradation.
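      As an illustrative sketch (the values here are made up): if we knew our BigSQL server’s CPUs were twice as fast as the remote server’s, but the remote I/O subsystem was twice as fast as ours, we could tell the optimizer so:

      [][bigsql] 1> alter server post_1 options (add CPU_RATIO '2.0', add IO_RATIO '0.5');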

      Practical example

      Now that you understand the theory behind it, let’s get back to our practical example.

      Catalog Review

      The server type we are connecting to uses the default ODBC wrapper. Because we did not specify a wrapper in the create server statement, it implicitly created the default ODBC wrapper.

      [][bigsql] 1> select * from syscat.servers;
      | ODBC | POST_1 | POSTGRESQL | 9.2 | [NULL] |
      1 rows in results(first row: 0.028s; total: 0.031s)

      We will be looking at function performance, so we need to know the server options. It is important to remember that options not specified in the create server statement do not appear in the SERVEROPTIONS catalog view and were set to their default values. PUSHDOWN is a particularly important option when looking at function performance, as it tells the server whether function pushdown should be attempted. The default value for PUSHDOWN is ‘Y’.

      [][bigsql] 1> select * from syscat.serveroptions order by servername;
      | [NULL] | POST_1 | [NULL] | [NULL] | 2018-04-04 08:31:48.500 | HOST | | SQL180404083148500 | [NULL] |
      | [NULL] | POST_1 | [NULL] | [NULL] | 2018-04-04 08:31:48.501 | PORT | 5432 | SQL180404083148501 | [NULL] |
      | [NULL] | POST_1 | [NULL] | [NULL] | 2018-04-04 08:31:48.502 | DBNAME | feddb | SQL180404083148502 | [NULL] |
      6 rows in results(first row: 0.007s; total: 0.010s)

      The table

      [][bigsql] 1> \describe post_low_var
      | BIGSQL | ID | INTEGER | 10 | 0 | YES |
      | BIGSQL | SINT | SMALLINT | 5 | 0 | YES |
      | BIGSQL | DBL | DOUBLE | 53 | [NULL] | YES |
      | BIGSQL | COL_VCH | VARCHAR | 10 | [NULL] | YES |
      | BIGSQL | COL_VCH2 | VARCHAR | 200 | [NULL] | YES |

      [][bigsql] 1> select count(*) from post_low_var;
      | 1 |
      | 100 |
      1 row in results(first row: 0.007s; total: 0.010s)

      [][bigsql] 1> select * from post_low_var limit 10;
      | ID | SINT | DBL | COL_VCH | COL_VCH2 |
      | 1 | 1 | 1.00000 | 1 | 1 |
      | 2 | 2 | 2.00000 | 2 | 2 |
      | 3 | 3 | 3.00000 | 3 | 3 |
      | 4 | 4 | 4.00000 | 4 | 4 |
      | 5 | 5 | 5.00000 | 5 | 5 |
      | 6 | 6 | 6.00000 | 6 | 6 |
      | 7 | 7 | 7.00000 | 7 | 7 |
      | 8 | 8 | 8.00000 | 8 | 8 |
      | 9 | 9 | 9.00000 | 9 | 9 |
      | 10 | 10 | 10.00000 | 10 | 10 |
      10 rows in results(first row: 0.025s; total: 0.032s)

      The table we are using is very simple: it has 100 unique rows with values from 1 to 100.

      The query

      The function we will be looking at is the scalar function MOD. MOD calculates the remainder of dividing the first argument by the second. For our data, the following query will return the same mod value for all rows.

      [][bigsql] 1> select mod(dbl,sint) from post_low_var where mod(dbl,sint)=0 limit 10;
      | 1 |
      | .00000 |
      | .00000 |
      | .00000 |
      | .00000 |
      | .00000 |
      | .00000 |
      | .00000 |
      | .00000 |
      | .00000 |
      | .00000 |
      10 rows in results(first row: 0.007s; total: 0.009s)

      Pushdown Analysis

      We will check if any part of the query was executed at the remote server by looking at the query’s execution plan.

      Retrieve the plan

      In BigSQL, the EXPLAIN statement allows us to see the execution plan chosen by the optimizer. To run explain on a query, we first created the explain tables using the SYSINSTALLOBJECTS procedure in the SYSPROC schema.
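      The call itself did not survive formatting; creating the explain tables is typically done like this:

      [][bigsql] 1> call sysproc.sysinstallobjects('EXPLAIN', 'C', CAST(NULL AS VARCHAR(128)), CAST(NULL AS VARCHAR(128)));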

      ok. (total: 3.589s)

      After that we can request the plan for our query

      [][bigsql] 1> explain plan for select mod(dbl,sint) from post_low_var where mod(dbl,sint)=0 limit 10;
      0 rows affected (total: 0.017s)

      Finally, we will output the plan using the db2exfmt utility. It is also possible to directly query the explain tables, which were created in the SYSTOOLS schema, but I’ve always found it easier to look at the formatted plan.

      [bigsql@jabs1 ~]$ db2exfmt -d bigsql -o mod.plan -1
      DB2 Universal Database Version 11.1, 5622-044 (c) Copyright IBM Corp. 1991, 2016
      Licensed Material - Program Property of IBM
      IBM DATABASE 2 Explain Table Format Tool

      Connecting to the Database.
      Connect to Database Successful.
      Binding package - Bind was Successful
      Using SYSTOOLS schema for Explain tables.
      Output is in mod.plan.
      Executing Connect Reset -- Connect Reset was Successful.

      The options passed tell the utility to connect to the BIGSQL database, get the latest explain requested and print the output to file mod.plan.

      Analyze the plan

      There are three sections in this file we want to look at. Sections will have a title that looks like this:

      Original Statement:

      The sections we want to look at are Extended Diagnostic Information, Access Plan, and finally Plan Details.

      Extended Diagnostic Information

      Extended Diagnostic Information:

      Diagnostic Identifier: 1
      Diagnostic Details: EXP0020W Table has no statistics. The table
      "BIGSQL "."POST_LOW_VAR" has not had runstats run
      on it. This may result in a sub-optimal access
      plan and poor performance.

      This section of the plan holds information like whether statistics have been collected. In this particular output we see that the nickname we are using has not had its statistics collected. Missing statistics do not guarantee that pushdown will not happen, but to obtain the best performance possible, statistics should always be collected.

      Access Plan

       Access Plan:
        Total Cost:             54.4243
        Query Degree:           1

      (operator tree: each node shows its estimated row count on top and its operator number in parentheses below; node (3) is the SHIP operator)

      There are a couple of noteworthy details in this section. One is the row count, which can be found at the top of each node in the tree. Because we looked at the Extended Diagnostic Information, we know we do not have statistics for this table, so it is not surprising that the row count is incorrect. If statistics had been collected, these values would be the real values, or a very good approximation in the case of the filters.

      The next piece of information we need from this section is the number in parentheses below the SHIP node. The SHIP node is the operator that is actually run at the remote data source, and the number in parentheses points us to the correct item in the Plan Details section.

      Plan Details

      3) SHIP  : (Ship)
                Cumulative Total Cost:          150.369
                Cumulative CPU Cost:            1.83932e+06
                Cumulative I/O Cost:            6
                Cumulative Re-Total Cost:       0.346031
                Cumulative Re-CPU Cost:         1.72372e+06
                Cumulative Re-I/O Cost:         0
                Cumulative First Row Cost:      25.0105
                Cumulative Comm Cost:           0
                Cumulative First Comm Cost:     0
                Estimated Bufferpool Buffers:   6
                Remote Total Cost:      533.391
                Remote Communication Cost:      529.438

                CSERQY  : (Remote common subexpression)
                DSTSEVER: (Destination (ship to) server)
                        - (NULL).
                RMTQTXT : (Remote statement)
                        SELECT A0."dbl", A0."sint" FROM "low_var" A0

      The Plan Details section has a number of items; the item we are looking at is the SHIP item, which we know is number 3 from the Access Plan section.

      Inside this item we find a couple of subsections. In the output shown, Input Streams and Output Streams have been suppressed because they are not very relevant when looking at function pushdown. Those two subsections hold information on the columns coming in and out of this node in the tree.

      The subsection we want to look at is “Arguments”, specifically the RMTQTXT argument. This argument is the statement that will be sent to the remote server. If you were to look at your remote server audit log, this is the query you would see. In this case we can see that we are doing a plain select on the columns we use instead of using the function. This means we will be fetching all the rows and processing the function on the BigSQL side.

      There are two main reasons why this could be happening: the optimizer calculated that sending the filter to the remote server would be more costly, or the function does not have a default mapping.

      Function is more expensive if sent

      By setting server option DB2_MAXIMAL_PUSHDOWN to ‘Y’, we can force the optimizer to choose an execution plan that pushes a function execution to the remote server as long as it is mapped, regardless of cost. This option will not give you the best performance; it should only be used for investigation purposes, like we are doing here.

      To make sure we have accurate statistics we will also update the statistics on our nickname.

      [][bigsql] 1> alter server post_1 options (add DB2_MAXIMAL_PUSHDOWN 'Y');
      0 rows affected (total: 0.019s)
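      The NNSTAT call itself did not survive formatting; based on the description below, it would have looked something like this (the exact argument list varies by Db2 version, so treat this as a sketch):

      [][bigsql] 1> call sysproc.nnstat('POST_1', 'BIGSQL', 'POST_LOW_VAR', NULL, NULL, 2, NULL, ?);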

      ok. (total: 0.752s)

      | Return Code | Param #2 |
      | 0 | Successful |

      In this NNSTAT call we defined almost all values so it would be easier to see what each of the arguments is. The two NULL values are the table indexes and the log name, respectively. In any of the other arguments, NULL means all. To successfully run this procedure you really only need to specify one of the first three arguments (server name, local schema, or nickname).

      In this call we used method 2 as the collection method, because the remote server does not have statistics on this table. If the remote server had statistics, we could have used method 0. Method 0 requests the remote statistics instead of calculating them by running queries like max and min against the table.

      Let’s regenerate the plan. As expected, our Extended Diagnostic Information is now empty:

      Extended Diagnostic Information:

      No extended Diagnostic Information for this statement.

      Our remote statement, though, is still the same:

      RMTQTXT : (Remote statement)
      SELECT A0."dbl", A0."sint" FROM "low_var" A0

      No default function mapping

      In cases like this, when the function is still not pushed down after we set DB2_MAXIMAL_PUSHDOWN to ‘Y’, it is likely that the function we are using does not have a default mapping. When that is the case, if we know there is a function with equivalent behavior (this can be a system or user function), we can create a function mapping. PostgreSQL does have a mod function, so we will create a mapping between our local mod and the PostgreSQL mod function.

      [][bigsql] 1> create function mapping mymod_mapping for SYSIBM.mod(DOUBLE,SMALLINT) server post_1 options (remote_name 'mod')
      [][bigsql] 2> go
      0 rows affected (total: 0.011s)

      Now when we generate the execution plan, we can see the full query will be sent to the remote data source:

      RMTQTXT : (Remote statement)
      SELECT A0.C0 FROM (SELECT mod(A1."dbl", A1."sint") C0 FROM "low_var" A1 WHERE (mod(A1."dbl", A1."sint") = +0.00000000000000E+000) FETCH FIRST 10 ROWS ONLY) A0

      If at this point you are wondering ‘Why was this function not mapped in the first place?’, the reason is that PostgreSQL’s implementation of the mod function does not support double precision as an argument. In fact, if we try to run our original query with this function mapping in place, this is the result we get:

      [][bigsql] 1> select mod(dbl,sint) from post_low_var where mod(dbl,sint)=0 limit 10;
      SQL Exception(s) Encountered:
      [State: 560BD][Code: -1822]: Unexpected error code "42000" received from data source "POST_1". Associated text and tokens are "ERROR: function mod(double precision, smallint) does not ".. SQLCODE=-1822, SQLSTATE=560BD, DRIVER=3.72.24

      As a final step let’s go ahead and drop this mapping and remove the DB2_MAXIMAL_PUSHDOWN option from our server.

      [][bigsql] 1> drop function mapping mymod_mapping;
      0 rows affected (total: 0.016s)

      [][bigsql] 1> alter server post_1 options (drop DB2_MAXIMAL_PUSHDOWN);
      0 rows affected (total: 0.010s)


      In Part 1 of this two-part series we looked at performance for functions. In particular, we went through the exercise of looking at pushdown for a function, and creating a function mapping.

      In our example, it turned out the mapping we created had the unintended consequence of breaking functionality. This should emphasize that changing or creating function mappings, type mappings, or server options are operations that should be very carefully thought out, or they might have the opposite of the intended effect.

      That said, this should not discourage you. For functions that do not have a default mapping, if we have a user-defined function that implements the same functionality, it might be worth creating a mapping, as this could lead to increased performance. The same can be said about correctly informing the optimizer about the remote server, which helps it come up with the most cost-effective plan.

      In Part 2 we will look at joining nicknames among themselves and with Hadoop tables, and see how Federation can take advantage of BigSQL worker nodes.
