impala/testdata/datasets
Gabor Kaszab 7e0feb4a8e IMPALA-11701 Part1: Don't push down predicates to scanner if already applied by Iceberg
We push down predicates to Iceberg, which uses them to filter out files
when computing the results of planFiles(). Using the
FileScanTask.residual() function we can find out whether we have to use
the predicates to further filter the rows of the given files, or whether
Iceberg has already performed all the filtering.
In short, if we filter only on IDENTITY-partitioned columns, then
Iceberg can filter at the file level, and applying these filters again
in Impala would not remove any more rows from the output (assuming that
no partition evolution was performed on the table).
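
As a minimal sketch of this decision (using the Iceberg Java API; the
helper below is illustrative and is not the actual planner code of this
patch):

  import java.io.IOException;

  import org.apache.iceberg.FileScanTask;
  import org.apache.iceberg.Table;
  import org.apache.iceberg.expressions.Expression;
  import org.apache.iceberg.io.CloseableIterable;

  public class ResidualCheck {
    // Returns true iff Iceberg's planFiles() has already applied every
    // predicate, i.e. the residual expression of each file is trivially
    // 'true'. A single non-trivial residual means the scanner still has
    // to evaluate the predicates on that file's rows.
    static boolean icebergAppliedAllPredicates(Table table, Expression pred)
        throws IOException {
      try (CloseableIterable<FileScanTask> tasks =
          table.newScan().filter(pred).planFiles()) {
        for (FileScanTask task : tasks) {
          if (task.residual().op() != Expression.Operation.TRUE) {
            return false;
          }
        }
      }
      return true;
    }
  }

A caller would pass the conjunction of the pushed-down predicates as
'pred' (e.g. built with org.apache.iceberg.expressions.Expressions).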

An additional benefit of not pushing no-op predicates down to the
scanner is that we can potentially materialize fewer slots.
For example:

SELECT count(1) from iceberg_tbl where part_col = 10;

In the above query Iceberg filters the files using the predicate on
the partition column, so there is no need to materialize 'part_col'
in Impala, nor to push down the 'part_col = 10' predicate.

Another benefit comes with count(*) queries: if none of the predicates
have to be pushed down to Impala's scanner for a count(*) query, the
Parquet scanner can take an optimized path where it answers the query
from statistics instead of reading the actual data.

Note that this is an all-or-nothing approach: given N predicates, we
either push all of them down to the scanner or none of them, as the
sketch above illustrates. There is room for improvement in identifying
the subset of the predicates that we still have to push down to the
scanner. However, for this we'd need a mapping between Impala predicates
and the expressions returned by Iceberg's FileScanTask.residual()
function, which would significantly increase the complexity of the
relevant code.

Testing:
  - Some existing tests needed extra care, as they were checking for
    predicates being pushed down to the scanner, but with this patch not
    all of them are pushed down. For these tests I added extra
    predicates so that all of the predicates are pushed down to the
    scanner.
  - Added a new planner test suite for checking how predicate pushdown
    works with Iceberg tables.

Change-Id: Icfa80ce469cecfcfbcd0dcb595a6b04b7027285b
Reviewed-on: http://gerrit.cloudera.org:8080/19534
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
2023-04-21 15:22:17 +00:00

This directory contains Impala test data sets. The directory layout is structured as follows:

datasets/
   <data set>/<data set>_schema_template.sql
   <data set>/<data files SF1>/data files
   <data set>/<data files SF2>/data files

Where SF is the scale factor controlling data size. This allows for scaling the same schema to
different sizes based on the target test environment.
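
For instance, a hypothetical 'example' data set scaled to two sizes could
be laid out as follows (the directory names are illustrative):

datasets/
   example/example_schema_template.sql
   example/sf1/<data files>
   example/sf10/<data files>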

The schema template SQL files have the following format:

  The goal is to provide a single place to define a table + data files
  and have the schema and data load statements generated for each combination
  of file format, compression, etc. The way this works is by specifying how
  to create a 'base table'. The base table can be used to generate tables in
  other file formats by performing the defined INSERT ... SELECT statement.
  Each new table using a file format/compression combination needs to have a
  unique name, so all the statements are parameterized on table name.
  The template file is read by the 'generate-schema-statements.py' script
  to generate all the schema statements for the Impala benchmark tests.

  Each table is defined as a new section in the file with the following format:

  ====
  ---- SECTION NAME
  section contents
  ...
  ---- ANOTHER SECTION
  ... section contents
  ---- ... more sections...

  Note that tables are delimited by '====' and that even the first table in the
  file must include this header line.
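
  For illustration, a minimal table definition using an explicit CREATE
  statement could look like this (the table, columns, and '{table_name}'-style
  substitution variables are hypothetical, chosen only to show the layout):

  ====
  ---- DATASET
  functional
  ---- BASE_TABLE_NAME
  simple_example
  ---- CREATE
  CREATE TABLE IF NOT EXISTS {table_name} (id INT, s STRING);
  ---- LOAD
  LOAD DATA LOCAL INPATH '/tmp/simple_example.txt'
  OVERWRITE INTO TABLE {table_name};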

  The supported section names are:

  DATASET
      Data set name - Used to group sets of tables together
  BASE_TABLE_NAME
      The name of the table within the database
  CREATE
      Explicit CREATE statement used to create the table (executed by Impala)
  CREATE_HIVE
      Same as the above, but will be executed by Hive instead. If specified,
      'CREATE' must not be specified.
  CREATE_KUDU
      Customized CREATE TABLE statement used for Kudu tables, which require
      Kudu-specific syntax.

  COLUMNS
  PARTITION_COLUMNS
  ROW_FORMAT
  HBASE_COLUMN_FAMILIES
  TABLE_PROPERTIES
  HBASE_REGION_SPLITS
      If no explicit CREATE statement is provided, a CREATE statement is
      generated from these sections (see the 'build_table_template' function
      in 'generate-schema-statements.py' for details, and the full example
      at the end of this list).

  ALTER
      A set of ALTER statements to be executed after the table is created
      (typically to add partitions, but may also be used for other settings that
      cannot be specified directly in the CREATE TABLE statement).

      These statements are ignored for HBase and Kudu tables.

  LOAD
      The statement used to load the base (text) form of the table. This is
      typically a LOAD DATA statement.

  DEPENDENT_LOAD
  DEPENDENT_LOAD_KUDU
  DEPENDENT_LOAD_HIVE
  DEPENDENT_LOAD_ACID
      Statements to be executed during the "dependent load" phase. These statements
      are run after the initial (base table) load is complete.

  HIVE_MAJOR_VERSION
      The required major version of Hive for this table. If the major
      version of Hive at runtime does not exactly match the version
      specified in this section, the table will be skipped.

      NOTE: this is not a _minimum_ version -- if HIVE_MAJOR_VERSION
      specifies '2', the table will _not_ be loaded/created on Hive 3.
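
  As a fuller illustration, here is a hypothetical table that relies on the
  generated CREATE statement and exercises the COLUMNS, PARTITION_COLUMNS,
  ALTER, and LOAD sections (all names, values, and paths are made up):

  ====
  ---- DATASET
  functional
  ---- BASE_TABLE_NAME
  example_part_tbl
  ---- COLUMNS
  id int
  name string
  ---- PARTITION_COLUMNS
  year int
  ---- ALTER
  ALTER TABLE {table_name} ADD IF NOT EXISTS PARTITION (year=2023);
  ---- LOAD
  LOAD DATA LOCAL INPATH '/tmp/example_part_tbl/2023.txt'
  OVERWRITE INTO TABLE {table_name} PARTITION (year=2023);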