IMPALA-9482: Support for BINARY columns (commit 7ca11dfc7f by Csaba Ringhofer, 2022-08-19)
This patch adds support for BINARY columns for all table formats with
the exception of Kudu.

In Hive the main difference between STRING and BINARY is that STRING is
assumed to be UTF8 encoded, while BINARY can be any byte array.
Some other differences in Hive:
- BINARY can only be cast from/to STRING
- Only a small subset of built-in STRING functions support BINARY.
- In several file formats (e.g. text) BINARY is base64 encoded.
- No NDV is calculated during COMPUTE STATISTICS.
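
For illustration, a minimal SQL sketch of the explicit conversions (the table
and column names here are hypothetical):

  SELECT CAST(bin_col AS STRING) FROM sample_tbl;  -- BINARY -> STRING
  SELECT CAST(str_col AS BINARY) FROM sample_tbl;  -- STRING -> BINARY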

As Impala doesn't treat STRINGs as UTF8, BINARY and STRING become nearly
identical, especially from the backend's perspective. For this reason,
BINARY is implemented a bit differently compared to other types:
while the frontend treats STRING and BINARY as two separate types, most
of the backend uses PrimitiveType::TYPE_STRING for BINARY too, e.g.
in SlotDesc. Only the following parts of the backend need to differentiate
between STRING and BINARY:
- table scanners
- table writers
- HS2/Beeswax service
These parts have access to column metadata, which allows special
handling to be added for BINARY.

Only a few built-in functions are allowed for BINARY at the moment:
- length
- min/max/count
- coalesce and similar "selector" functions
Other STRING functions can only be used by casting to STRING first.
Adding support for more of these functions is easy: the BINARY type
simply has to be "connected" to the existing STRING function's
signature. Functions whose result depends on utf8_mode need to ensure
that with BINARY they always behave as if utf8_mode=0 (for example,
length() is mapped to bytes(), because length() counts UTF-8 characters
when utf8_mode=1).
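
A hedged SQL sketch of the distinction (column and table names are made up):

  -- length() on BINARY always counts bytes, regardless of utf8_mode.
  SELECT length(bin_col), min(bin_col), count(bin_col) FROM sample_tbl;
  -- Other STRING functions require an explicit cast to STRING first.
  SELECT upper(CAST(bin_col AS STRING)) FROM sample_tbl;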

All kinds of UDFs (native, Hive legacy, Hive generic) support BINARY,
though in the case of legacy Hive UDFs it is only supported if the argument
and return types are set explicitly, to ensure backward compatibility.
See IMPALA-11340 for details.

The original plan was to behave as closely to Hive as possible, but I
realized that Hive has more relaxed casting rules than Impala, which
led to STRING<->BINARY casts being necessary in more cases in Impala.
This was needed to disallow passing a BINARY to functions that expect
a STRING argument. An example of the difference is that in
INSERT ... VALUES () string literals need to be explicitly cast to
BINARY, while this is not needed in Hive.
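
A sketch of the difference, assuming a table with a single BINARY column
(the table name and literal are illustrative only):

  -- Impala: the string literal must be cast explicitly.
  INSERT INTO binary_tbl VALUES (CAST('somebytes' AS BINARY));
  -- Hive accepts the bare string literal for a BINARY column:
  -- INSERT INTO binary_tbl VALUES ('somebytes');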

Testing:
- Added functional.binary_tbl for all file formats (except Kudu)
  to test scanning.
- Removed functional.unsupported_types and related tests, as now
  Impala supports all (non-complex) types that Hive does.
- Added FE/EE tests, mainly based on the ones added for the DATE type.

Change-Id: I36861a9ca6c2047b0d76862507c86f7f153bc582
Reviewed-on: http://gerrit.cloudera.org:8080/16066
Reviewed-by: Quanlong Huang <huangquanlong@gmail.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>

This directory contains Impala test data sets. The directory layout is structured as follows:

datasets/
   <data set>/<data set>_schema_template.sql
   <data set>/<data files SF1>/data files
   <data set>/<data files SF2>/data files

Where SF is the scale factor controlling data size. This allows for scaling the same schema to
different sizes based on the target test environment.
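
As an illustration, following this layout the 'functional' data set (the one
containing tables such as functional.binary_tbl mentioned above) keeps its
schema template at:

   datasets/functional/functional_schema_template.sql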

The schema template SQL files have the following format:

  The goal is to provide a single place to define a table + data files
  and have the schema and data load statements generated for each combination of file
  format, compression, etc. The way this works is by specifying how to create a
  'base table'. The base table can be used to generate tables in other file formats
  by performing the defined INSERT / SELECT INTO statement. Each new table using the
  file format/compression combination needs to have a unique name, so all the
  statements are parameterized on table name.
  The template file is read in by the 'generate-schema-statements.py' script
  to generate all of the schema statements for the Impala benchmark tests.

  Each table is defined as a new section in the file with the following format:

  ====
  ---- SECTION NAME
  section contents
  ...
  ---- ANOTHER SECTION
  ... section contents
  ---- ... more sections...

  Note that tables are delimited by '====' and that even the first table in the
  file must include this header line.
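
  For illustration, a minimal table entry might look like the following.
  The data set, table, column and path names are made up, and the
  {table_name} placeholder is only meant to suggest how the generated
  statements are parameterized on table name:

  ====
  ---- DATASET
  sampleds
  ---- BASE_TABLE_NAME
  sample_tbl
  ---- COLUMNS
  id INT
  name STRING
  ---- ROW_FORMAT
  DELIMITED FIELDS TERMINATED BY ','
  ---- LOAD
  LOAD DATA LOCAL INPATH '/path/to/sample_tbl/data.csv'
  OVERWRITE INTO TABLE {table_name};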

  The supported section names are:

  DATASET
      Data set name - Used to group sets of tables together
  BASE_TABLE_NAME
      The name of the table within the database
  CREATE
      Explicit CREATE statement used to create the table (executed by Impala)
  CREATE_HIVE
      Same as the above, but will be executed by Hive instead. If specified,
      'CREATE' must not be specified.
  CREATE_KUDU
      Customized CREATE TABLE statement used to create the table when
      Kudu-specific syntax is required.

  COLUMNS
  PARTITION_COLUMNS
  ROW_FORMAT
  HBASE_COLUMN_FAMILIES
  TABLE_PROPERTIES
  HBASE_REGION_SPLITS
      If no explicit CREATE statement is provided, a CREATE statement is generated
      from these sections (see the 'build_table_template' function in
      'generate-schema-statements.py' for details).

  ALTER
      A set of ALTER statements to be executed after the table is created
      (typically to add partitions, but may also be used for other settings that
      cannot be specified directly in the CREATE TABLE statement).

      These statements are ignored for HBase and Kudu tables.

  LOAD
      The statement used to load the base (text) form of the table. This is
      typically a LOAD DATA statement.

  DEPENDENT_LOAD
  DEPENDENT_LOAD_KUDU
  DEPENDENT_LOAD_HIVE
  DEPENDENT_LOAD_ACID
      Statements to be executed during the "dependent load" phase. These statements
      are run after the initial (base table) load is complete.

  HIVE_MAJOR_VERSION
      The required major version of Hive for this table. If the major version
      of Hive at runtime does not exactly match the version specified in this
      section, the table will be skipped.

      NOTE: this is not a _minimum_ version -- if HIVE_MAJOR_VERSION specifies
      '2', the table will _not_ be loaded/created on Hive 3.