IMPALA-14092 Part2: Support querying of paimon data table via JNI

This patch implements querying of Paimon data tables through a
JNI-based scanner.

Features implemented:
- Column pruning.
Partition pruning and predicate pushdown will be submitted
as the third part of this patch series.

We implement this by treating the Paimon table as a normal
unpartitioned table. When querying a Paimon table:
- PaimonScanNode decides which Paimon splits need to be scanned,
  and then transfers the splits to the BE for the JNI-based scan.

- We also collect the columns that need to be scanned and pass
  them to the scanner for column pruning. This is implemented by
  passing the field ids of the columns to the BE instead of the
  column positions, so that schema evolution is supported.

- In the original implementation, PaimonJniScanner passed the
  Paimon row object directly to the BE and called the
  corresponding Paimon row field accessors, i.e. Java methods
  that convert row fields into Impala row-batch tuples. We found
  this slow due to the overhead of JVM method calls.
  To minimize the overhead, we reworked the implementation:
  PaimonJniScanner now converts the Paimon row batches into
  Arrow record batches, which store their data in the off-heap
  region of the Impala JVM, and passes the Arrow off-heap record
  batch memory pointer to the BE.
  The BE PaimonJniScanNode reads the data directly from the JVM
  off-heap region and converts the Arrow record batch into an
  Impala row batch.

  Benchmarks show the new implementation is roughly 2x faster
  than the original one.

  The lifecycle of an Arrow record batch is as follows: the
  batch is generated in the FE and passed to the BE. Once the
  record batch has been imported into the BE successfully, the
  BE is in charge of freeing it.
  There are two free paths: the normal path and the exception
  path. On the normal path, when the Arrow batch has been fully
  consumed, the BE calls JNI to fetch the next batch; in this
  case the current batch is freed automatically.
  The exception path is taken when the query is cancelled or
  memory allocation fails. In these corner cases, the Arrow
  batch is freed in the close method if it has not been fully
  consumed by the BE.
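
The field-id-based column pruning can be sketched as follows.
This is an illustrative, self-contained example only; the class
and method names are hypothetical, not Impala's actual code.
The key point is that projected columns are identified by stable
field ids, which are resolved against the current schema, so the
mapping survives column reordering or drops from schema
evolution:

```java
import java.util.*;

// Hypothetical sketch: resolve the field ids requested by the BE
// to positions in the current Paimon schema. Ids of columns that
// were dropped by schema evolution simply resolve to no position.
class FieldIdProjection {
    public static int[] resolvePositions(List<Integer> schemaFieldIds,
                                         List<Integer> projectedFieldIds) {
        // Field id -> position in the current schema.
        Map<Integer, Integer> idToPos = new HashMap<>();
        for (int pos = 0; pos < schemaFieldIds.size(); pos++) {
            idToPos.put(schemaFieldIds.get(pos), pos);
        }
        return projectedFieldIds.stream()
                .filter(idToPos::containsKey)
                .mapToInt(idToPos::get)
                .toArray();
    }

    public static void main(String[] args) {
        // Schema after evolution: ids are stable even if positions changed.
        List<Integer> schema = Arrays.asList(3, 1, 7); // ids at positions 0,1,2
        List<Integer> wanted = Arrays.asList(1, 7);
        System.out.println(Arrays.toString(resolvePositions(schema, wanted)));
        // prints [1, 2]
    }
}
```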
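
The two free paths of the batch lifecycle can be sketched as a
small state machine; again this is only an illustrative sketch
with hypothetical names, not Impala's actual code:

```java
// Hypothetical sketch of the ownership rule: once a batch has been
// imported by the BE, the BE frees it either implicitly when fully
// consumed (the next fetch releases it) or explicitly in close()
// on cancellation / allocation failure.
class BatchLifecycle {
    private boolean imported = false; // BE successfully imported a batch
    private boolean consumed = false; // BE read every row of that batch
    private int freed = 0;            // number of batches released

    // Normal path: fetching the next batch releases the fully
    // consumed current one.
    public void fetchNext() {
        if (imported && consumed) freed++;
        imported = true;
        consumed = false;
    }

    public void consumeAll() { consumed = true; }

    // Exception path: close() frees a batch the BE did not finish
    // consuming (query cancelled, or tuple memory allocation failed).
    public void close() {
        if (imported && !consumed) freed++;
        imported = false;
    }

    public int freedCount() { return freed; }
}
```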

Currently supported Impala data types for querying include:
- BOOLEAN
- TINYINT
- SMALLINT
- INTEGER
- BIGINT
- FLOAT
- DOUBLE
- STRING
- DECIMAL(P,S)
- TIMESTAMP
- CHAR(N)
- VARCHAR(N)
- BINARY
- DATE
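
As a hypothetical illustration only (the real conversion lives in
the scanner code; the names below are not Impala's), membership in
the supported set can be checked by base type name, with
parameterized types such as DECIMAL(P,S) or VARCHAR(N) reduced to
their base:

```java
import java.util.*;

// Illustrative helper: checks whether a Paimon type string falls
// into the supported set listed above.
class PaimonTypeSupport {
    private static final Set<String> SUPPORTED = new HashSet<>(Arrays.asList(
        "BOOLEAN", "TINYINT", "SMALLINT", "INTEGER", "BIGINT", "FLOAT",
        "DOUBLE", "STRING", "DECIMAL", "TIMESTAMP", "CHAR", "VARCHAR",
        "BINARY", "DATE"));

    // Parameterized types are checked by their base name, i.e. the
    // part before the '(' if one is present.
    public static boolean isSupported(String paimonType) {
        int paren = paimonType.indexOf('(');
        String base = paren < 0 ? paimonType : paimonType.substring(0, paren);
        return SUPPORTED.contains(base.trim().toUpperCase(Locale.ROOT));
    }
}
```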

TODO:
    - Patches pending submission:
        - Support tpcds/tpch data loading
          for Paimon data tables.
        - Virtual column support for querying
          Paimon data tables.
        - Query support with time travel.
        - Query support for Paimon meta tables.
    - WIP:
        - Snapshot incremental read.
        - Complex type query support.
        - Native Paimon table scanner, instead of
          the JNI-based one.

Testing:
    - Create test tables in functional_schema_template.sql.
    - Add TestPaimonScannerWithLimit in test_scanners.py.
    - Add test_paimon_query in test_paimon.py.
    - The tpcds/tpch tests for Paimon tables already pass. The
      test table data is currently generated by Spark, because
      Impala cannot load Paimon data yet and Hive does not
      support generating Paimon tables for dynamically
      partitioned tables. We plan to submit a separate patch for
      tpcds/tpch data loading and the associated tpcds/tpch
      query tests.
    - JVM off-heap memory leak tests: ran looped tpch tests for
      one day; no obvious off-heap memory increase was observed,
      and off-heap memory usage stayed within 10 MB.

Change-Id: Ie679a89a8cc21d52b583422336b9f747bdf37384
Reviewed-on: http://gerrit.cloudera.org:8080/23613
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Reviewed-by: Zoltan Borok-Nagy <boroknagyz@cloudera.com>
Reviewed-by: Riza Suminto <riza.suminto@cloudera.com>
Author: jichen0919
Date: 2025-10-31 12:32:48 +08:00
Committed-by: Riza Suminto
Parent: f8d21f34bc
Commit: 7e29ac23da
130 changed files with 3604 additions and 52 deletions


@@ -334,6 +334,11 @@ struct TColumn {
// Key and value field id for Iceberg column with Map type.
22: optional i32 iceberg_field_map_key_id
23: optional i32 iceberg_field_map_value_id
// The following are Paimon-specific column properties;
// the existing iceberg_field_id, is_key, and is_nullable
// fields are reused for Paimon tables.
26: optional bool is_paimon_column
}
// Represents an HDFS file in a partition.


@@ -57,6 +57,7 @@ enum TPlanNodeType {
TUPLE_CACHE_NODE = 20
SYSTEM_TABLE_SCAN_NODE = 21
ICEBERG_MERGE_NODE = 22
PAIMON_SCAN_NODE = 23
}
// phases of an execution node
@@ -417,6 +418,28 @@ struct TSystemTableScanNode {
2: required CatalogObjects.TSystemTableName table_name
}
struct TPaimonJniScanParam {
// Serialized paimon api table object.
1: required binary paimon_table_obj
// Thrift serialized splits for the Jni Scanner.
2: required list<binary> splits
// Field id list for projection.
3: required list<i32> projection
// mem limit from backend.
// not set means no limit.
4: optional i64 mem_limit_bytes
// arrow batch size
5: optional i32 batch_size
// fragment id
6: Types.TUniqueId fragment_id;
}
struct TPaimonScanNode {
1: required Types.TTupleId tuple_id
2: required binary paimon_table_obj
3: required string table_name;
}
struct TEqJoinCondition {
// left-hand side of "<a> = <b>"
1: required Exprs.TExpr left;
@@ -838,6 +861,7 @@ struct TPlanNode {
28: optional TTupleCacheNode tuple_cache_node
29: optional TSystemTableScanNode system_table_scan_node
31: optional TPaimonScanNode paimon_table_scan_node
}
// A flattened representation of a tree of PlanNodes, obtained by depth-first


@@ -77,8 +77,10 @@ struct TScalarType {
struct TStructField {
1: required string name
2: optional string comment
// Valid for Iceberg tables
// Valid for Iceberg and Paimon tables.
3: optional i32 field_id
// Valid for paimon tables.
4: optional bool is_nullable
}
struct TTypeNode {