mirror of
https://github.com/apache/impala.git
synced 2025-12-19 09:58:28 -05:00
This patch mainly implements querying of Paimon data tables
through a JNI-based scanner.
Features implemented:
- Support for column pruning.
Partition pruning and predicate pushdown will be submitted
as the third part of the patch.
We implemented this by treating the Paimon table as a normal
unpartitioned table. When querying a Paimon table:
- PaimonScanNode decides which Paimon splits need to be scanned,
and then transfers the splits to the BE to perform the JNI-based
scan operation.
- We also collect the required columns that need to be scanned,
and pass them to the scanner for column pruning. This is
implemented by passing the field ids of the columns to the BE
instead of column positions, in order to support schema evolution.
- In the original implementation, PaimonJniScanner directly
passed Paimon row objects to the BE and called the corresponding
Paimon row field accessors, Java methods that convert row fields
to Impala row batch tuples. We found this slow due to the
overhead of JVM method calls.
To minimize that overhead, we reworked the implementation:
PaimonJniScanner now converts the Paimon row batches to an
Arrow record batch, which stores its data in the off-heap region
of the Impala JVM, and passes the Arrow off-heap record batch's
memory pointer to the BE.
The BE PaimonJniScanNode reads data directly from the JVM
off-heap region and converts the Arrow record batch to an
Impala row batch.
Benchmarks show the new implementation is roughly 2x faster
than the original one.
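The field-id based column pruning described above can be sketched as follows. This is a minimal, illustrative Java sketch, not Impala's actual API; the class and method names are hypothetical. The point is that field ids stay stable across schema evolution, while column positions do not, so the requested ids are resolved against the current file schema at scan time.

```java
import java.util.*;

// Hypothetical sketch: resolve requested field ids against a file schema.
// Field ids are stable across schema evolution; positions are not.
public class FieldIdPruning {
    // Maps each requested field id to its current column position in the
    // file schema, skipping ids the file no longer contains (dropped columns).
    static List<Integer> resolvePositions(int[] requestedFieldIds,
                                          Map<Integer, Integer> fileSchemaIdToPos) {
        List<Integer> positions = new ArrayList<>();
        for (int fieldId : requestedFieldIds) {
            Integer pos = fileSchemaIdToPos.get(fieldId);
            if (pos != null) positions.add(pos);
        }
        return positions;
    }

    public static void main(String[] args) {
        // Original schema: id=1 -> pos 0, id=2 -> pos 1, id=3 -> pos 2.
        // After evolution (column id=2 dropped, new column id=4 appended):
        Map<Integer, Integer> evolved = new HashMap<>();
        evolved.put(1, 0);
        evolved.put(3, 1);
        evolved.put(4, 2);
        // The query requests field ids 1 and 3; positions shifted, ids hold.
        System.out.println(resolvePositions(new int[]{1, 3}, evolved)); // [0, 1]
    }
}
```

Resolving by position instead would silently read the wrong column once a middle column is dropped, which is exactly why the patch passes field ids to the BE.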
The lifecycle of the Arrow record batch is as follows:
the batch is generated in the FE and passed to the BE.
After the record batch has been imported to the BE successfully,
the BE is in charge of freeing it.
There are two free paths: the normal path and the
exception path. On the normal path, once the Arrow batch
has been fully consumed, the BE calls into the JNI to fetch the
next batch; in this case the previous batch is freed
automatically. The exception path is taken when the query is
cancelled or a memory allocation fails. In these corner cases,
the Arrow batch is freed in the close method if the BE has not
fully consumed it.
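The two free paths can be sketched with an AutoCloseable wrapper. This is an illustrative Java sketch of the ownership rules only (the actual BE code is C++, and all names here are hypothetical): fetching the next batch releases the previous, fully consumed one, while close() releases a partially consumed batch left over after cancellation or an allocation failure.

```java
// Illustrative sketch of the batch ownership rules; names are hypothetical.
public class BatchLifecycle implements AutoCloseable {
    int freedBatches = 0;               // counts releases, for demonstration
    private boolean ownsBatch = false;  // true while a batch is held
    private boolean consumed = false;   // true once every row has been read

    // Normal path: fetching the next batch releases the previous one,
    // which by contract has been fully consumed at this point.
    void fetchNextBatch() {
        if (ownsBatch) free();
        ownsBatch = true;
        consumed = false;
    }

    void markConsumed() { consumed = true; }

    private void free() { ownsBatch = false; freedBatches++; }

    // Exception path: cancellation or allocation failure reaches close()
    // with a partially consumed batch still owned; free it here.
    @Override public void close() {
        if (ownsBatch && !consumed) free();
    }

    public static void main(String[] args) {
        BatchLifecycle lc = new BatchLifecycle();
        lc.fetchNextBatch();          // batch 1 held
        lc.markConsumed();
        lc.fetchNextBatch();          // frees batch 1, holds batch 2
        lc.close();                   // batch 2 unconsumed -> freed on close
        System.out.println(lc.freedBatches); // 2
    }
}
```

Whichever path runs, exactly one owner frees each batch, which is what keeps the JVM off-heap region from leaking.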
Currently supported Impala data types for queries include:
- BOOLEAN
- TINYINT
- SMALLINT
- INTEGER
- BIGINT
- FLOAT
- DOUBLE
- STRING
- DECIMAL(P,S)
- TIMESTAMP
- CHAR(N)
- VARCHAR(N)
- BINARY
- DATE
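For orientation, the supported Impala types above would each need a corresponding Arrow column type in the off-heap record batch. The mapping below is an assumption for illustration only, not taken from the patch; the Arrow type names are the standard columnar-format ones, but the patch may choose differently (e.g. for TIMESTAMP precision or CHAR padding).

```java
import java.util.*;

// Assumed, illustrative mapping from supported Impala types to Arrow
// columnar types; NOT taken from the actual patch.
public class TypeMapping {
    static final Map<String, String> IMPALA_TO_ARROW = new LinkedHashMap<>();
    static {
        IMPALA_TO_ARROW.put("BOOLEAN", "Bool");
        IMPALA_TO_ARROW.put("TINYINT", "Int8");
        IMPALA_TO_ARROW.put("SMALLINT", "Int16");
        IMPALA_TO_ARROW.put("INTEGER", "Int32");
        IMPALA_TO_ARROW.put("BIGINT", "Int64");
        IMPALA_TO_ARROW.put("FLOAT", "Float32");
        IMPALA_TO_ARROW.put("DOUBLE", "Float64");
        IMPALA_TO_ARROW.put("STRING", "Utf8");
        IMPALA_TO_ARROW.put("DECIMAL(P,S)", "Decimal128");
        IMPALA_TO_ARROW.put("TIMESTAMP", "Timestamp");
        IMPALA_TO_ARROW.put("CHAR(N)", "Utf8");
        IMPALA_TO_ARROW.put("VARCHAR(N)", "Utf8");
        IMPALA_TO_ARROW.put("BINARY", "Binary");
        IMPALA_TO_ARROW.put("DATE", "Date32");
    }

    public static void main(String[] args) {
        // One entry per supported type listed in the commit message.
        System.out.println(IMPALA_TO_ARROW.size()); // 14
    }
}
```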
TODO:
- Patches pending submission:
  - Support tpcds/tpch data loading
    for Paimon data tables.
  - Virtual column query support for querying
    Paimon data tables.
  - Query support with time travel.
  - Query support for Paimon meta tables.
- WIP:
  - Snapshot incremental read.
  - Complex type query support.
  - Native Paimon table scanner, instead of
    a JNI-based one.
Testing:
- Create test tables in functional_schema_template.sql.
- Add TestPaimonScannerWithLimit in test_scanners.py
- Add test_paimon_query in test_paimon.py.
- Already passed the tpcds/tpch tests for Paimon tables. The test
  table data is currently generated by Spark; we have to do this
  because data loading for Paimon is not supported by Impala yet,
  and Hive doesn't support generating Paimon tables with dynamic
  partitioning. We plan to submit a separate patch for tpcds/tpch
  data loading and the associated tpcds/tpch query tests.
- JVM off-heap memory leak tests: ran looped tpch tests for
  one day; no obvious off-heap memory increase was observed, and
  off-heap memory usage stayed within 10 MB.
Change-Id: Ie679a89a8cc21d52b583422336b9f747bdf37384
Reviewed-on: http://gerrit.cloudera.org:8080/23613
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Reviewed-by: Zoltan Borok-Nagy <boroknagyz@cloudera.com>
Reviewed-by: Riza Suminto <riza.suminto@cloudera.com>
55 lines
2.0 KiB
CMake
##############################################################################
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
##############################################################################

# - Find Arrow (headers and libarrow.a) with ARROW_ROOT hinting a location
# This module defines
#  ARROW_INCLUDE_DIR, directory containing headers
#  ARROW_STATIC_LIB, path to libarrow.a
#  ARROW_FOUND

set(ARROW_ROOT $ENV{IMPALA_TOOLCHAIN_PACKAGES_HOME}/arrow-$ENV{IMPALA_ARROW_VERSION})

set(ARROW_SEARCH_HEADER_PATHS ${ARROW_ROOT}/include)

set(ARROW_SEARCH_LIB_PATH ${ARROW_ROOT}/lib)

find_path(ARROW_INCLUDE_DIR NAMES arrow/api.h arrow/c/bridge.h PATHS
  ${ARROW_SEARCH_HEADER_PATHS}
  # make sure we don't accidentally pick up a different version
  NO_DEFAULT_PATH)

find_library(ARROW_STATIC_LIB NAMES libarrow.a libarrow_bundled_dependencies.a PATHS
  ${ARROW_SEARCH_LIB_PATH})

if(NOT ARROW_STATIC_LIB)
  message(FATAL_ERROR "Arrow includes and libraries NOT found. "
    "Looked for headers in ${ARROW_SEARCH_HEADER_PATHS}, "
    "and for libs in ${ARROW_SEARCH_LIB_PATH}")
  set(ARROW_FOUND FALSE)
else()
  set(ARROW_FOUND TRUE)
endif()

mark_as_advanced(
  ARROW_INCLUDE_DIR
  ARROW_STATIC_LIB
  ARROW_FOUND
)