mirror of
https://github.com/apache/impala.git
synced 2026-01-05 21:00:54 -05:00
IMPALA-9741: Support querying Iceberg table by impala
This patch adds support for querying Iceberg tables through Impala.
An external Iceberg table can be created with the following SQL:
CREATE EXTERNAL TABLE default.iceberg_test (
level string,
event_time timestamp,
message string
)
STORED AS ICEBERG
LOCATION 'hdfs://xxx'
TBLPROPERTIES ('iceberg_file_format'='parquet');
Or with only the table name and location, like this:
CREATE EXTERNAL TABLE default.iceberg_test
STORED AS ICEBERG
LOCATION 'hdfs://xxx'
TBLPROPERTIES ('iceberg_file_format'='parquet');
'iceberg_file_format' is the data file format used by the Iceberg
table. Currently only PARQUET is supported; other formats will be
supported in the future. If this property is not specified, the file
format defaults to PARQUET.
This is implemented by treating the Iceberg table as a regular
unpartitioned HDFS table. When querying an Iceberg table, partition
column predicates are pushed down to Iceberg to determine which data
files need to be scanned, and that file list is then passed to the BE
to perform the actual scan.
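The file-pruning step described above can be sketched as follows. This is a minimal Python model for illustration only, not Impala's actual implementation; the file paths, partition values, and helper names are all made up:

```python
# Simplified model of partition-predicate pruning: given each data file's
# partition values, keep only the files whose partition satisfies the
# pushed-down predicate, so the backend scans fewer files.

def prune_files(files, predicate):
    """files: list of (path, partition_dict); predicate: partition_dict -> bool."""
    return [path for path, part in files if predicate(part)]

# Hypothetical file layout for a table partitioned by event_day.
files = [
    ("/warehouse/t/data/event_day=2020-01-01/f1.parquet", {"event_day": "2020-01-01"}),
    ("/warehouse/t/data/event_day=2020-01-02/f2.parquet", {"event_day": "2020-01-02"}),
]

# Pushed-down predicate: event_day = '2020-01-02' -> only f2 is scanned.
selected = prune_files(files, lambda p: p["event_day"] == "2020-01-02")
```

In the real code path, Iceberg's metadata (manifests) supplies the per-file partition values, so pruning happens without listing or opening the data files themselves.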
Testing:
- Unit test for Iceberg in FileMetadataLoaderTest
- Create table tests in functional_schema_template.sql
- Iceberg table query test in test_scanners.py
Change-Id: I856cfee4f3397d1a89cf17650e8d4fbfe1f2b006
Reviewed-on: http://gerrit.cloudera.org:8080/16143
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
committed by Impala Public Jenkins
parent fe6e625747
commit fb6d96e001
@@ -2933,3 +2933,29 @@ case when id % 2 = 0 then cast(timestamp_col as date)
 else cast(cast(timestamp_col as date) + interval 5 days as date) end date_col
 FROM {db_name}{db_suffix}.alltypes where id < 500;
 ====
+---- DATASET
+functional
+---- BASE_TABLE_NAME
+iceberg_partitioned
+---- CREATE
+CREATE EXTERNAL TABLE IF NOT EXISTS {db_name}{db_suffix}.{table_name}
+STORED AS ICEBERG
+LOCATION '/test-warehouse/iceberg_test/iceberg_partitioned'
+TBLPROPERTIES('iceberg_file_format'='parquet');
+---- DEPENDENT_LOAD
+`hadoop fs -mkdir -p /test-warehouse/iceberg_test && \
+hadoop fs -put -f ${IMPALA_HOME}/testdata/data/iceberg_test/iceberg_partitioned /test-warehouse/iceberg_test/
+====
+---- DATASET
+functional
+---- BASE_TABLE_NAME
+iceberg_non_partitioned
+---- CREATE
+CREATE EXTERNAL TABLE IF NOT EXISTS {db_name}{db_suffix}.{table_name}
+STORED AS ICEBERG
+LOCATION '/test-warehouse/iceberg_test/iceberg_non_partitioned'
+TBLPROPERTIES('iceberg_file_format'='parquet');
+---- DEPENDENT_LOAD
+`hadoop fs -mkdir -p /test-warehouse/iceberg_test && \
+hadoop fs -put -f ${IMPALA_HOME}/testdata/data/iceberg_test/iceberg_non_partitioned /test-warehouse/iceberg_test/
+====