For Iceberg tables, when joining the data files with the delete files,
both of the current distribution modes (broadcast, partitioned) are
wasteful. Each row read from a delete file contains the path of the
data file it refers to, so if we knew where that data file is
scheduled we could send the delete row directly to that host.
This patch enhances the scheduler to collect information about which
data file is scheduled on which host. Since the scan node for the data
files is on the same host as the Iceberg join node, we can send the
delete file rows directly to that specific host.
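A rough sketch of the routing idea under these assumptions (the names
DataFileToHosts and RouteDeleteRow are hypothetical, not the actual
scheduler API):

  #include <string>
  #include <unordered_map>
  #include <vector>

  // While scheduling the data-file scan ranges, record which host(s)
  // each data file was assigned to.
  using DataFileToHosts =
      std::unordered_map<std::string, std::vector<int>>;

  // For a delete row, look up the host(s) scanning the referenced data
  // file and send the row only there, instead of broadcasting it or
  // hash-partitioning it across all nodes.
  std::vector<int> RouteDeleteRow(const DataFileToHosts& assignment,
                                  const std::string& data_file_path) {
    auto it = assignment.find(data_file_path);
    if (it == assignment.end()) return {};
    // Usually one host; more if the data file is split into multiple
    // scan ranges.
    return it->second;
  }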
Functional testing:
- Re-run full test suite to check for regressions.
Performance testing:
1) Local machine: SELECT COUNT(1) FROM TPCH10_parquet.lineitem
Around 15% of the rows are deleted.
As the table is unpartitioned, this produced a small number of
relatively large delete files.
Query runtime decreased by ~80%
2) Local machine: SELECT COUNT(1) FROM TPCDS10_store_sales
Around 15% of the rows are deleted.
The table is partitioned, which results in more but smaller delete
files.
Query runtime decreased by ~50%
3) Performance testing on a multi-node cluster with data stored on S3:
SELECT COUNT(1) FROM a scaled store_sales table with ~8.6B rows, of
which ~15% are deleted.
Here we had 2 scenarios:
a) Table written by Impala: each delete file row is sent to exactly
one host.
b) Table written by Hive: here apparently the data files are bigger,
so one data file might be split across multiple scan ranges. As a
result, one delete file row might be sent to multiple hosts. The
runtime difference compared to the a) run is the time spent on
sending out more delete file rows.
- Results with a 10-node cluster
a) Runtime decreased by ~80%.
b) Runtime decreased by ~60%.
- Results with a 20-node cluster
a) Runtime decreased by ~65%.
b) Runtime decreased by ~42%.
- Results with a 40-node cluster
a) Runtime decreased by ~55%.
b) Runtime decreased by ~42%.
Change-Id: I212afd7c9e94551a1c50a40ccb0e3c1f7ecdf3d2
Reviewed-on: http://gerrit.cloudera.org:8080/20548
Reviewed-by: Tamas Mate <tmater@apache.org>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>