Riza Suminto c5072807df IMPALA-12337: Implement delete orphan files for Iceberg table
This patch implements the delete orphan files query for Iceberg tables.

The following statement becomes available for Iceberg tables:
 - ALTER TABLE <tbl> EXECUTE remove_orphan_files(<timestamp>)

The bulk of the implementation copies Hive's implementation of the
org.apache.iceberg.actions.DeleteOrphanFiles interface (HIVE-27906,
6b2e21a93ef3c1776b689a7953fc59dbf52e4be4), which this patch renames to
ImpalaIcebergDeleteOrphanFiles.java. Upon execute(), an
ImpalaIcebergDeleteOrphanFiles instance gathers the URIs of all valid
data files and Iceberg metadata files using the Iceberg API. These valid
URIs are then compared against a recursive file listing obtained through
the Hadoop FileSystem API under the table's 'data' and 'metadata'
directories, respectively. Any unmatched URI from the FileSystem listing
whose modification time is older than the 'olderThanTimestamp' parameter
is then removed via the Iceberg FileIO API of the given table. Note that
this is a destructive query that wipes out any file within the Iceberg
table's 'data' and 'metadata' directories that is not addressable by any
valid snapshot.
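
For illustration, a minimal sketch of the detect-and-delete logic could
look like the following. This is not the actual
ImpalaIcebergDeleteOrphanFiles code; the class, method, and variable
names here are made up, and several steps are simplified.

  // Sketch: collect valid file URIs via the Iceberg API, list the table
  // location recursively via the Hadoop FileSystem API, and delete any
  // unreferenced file older than the threshold.
  import java.io.IOException;
  import java.util.HashSet;
  import java.util.Set;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.LocatedFileStatus;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.fs.RemoteIterator;
  import org.apache.iceberg.FileScanTask;
  import org.apache.iceberg.Table;
  import org.apache.iceberg.io.CloseableIterable;

  class OrphanFileSketch {
    static void removeOrphans(Table table, long olderThanTimestampMs,
        Configuration conf) throws IOException {
      // 1. Gather URIs of data files reachable from the current snapshot.
      //    The real code walks all valid snapshots and also gathers
      //    metadata, manifest, and manifest list files; omitted for brevity.
      Set<String> validFiles = new HashSet<>();
      try (CloseableIterable<FileScanTask> tasks =
          table.newScan().planFiles()) {
        for (FileScanTask task : tasks) {
          validFiles.add(task.file().path().toString());
        }
      }

      // 2. Recursively list the table's 'data' directory.
      Path dataDir = new Path(table.location(), "data");
      FileSystem fs = dataDir.getFileSystem(conf);
      RemoteIterator<LocatedFileStatus> it = fs.listFiles(dataDir, true);
      while (it.hasNext()) {
        LocatedFileStatus status = it.next();
        String uri = status.getPath().toString();
        // 3. Delete files that are unreferenced and old enough. The real
        //    code normalizes URI schemes/authorities before comparing.
        if (!validFiles.contains(uri)
            && status.getModificationTime() < olderThanTimestampMs) {
          table.io().deleteFile(uri);
        }
      }
    }
  }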

Execution happens in CatalogD via
IcebergCatalogOpExecutor.alterTableExecuteRemoveOrphanFiles(). CatalogD
supplies CatalogOpExecutor.icebergExecutorService_ as the executor
service used to run the Iceberg planFiles() API and the FileIO API calls
for deletion.
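
As a rough sketch of that wiring, assuming an illustrative fixed-size
pool in place of the actual icebergExecutorService_ field, the two
phases could share one executor like this:

  // Sketch: thread a shared ExecutorService through scan planning and
  // orphan file deletion. Names are illustrative, not the CatalogD code.
  import java.util.ArrayList;
  import java.util.List;
  import java.util.concurrent.ExecutorService;
  import java.util.concurrent.Executors;
  import java.util.concurrent.Future;

  import org.apache.iceberg.FileScanTask;
  import org.apache.iceberg.Table;
  import org.apache.iceberg.io.CloseableIterable;

  class ExecutorWiringSketch {
    // Stands in for the executor service that CatalogD supplies.
    private static final ExecutorService POOL =
        Executors.newFixedThreadPool(8);

    static void run(Table table, List<String> orphanUris) throws Exception {
      // Scan planning fans manifest reads out across the pool.
      try (CloseableIterable<FileScanTask> tasks =
          table.newScan().planWith(POOL).planFiles()) {
        for (FileScanTask task : tasks) {
          // ... collect valid file URIs as in the previous sketch ...
        }
      }

      // Each FileIO deletion of a detected orphan is submitted to the pool.
      List<Future<?>> deletions = new ArrayList<>();
      for (String uri : orphanUris) {
        deletions.add(POOL.submit(() -> table.io().deleteFile(uri)));
      }
      for (Future<?> f : deletions) {
        f.get();  // Surface any deletion failure.
      }
    }
  }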

Also fixed the toSql() implementation for all ALTER TABLE EXECUTE
statements.

Testing:
- Add FE and EE tests.

Change-Id: I5979cdf15048d5a2c4784918533f65f32e888de0
Reviewed-on: http://gerrit.cloudera.org:8080/23042
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Reviewed-by: Zoltan Borok-Nagy <boroknagyz@cloudera.com>
2025-06-30 15:05:12 +00:00

Welcome to Impala

Lightning-fast, distributed SQL queries for petabytes of data stored in open data and table formats.

Impala is a modern, massively-distributed, massively-parallel, C++ query engine that lets you analyze, transform and combine data from a variety of data sources.

More about Impala

The fastest way to try out Impala is a quickstart Docker container. You can run queries and process data sets in Impala on a single machine without installing dependencies. It can automatically load test data sets into Apache Kudu and Apache Parquet formats, and you can start playing around with Apache Impala SQL within minutes.

To learn more about Impala as a user or administrator, or to try Impala, please visit the Impala homepage. Detailed documentation for administrators and users is available at Apache Impala documentation.

If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.

Supported Platforms

Impala only supports Linux at the moment. Impala supports x86_64 and has experimental support for arm64 (as of Impala 4.0). Impala Requirements contains more detailed information on the minimum CPU requirements.

Supported OS Distributions

Impala runs on Linux systems only. The supported distros are

  • Ubuntu 16.04/18.04
  • CentOS/RHEL 7/8

Other systems, e.g. SLES12, may also be supported but are not tested by the community.

Export Control Notice

This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.

Build Instructions

See Impala's developer documentation to get started.

The detailed build notes contain more information on the project layout and the build.
