It's not easy to find the log files of a custom-cluster test. All
custom-cluster tests use the same log dir, and the test output only
shows the symlinks of the log files, e.g. "Starting State Store logging
to .../logs/custom_cluster_tests/statestored.INFO".
This patch prints the actual log file names after the cluster launches.
An example output:
15:17:19 MainThread: Starting State Store logging to /tmp/statestored.INFO
15:17:19 MainThread: Starting Catalog Service logging to /tmp/catalogd.INFO
15:17:19 MainThread: Starting Impala Daemon logging to /tmp/impalad.INFO
15:17:19 MainThread: Starting Impala Daemon logging to /tmp/impalad_node1.INFO
15:17:19 MainThread: Starting Impala Daemon logging to /tmp/impalad_node2.INFO
...
15:17:24 MainThread: Total wait: 2.54s
15:17:24 MainThread: Actual log file names:
15:17:24 MainThread: statestored.INFO -> statestored.quanlong-Precision-3680.quanlong.log.INFO.20251216-151719.1094348
15:17:24 MainThread: catalogd.INFO -> catalogd.quanlong-Precision-3680.quanlong.log.INFO.20251216-151719.1094368
15:17:24 MainThread: impalad.INFO -> impalad.quanlong-Precision-3680.quanlong.log.INFO.20251216-151719.1094466
15:17:24 MainThread: impalad_node1.INFO -> impalad.quanlong-Precision-3680.quanlong.log.INFO.20251216-151719.1094468
15:17:24 MainThread: impalad_node2.INFO -> impalad.quanlong-Precision-3680.quanlong.log.INFO.20251216-151719.1094470
15:17:24 MainThread: Impala Cluster Running with 3 nodes (3 coordinators, 3 executors).
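The printing itself is simple; a minimal sketch of the idea (the helper
name and log-dir handling are assumptions, the actual patch may differ):

  import glob
  import os

  def print_actual_log_file_names(log_dir):
    # Resolve each *.INFO symlink in the cluster log dir to its target so
    # the real, timestamped file names show up in the test output.
    print("Actual log file names:")
    for link in sorted(glob.glob(os.path.join(log_dir, "*.INFO"))):
      target = os.path.realpath(link)
      print("  %s -> %s" % (os.path.basename(link), os.path.basename(target)))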
Tests
- Ran the script locally.
- Ran a failed custom-cluster test and verified the actual file names
are printed in the output.
Change-Id: Id76c0a8bdfb221ab24ee315e2e273abca4257398
Reviewed-on: http://gerrit.cloudera.org:8080/23781
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Quanlong Huang <huangquanlong@gmail.com>
TestMetadataReplicas.test_catalog_restart creates a db and an underlying
table in Hive, then expects an INVALIDATE command to bring up the table
in impalads. The test runs in the legacy catalogd mode, which has the
bug of IMPALA-12103. So if the INVALIDATE command runs in a state where
catalogd has the db in its cache but the db doesn't show up in impalad's
cache yet, catalogd will just return the table and impalad will skip it
because the db does not exist. Then the test assertion fails.
The db is added to the catalogd cache by processing the CREATE_DATABASE
HMS event, which is asynchronous with executing the INVALIDATE command.
If the command is triggered before that, the test passes. If the command
is triggered after that, the test fails.
When the test was written, we didn't have HMS event processing yet. The
db was expected to be added to catalogd by the INVALIDATE command as
well. To deflake the test, this patch disables HMS event processing in
this test, so catalogd's cache is always consistent with impalad's when
executing the INVALIDATE command.
This patch also changes the log level of a message in ImpaladCatalog to
WARN when a table is not added because its db is missing.
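A minimal sketch of how a custom-cluster test can turn event processing
off (the decorator is the standard custom-cluster mechanism; treating
--hms_event_polling_interval_s=0 as "disabled" is an assumption here):

  from tests.common.custom_cluster_test_suite import CustomClusterTestSuite

  class TestMetadataReplicas(CustomClusterTestSuite):
    @CustomClusterTestSuite.with_args(
        catalogd_args="--hms_event_polling_interval_s=0")
    def test_catalog_restart(self):
      # With event processing off, the db becomes visible to catalogd only
      # via the INVALIDATE command, so its cache stays consistent with
      # impalad's when the command runs.
      pass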
Tests:
- Ran the test locally 10 times.
Change-Id: I2d17404cc8093eacf9b51df3d22caf5cbb6a61a9
Reviewed-on: http://gerrit.cloudera.org:8080/23798
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
The Beeswax protocol has been due for deprecation for a long time. This
patch removes BEESWAX from create_client_protocol_dimension(), which
limits the protocol dimension to [HS2, HS2_HTTP] by default. It is still
possible to include BEESWAX again for testing if the
DEFAULT_TEST_PROTOCOL env var is set to 'beeswax', for example:
DEFAULT_TEST_PROTOCOL=beeswax impala-py.test custom_cluster/test_ipv6.py
This patch does not disable the Beeswax server yet. Tests that
specifically target the Beeswax protocol, such as test_beeswax.py, will
continue to work. ImpalaTestSuite.beeswax_client also remains unchanged.
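A rough sketch of the new default behavior (the helper below is
hypothetical; the real logic lives in create_client_protocol_dimension()):

  import os

  def default_client_protocols():
    # BEESWAX is no longer in the default set, but can be opted back in
    # through the DEFAULT_TEST_PROTOCOL environment variable.
    protocols = ['hs2', 'hs2-http']
    if os.environ.get('DEFAULT_TEST_PROTOCOL', '').lower() == 'beeswax':
      protocols.append('beeswax')
    return protocols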
Testing:
Ran the following command and confirmed that the Beeswax protocol is
skipped:
impala-py.test --collect-only --exploration=exhaustive \
custom_cluster/test_ipv6.py
Change-Id: I3cff79f59305b5d44944804ed1f1b92838575495
Reviewed-on: http://gerrit.cloudera.org:8080/23076
Reviewed-by: Jason Fehr <jfehr@cloudera.com>
Tested-by: Riza Suminto <riza.suminto@cloudera.com>
When catalogd runs with --start_hms_server=true, it serves all the HMS
endpoints so that any HMS-compatible client can use catalogd as a
metadata cache. For all DDL/DML requests, catalogd just delegates them
to the HMS APIs without reloading related metadata in the cache. For
read requests like get_table_req, catalogd serves them from its cache,
which could be stale.
There is a flag, invalidate_hms_cache_on_ddls, that decides whether to
explicitly invalidate a table when catalogd delegates a DDL/DML on that
table to HMS. test_cache_valid_on_nontransactional_table_ddls is a test
verifying that when invalidate_hms_cache_on_ddls=false, the cache is not
updated and therefore holds stale metadata.
However, there are HMS events generated from invoking the HMS APIs. Even
when invalidate_hms_cache_on_ddls=false, catalogd can still update its
cache when processing the corresponding HMS events. The test fails when
its check is done after catalogd applies the event (so the cache is
up-to-date). If the check is done before that, the test passes.
This patch deflakes the test by explicitly disabling event processing.
It also updates the description of invalidate_hms_cache_on_ddls to
mention the impact of event processing.
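Sketch of the resulting cluster args for this test (flag names are from
this commit; the exact values and wiring are assumptions):

  CATALOGD_ARGS = ("--invalidate_hms_cache_on_ddls=false "
                   "--hms_event_polling_interval_s=0")
  # Passed via CustomClusterTestSuite.with_args(catalogd_args=CATALOGD_ARGS)
  # for test_cache_valid_on_nontransactional_table_ddls, so nothing else
  # can refresh the cache and the stale-metadata check is deterministic.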
Tests:
- Ran the test locally 100 times.
Change-Id: Ib1ffc11a793899a0dbdb009bf2ac311117f2318e
Reviewed-on: http://gerrit.cloudera.org:8080/23792
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
When hierarchical event processing is enabled, there is no info about
the current event batch shown on the /events page. Note that event
batches are dispatched and then processed in parallel; the current event
batch info actually shows the batch that is currently being dispatched,
which doesn't take long.
This patch skips checking the current event batch info in
test_event_processor_status when hierarchical event processing is
enabled.
Tests
- Verified that test runs fine locally.
Change-Id: I2df24d2fd3b028a84d557e70141e68aa234908d4
Reviewed-on: http://gerrit.cloudera.org:8080/23790
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
This reverts commit 52b87fcefd.
The original commit caused an issue when Impala is deployed together
with Apache Atlas. The coordinator failed to start with the error
message:
java.lang.NoClassDefFoundError: org/apache/logging/log4j/core/Layout
Resolved a minor conflict in impala-config.sh due to IMPALA-14478 being
applied after IMPALA-14454.
Change-Id: I77127db8d833c675c18c30eb3d6542ca906cd2a9
Reviewed-on: http://gerrit.cloudera.org:8080/23788
Reviewed-by: Michael Smith <michael.smith@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Configure separate compile and link pools for ninja. Configures link
parallelism based on expected memory use, which can be reduced by
setting IMPALA_MINIMAL_DEBUG_INFO=true or IMPALA_SPLIT_DEBUG_INFO=true.
Adds IMPALA_MAKE_CMD to simplify using the ninja build tool for all make
operations in scripts. Installs ninja on Ubuntu. Adds a '-make' option
to buildall.sh to force using 'make'.
Adds MOLD_JOBS=1 to avoid overloading the system when trying 'mold' and
linking test binaries. However 'mold' is not selected as the default
due to test failures around SASL/GSSAPI (see IMPALA-14527).
Switches bin/jenkins/all-tests.sh to use ninja and removes the guard in
bootstrap_development.sh limiting IMPALA_BUILD_THREADS as it's no longer
needed with ninja.
SKIP_BE_TEST_PATTERN in run-backend-tests is effectively unused (it only
applies with TARGET_FILESYSTEM=local), so I don't attempt to make it
work with ninja.
Tested with local 'IMPALA_SPLIT_DEBUG_INFO=true buildall.sh -skiptests'
with default (make) and IMPALA_MAKE_CMD=ninja.
Change-Id: I0952dc19ace5c9c42bed0d2ffb61499656c0a2db
Reviewed-on: http://gerrit.cloudera.org:8080/23572
Reviewed-by: Joe McDonnell <joemcdonnell@cloudera.com>
Reviewed-by: Pranav Lodha <pranav.lodha@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
TestPostgresJdbcTables.test_postgres_jdbc_tables uses hardcoded paths
for JDBC driver URLs, e.g.
"/test-warehouse/data-sources/jdbc-drivers/postgresql-jdbc.jar".
This doesn't work correctly when running on Ozone, where the path needs
the prefix "ofs://localhost:9862/impala".
This patch fixes the issue by constructing the driver URL with
FILESYSTEM_PREFIX, which is "ofs://localhost:9862/impala" on Ozone.
See bin/impala-config.sh for how it's set for different filesystems.
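Roughly, the fix amounts to prefixing the hardcoded path (sketch only;
the actual test code may assemble the URL differently):

  import os

  # Empty on HDFS, "ofs://localhost:9862/impala" on Ozone
  # (see bin/impala-config.sh).
  FILESYSTEM_PREFIX = os.environ.get('FILESYSTEM_PREFIX', '')
  DRIVER_URL = (FILESYSTEM_PREFIX +
                '/test-warehouse/data-sources/jdbc-drivers/postgresql-jdbc.jar')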
Tests:
- Ran the test on Ozone.
Change-Id: Ie0c4368b3262d4dcb9e1c05475506411be2e2ef5
Reviewed-on: http://gerrit.cloudera.org:8080/23787
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Reviewed-by: Riza Suminto <riza.suminto@cloudera.com>
Previously, running ALTER TABLE <table> CONVERT TO ICEBERG on an Iceberg
table produced an error. This patch fixes that, so the statement does
nothing when called on an Iceberg table and returns a 'Table has already
been migrated.' message.
This is achieved by adding a new flag to StatementBase that signals when
a statement ends up as a no-op. If it is set, the new TStmtType::NO_OP
is used as the TExecRequest's type, and noop_result can be used to set
the result from the frontend side.
Tests:
* extended fe and e2e tests
Change-Id: I41ecbfd350d38e4e3fd7b813a4fc27211d828f73
Reviewed-on: http://gerrit.cloudera.org:8080/23699
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Reviewed-by: Peter Rozsa <prozsa@cloudera.com>
Previously, `BaseScalarColumnReader::levels_readahead_` was not reset
when the reader did not do page filtering. If a query selected the last
row containing a collection value in a row group, `levels_readahead_`
would be set and would not be reset when advancing to the next row
group without page filtering. As a result, trying to skip collection
values at the start of the next row group would cause a check failure.
This patch fixes the failure by resetting `levels_readahead_` in
`BaseScalarColumnReader::Reset()`, which is always called when advancing
to the next row group.
`levels_readahead_` is also moved out of the "Members used for page
filtering" section as the variable is also used in late materialization.
Testing:
- Added an E2E test for the fix.
Change-Id: Idac138ffe4e1a9260f9080a97a1090b467781d00
Reviewed-on: http://gerrit.cloudera.org:8080/23779
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
When hierarchical event processing is enabled, there is no info about
the current event batch shown on the /events page. Note that event
batches are dispatched and then processed in parallel; the current event
batch info actually shows the batch that is currently being dispatched,
which doesn't take long.
This patch skips checking the current event batch info when hierarchical
event processing is enabled. A new method,
is_hierarchical_event_processing_enabled(), is added to
ImpalaTestClusterProperties for the check. It also fixes
is_event_polling_enabled() to accept float values of
hms_event_polling_interval_s and adds the missing raise statement when
it fails to parse the flags.
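A minimal sketch of the float-tolerant check (the flag lookup is a
placeholder for however ImpalaTestClusterProperties reads catalogd
flags):

  def is_event_polling_enabled(get_flag_value):
    value = get_flag_value('hms_event_polling_interval_s')
    try:
      interval = float(value)  # accepts "2" as well as "0.5"
    except (TypeError, ValueError):
      raise ValueError(
          "Cannot parse hms_event_polling_interval_s: %r" % value)
    return interval > 0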
Tests
- Ran the test locally.
Change-Id: Iffb84304a4096885492002b781199051aaa4fbb0
Reviewed-on: http://gerrit.cloudera.org:8080/23766
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
This change allows modifying the format version table property
in ALTER TABLE CONVERT TO statements. It adds verification for
the property value too: only 1 or 2 is supported as of now.
Change-Id: Iaed207feb83a277a1c2f81dcf58c42f0721c0865
Reviewed-on: http://gerrit.cloudera.org:8080/23721
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Peter Rozsa <prozsa@cloudera.com>
Adds documentation for the catalog_partial_fetch_max_files configuration flag,
which limits the number of file descriptors returned in a catalog fetch.
Change-Id: I30b7a29ae78d97d15dd7f946d83f7535181f214e
Reviewed-on: http://gerrit.cloudera.org:8080/23676
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Reviewed-by: Quanlong Huang <huangquanlong@gmail.com>
This patch bumps the Arrow version to 15.0.0 and uses the latest
toolchain to fix the Arrow JNI loading issue on Linux aarch64
environments.
Background:
The JNI loading issue on aarch64 environments was fixed on the native
toolchain side in IMPALA-14609. We also need to bump the Arrow version
to 15.0.0 and use that toolchain to fix the issue.
Testing:
Built the new toolchain and passed the Paimon tests in an aarch64
environment.
Change-Id: I7b8dd6ab43cf05b4339880ecec0d1f48e44ef294
Reviewed-on: http://gerrit.cloudera.org:8080/23756
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Reviewed-by: Zoltan Borok-Nagy <boroknagyz@cloudera.com>
This patch extends the SHOW PARTITIONS statement to allow an optional
WHERE clause that filters partitions based on partition column values.
The implementation adds support for various comparison operators,
IN lists, BETWEEN clauses, IS NULL, and logical AND/OR expressions
involving partition columns.
Non-partition columns, subqueries, and analytic expressions in the
WHERE clause are not allowed and will result in an analysis error.
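For illustration, statements of the following shape are expected to
analyze, while the commented one is rejected (the client parameter
stands for any Impala test client; table and values are just examples):

  def show_filtered_partitions(client):
    # functional.alltypes is partitioned by year and month.
    return client.execute("SHOW PARTITIONS functional.alltypes "
                          "WHERE year = 2009 AND month IN (1, 2, 3)")
    # Analysis error: int_col is not a partition column.
    # client.execute("SHOW PARTITIONS functional.alltypes WHERE int_col = 1")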
New analyzer tests have been added to AnalyzeDDLTest#TestShowPartitions
to verify correct parsing, semantic validation, and error handling for
supported and unsupported cases.
Testing:
- Added new unit tests in AnalyzeDDLTest for valid and invalid WHERE
clause cases.
- Verified functional tests covering partition filtering behavior.
Change-Id: I2e2a14aabcea3fb17083d4ad6f87b7861113f89e
Reviewed-on: http://gerrit.cloudera.org:8080/23566
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
FEATURE: Implement global 'disable_hms_sync_by_default' flag for event
processing. This change introduces a new catalogd startup flag,
`disable_hms_sync_by_default`, to simplify skipping/processing events.
Problem: Disabling event processing globally requires the tedious
process of setting the 'impala.disableHmsSync' property on every
database and table, especially if only a few specific tables require
syncing of events.
Solution: The new flag provides a global default for the
'impala.disableHmsSync' property.
Behavior:
- If `disable_hms_sync_by_default` is true (the intended default-off
state), event processing is skipped for all tables/databases unless
the property "impala.disableHmsSync"="false" is explicitly set.
- This allows users to easily keep event processing off by default
and opt in specific databases or tables to start syncing.
- The check order is: table property > db property > global default
(see the sketch below).
- HMS polling remains independent and unaffected by this flag.
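The precedence can be sketched as follows (a simplification; property
values are the usual "true"/"false" strings from HMS parameters):

  def hms_sync_disabled(tbl_params, db_params, disable_hms_sync_by_default):
    # Check order: table property > database property > global default.
    for params in (tbl_params, db_params):
      value = params.get('impala.disableHmsSync')
      if value is not None:
        return value.lower() == 'true'
    return disable_hms_sync_by_default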
Change-Id: I4ee617aed48575502d9cf5cf2cbea6ec897d6839
Reviewed-on: http://gerrit.cloudera.org:8080/23487
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
The first IMPALA-14606 commit missed setting up Python 3 on fresh RHEL8
machines. This was not caught before because I tested using downstream
Jenkins, which reuses RHEL8 machines that were previously set up with
Python 2.
This patch fixes the issue by skipping the 'pip install argparse' step
that broke the script and running setup_python3 instead on RHEL8
machines.
Testing:
- Ran full bootstrap_system.sh and buildall.sh on a fresh RHEL8 machine.
Change-Id: I6df0a534175404fe96d32eeb1e7bf0aa9ca204cd
Reviewed-on: http://gerrit.cloudera.org:8080/23772
Reviewed-by: Michael Smith <michael.smith@cloudera.com>
Reviewed-by: Laszlo Gaal <laszlo.gaal@cloudera.com>
Tested-by: Riza Suminto <riza.suminto@cloudera.com>
This commit contains the simpler parts from
https://gerrit.cloudera.org/#/c/20602
This mainly means accessors for the header of the binary
format and bounding box check (st_envIntersects).
New tests for not yet covered functions / overloads are also added.
For details of the binary format see be/src/exprs/geo/shape-format.h
Differences from the PR above:
Only a subset of the functions is added. The criteria were:
1. the native function must be fully compatible with the Java version*
2. it must not rely on (de)serializing the full geometry
3. the function must be tested
1 implies 2 because (de)serialization is not yet implemented in
the original patch for >2d geometries, which would break compatibility
with the Java version for XYZ/XYM/XYZM geometries.
*: there are 2 known differences:
1. NULL handling: the Java functions return an error instead of NULL
when getting a NULL parameter
2. st_envIntersects() doesn't check if the SRID matches - the Java
library looks inconsistent about this
Because the native functions are fairly safe replacements for the Java
ones, they are always used when geospatial_library=HIVE_ESRI.
Change-Id: I0ff950a25320549290a83a3b1c31ce828dd68e3c
Reviewed-on: http://gerrit.cloudera.org:8080/23700
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Impala allows various Java versions to be selected for its build and
runtime environment when bin/bootstrap_system.sh is used to set up the
environment. Unfortunately this setup failed to affect the current Java
JRE and compiler tools on Red Hat Linux and compatibles (e.g. Rocky
Linux), because bootstrap_system.sh failed to set up the requested
version in the "alternatives" subsystem. The same failure was not
observed on Ubuntu, where `update_java_alternatives` was correctly run
for the same purpose.
This patch adds calls to `alternatives` to set the JRE and JDK
environments to the requested version. This benefits automated test runs
in Impala's pre- and post-commit environments as well as individual
workstation setups.
Change-Id: I8972fb35b232830c6d8cf1125a7a8223547bd206
Reviewed-on: http://gerrit.cloudera.org:8080/23741
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
This patch mainly implements querying of Paimon data tables through a
JNI-based scanner.
Features implemented:
- Column pruning.
Partition pruning and predicate push-down will be submitted
as the third part of the patch.
We implemented this by treating the Paimon table as a normal
unpartitioned table. When querying a Paimon table:
- PaimonScanNode decides which Paimon splits need to be scanned,
and then transfers the splits to the BE to do the JNI-based scan
operation.
- We also collect the required columns that need to be scanned,
and pass them to the scanner for column pruning. This is
implemented by passing the field ids of the columns to the BE,
instead of column positions, to support schema evolution.
- In the original implementation, PaimonJniScanner would directly
pass the Paimon row object to the BE and call the corresponding Paimon
row field accessors, which are Java methods, to convert row fields to
Impala row batch tuples. We found this slow due to the overhead of
JVM method calls.
To minimize the overhead, we refashioned the implementation: the
PaimonJniScanner converts the Paimon row batches to an Arrow
record batch, which stores data in the off-heap region of the
Impala JVM. PaimonJniScanner then passes the Arrow off-heap
record batch memory pointer to the BE.
The BE PaimonJniScanNode directly reads data from the JVM off-heap
region and converts the Arrow record batch to an Impala row batch.
Benchmarks show the latter implementation is 2.x faster than the
original implementation.
The lifecycle of the Arrow record batch is mainly like this:
the Arrow record batch is generated in the FE and passed to the BE.
After the record batch is imported to the BE successfully,
the BE is in charge of freeing it.
There are two free paths: the normal path and the
exception path. On the normal path, when the Arrow batch
is fully consumed by the BE, the BE calls JNI to fetch the next Arrow
batch, and the previous batch is freed automatically.
The exceptional path happens when the query is cancelled or memory
fails to allocate. For these corner cases, the Arrow batch is freed in
the close method if it is not fully consumed by the BE.
Currently supported Impala data types for queries include:
- BOOLEAN
- TINYINT
- SMALLINT
- INTEGER
- BIGINT
- FLOAT
- DOUBLE
- STRING
- DECIMAL(P,S)
- TIMESTAMP
- CHAR(N)
- VARCHAR(N)
- BINARY
- DATE
TODO:
- Patches pending submission:
- Support tpcds/tpch data-loading
for paimon data table.
- Virtual Column query support for querying
paimon data table.
- Query support with time travel.
- Query support for paimon meta tables.
- WIP:
- Snapshot incremental read.
- Complex type query support.
- Native paimon table scanner, instead of
jni based.
Testing:
- Created test tables in functional_schema_template.sql.
- Added TestPaimonScannerWithLimit in test_scanners.py.
- Added test_paimon_query in test_paimon.py.
- Already passed the tpcds/tpch tests for Paimon tables. The testing
table data is currently generated by Spark, which is not supported by
Impala now; we have to do this since Hive doesn't support generating
Paimon tables for dynamic-partitioned tables. We plan to submit a
separate patch for tpcds/tpch data loading and the associated
tpcds/tpch query tests.
- JVM off-heap memory leak tests: ran looped tpch tests for 1 day; no
obvious off-heap memory increase was observed, and off-heap memory
usage stayed within 10M.
Change-Id: Ie679a89a8cc21d52b583422336b9f747bdf37384
Reviewed-on: http://gerrit.cloudera.org:8080/23613
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Reviewed-by: Zoltan Borok-Nagy <boroknagyz@cloudera.com>
Reviewed-by: Riza Suminto <riza.suminto@cloudera.com>
Tests in test_otel_trace.py that rely on queries being queued assume
a first long-running query will be started before a second query.
These tests are flaky most likely because the first long-running
query is executed asynchronously and is immediately followed by a
second query. During slower builds (such as ASAN), the first query
may not be in the running state before the second query is started.
This patch adds a check on the first query to ensure it is running
before starting the second query.
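A sketch of the added synchronization (the helper names follow common
Impala test-suite conventions but are assumptions here):

  # Start the long-running query asynchronously, then block until it is
  # actually RUNNING before submitting the query that should be queued.
  handle = self.execute_query_async(long_running_query)
  self.wait_for_state(handle, self.client.QUERY_STATES['RUNNING'], timeout=60)
  self.execute_query(second_query)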
Change-Id: I9e77ec70b4668f0daed2ab9411f8f6c52ddccb2a
Reviewed-on: http://gerrit.cloudera.org:8080/23743
Reviewed-by: Riza Suminto <riza.suminto@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
IMPALA-12709 added support for hierarchical metastore event processing.
This commit enables hierarchical event processing by default.
hms_event_polling_interval_s can now be set to a decimal value (e.g.
0.5) to support millisecond-precision intervals. Along with that, other
configs can be fine-tuned, such as:
num_db_event_executors: To set the number of database level event
executors.
num_table_event_executors_per_db_event_executor: To set the number of
table level event executors within a database event executor.
min_event_processor_idle_ms: To set the minimum time to retain idle db
processors and table processors.
max_outstanding_events_on_executors: To set the limit of maximum
outstanding events to process on event executors.
Testing:
- All the testing required to enable this flag is done in IMPALA-12709
and IMPALA-13801.
Change-Id: Ie9a28f863ef17456817e0a335215450e514b1f5b
Reviewed-on: http://gerrit.cloudera.org:8080/23687
Reviewed-by: <k.venureddy2103@gmail.com>
Reviewed-by: Quanlong Huang <huangquanlong@gmail.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
This patch fixes a heap-use-after-free issue around
HdfsFsCache::GetConnection. The issue is resolved by changing copy
access to read-only access of the HdfsConnOptions parameter entries.
Testing:
- Pass tmp-file-mgr-test in ASAN build.
Change-Id: I23ae03bf82191cd3cd99f8d4c7cbd99daaa0cfe8
Reviewed-on: http://gerrit.cloudera.org:8080/23742
Reviewed-by: Michael Smith <michael.smith@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
When the environment variable USE_APACHE_HIVE is set to true, Impala is
built to adapt to Apache Hive 3.x. To better distinguish it from Apache
Hive 2.x later, this renames USE_APACHE_HIVE to USE_APACHE_HIVE_3.
Additionally, to facilitate referencing different versions of the Hive
MetastoreShim, the major version of Hive has been added to the
environment variable IMPALA_HIVE_DIST_TYPE.
Change-Id: I11b5fe1604b6fc34469fb357c98784b7ad88574d
Reviewed-on: http://gerrit.cloudera.org:8080/21724
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
IMPALA-12893 upgraded CDP_BUILD_NUMBER to 71942734, which upgrades the
Ozone version to 1.4.0.7.3.1.500-182. This newer Ozone version no longer
includes WAREHOUSE_PREFIX in its trash path.
This patch fixes the broken tests in test_ddl.py by updating the
expected trash path.
Testing:
Ran and passed metadata/test_ddl.py in an Ozone environment.
Change-Id: If1271a399d4eb82fed9b073b99d9a7b2c18a03b1
Reviewed-on: http://gerrit.cloudera.org:8080/23734
Reviewed-by: Michael Smith <michael.smith@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
test_show_create_table_with_stats is flaky because inconsistent metadata
is not handled/retried correctly on the coordinator side. This patch
deflakes it by retrying if InconsistentMetadataFetchException is caught.
This patch also fixes some flake8 warnings in test_show_create_table.py,
including an unused 'vector' parameter in several tests.
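The retry shape is roughly (attempt count and matching are illustrative):

  for attempt in range(3):
    try:
      result = self.execute_query("show create table %s" % table_name)
      break
    except Exception as e:
      # Retry a bounded number of times on inconsistent metadata fetches;
      # re-raise anything else or the final failure.
      if "InconsistentMetadataFetchException" not in str(e) or attempt == 2:
        raise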
Testing:
Looped and passed test_show_create_table_with_stats 10 times.
Change-Id: I397b9502d92bfd756929be8e851661fd9246dd5e
Reviewed-on: http://gerrit.cloudera.org:8080/23728
Reviewed-by: Quanlong Huang <huangquanlong@gmail.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
IMPALA-14569 introduced a test that asserts on a profile row like
'HDFS partitions', but test environments may run on a different storage
system. This change omits the storage type from the row_regex.
Change-Id: If9b223f2be2dfe7be8724423fefdfb56ffeeba6e
Reviewed-on: http://gerrit.cloudera.org:8080/23727
Reviewed-by: Riza Suminto <riza.suminto@cloudera.com>
Tested-by: Riza Suminto <riza.suminto@cloudera.com>
This change introduces a utility method FormatPermissions() that
converts mode_t permission bits into a human-readable string
(e.g., "drwxrwxrwt"). It correctly handles file type indicators,
owner/group/other read-write-execute bits, and special bits
such as setuid, setgid, and sticky.
This improves log readability and debugging for file metadata-related
operations by providing consistent, ls-style permission formatting.
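For reference, Python's stat.filemode() performs the analogous
conversion, which is handy for sanity-checking expected strings (the new
utility itself is a backend C++ function):

  import stat

  # 0o040000 (directory) | 0o1777 (rwx for all plus sticky bit)
  print(stat.filemode(0o041777))  # -> "drwxrwxrwt"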
Testing:
- Added unit tests validating permission string output for:
- Regular files, directories, symlinks, sockets
- All rwx combinations for user/group/other
- setuid, setgid, and sticky bit behavior
Change-Id: Ib53dbecd5c202e33b6e3b5cd3a372a77d8b1703a
Reviewed-on: http://gerrit.cloudera.org:8080/23714
Reviewed-by: Riza Suminto <riza.suminto@cloudera.com>
Reviewed-by: Michael Smith <michael.smith@cloudera.com>
Tested-by: Michael Smith <michael.smith@cloudera.com>
This fixes an IllegalStateException in HdfsPartitionPruner when
evaluating 'IN' predicates that involve two compatible types, for
example DATE and STRING: date_col in (<date as string>).
Previously, 'canEvalUsingPartitionMd' did not check if the slot type
matched the literal type. This caused the frontend to attempt invalid
comparisons via 'LiteralExpr.compareTo', leading to
IllegalStateException or incorrect pruning.
The fix ensures 'canEvalUsingPartitionMd' returns false on type
mismatches, deferring evaluation to the backend where proper casting
occurs.
Testing:
- Added regression test in hdfs-partition-pruning.test.
Change-Id: Idc226a628c8df559329a060cb963b81e27e21eda
Reviewed-on: http://gerrit.cloudera.org:8080/23706
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
The code in span-manager.cc contains aggressive DCHECKS that rely on
the query lifecycle to be deterministic. In reality, the query
lifecycle is not completely deterministic due to multiple threads
being involved in execution, result retrieval, query shutdown, etc.
On debug builds only, a new flag named otel_trace_exhaustive_dchecks
will be available with a default of 'false'. If set to 'true', then
optional DCHECKs will be enabled in the SpanManager class to enable
identification of edge cases where the query lifecycle proceeds in an
unexpected way.
The DCHECKs that are controlled by the new flag are those that rely
on a specific ordering of start/end child span and add child span
event calls.
Change-Id: Id6507f3f0e23ecf7c2bece9a6b6c2d86bfac1e57
Reviewed-on: http://gerrit.cloudera.org:8080/23518
Reviewed-by: Michael Smith <michael.smith@cloudera.com>
Reviewed-by: Riza Suminto <riza.suminto@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
In some cases users delete files directly from storage without
going through the Iceberg API, e.g. they remove old partitions.
This corrupts the table, and makes queries that try to read the
missing files fail.
This change introduces a repair statement that deletes the
dangling references of missing files from the metadata.
Note that the table cannot be repaired if there are missing
delete files, because Iceberg's DeleteFiles API, which is used
to execute the operation, only allows removing data files.
Testing:
- E2E
- HDFS
- S3, Ozone
- analysis
Change-Id: I514403acaa3b8c0a7b2581d676b82474d846d38e
Reviewed-on: http://gerrit.cloudera.org:8080/23512
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Fixes several issues with the OpenTelemetry tracing startup flags:
1. otel_trace_beeswax -- Removes this hidden flag which enabled
tracing of queries submitted over Beeswax. Since this protocol is
deprecated and no tests assert the traces generated by Beeswax
queries, this flag was removed to eliminate an extra check when
determining if OpenTelemetry tracing should be enabled.
2. otel_trace_tls_minimum_version -- Fixes parsing of this flag's
value. This flag is in the format "tlsv1.2" or "tlsv1.3", but the
OpenTelemetry C++ SDK expects the minimum TLS version to be in the
format "1.2" or "1.3". The code now removes the "tlsv" prefix before
passing the value to the OpenTelemetry C++ SDK (see the sketch after
this list).
3. otel_trace_tls_insecure_skip_verify -- Fixes the guidance to only
set this flag to true in dev/testing.
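The version normalization from item 2 boils down to (sketch):

  def normalize_tls_min_version(flag_value):
    # "tlsv1.2" -> "1.2"; values without the prefix pass through unchanged.
    prefix = "tlsv"
    if flag_value.startswith(prefix):
      return flag_value[len(prefix):]
    return flag_value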
Adds ctest tests for the functions that configure the TraceProvider
singleton to ensure startup flags are correctly parsed and applied.
Modifies the http_exporter_config and init_otel_tracer function
signatures in otel.cc to return the actual object they create instead
of a Status since these functions only ever returned OK.
Updates the OpenTelemetry collector docker-compose file to support
the collector receiving traces over both HTTP and HTTPS. This setup
is used to manually smoke test the integration from Impala to an
OpenTelemetry collector.
Change-Id: Ie321fa37c0fd260f783dc6cf47924d53a06d82ea
Reviewed-on: http://gerrit.cloudera.org:8080/23440
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Reviewed-by: Joe McDonnell <joemcdonnell@cloudera.com>
HiveCatalog does not include format-version for Iceberg tables in the
table's parameters; therefore the output of SHOW CREATE TABLE may not
replicate the original table.
This patch makes sure to add it to both the SHOW CREATE TABLE and
DESCRIBE FORMATTED/EXTENDED output.
Additionally, it adds an ICEBERG_DEFAULT_FORMAT_VERSION variable to E2E
tests, deduced from the IMPALA_ICEBERG_VERSION environment variable.
If the Iceberg version is at least 1.4, the default format-version is 2;
before 1.4 it is 1. This way tests can work with multiple Iceberg
versions.
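A sketch of the deduction (version parsing is simplified; the variable
name is from this commit):

  import os

  iceberg_version = os.environ.get('IMPALA_ICEBERG_VERSION', '0.0')
  major, minor = (int(p) for p in iceberg_version.split('.')[:2])
  # Iceberg >= 1.4 defaults to format-version 2, older versions to 1.
  ICEBERG_DEFAULT_FORMAT_VERSION = 2 if (major, minor) >= (1, 4) else 1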
Testing:
* updated show-create-table.test and show-create-table-with-stats.test
for Iceberg tables
* added format-version checks to multiple DESCRIBE FORMATTED tests
Change-Id: I991edf408b24fa73e8a8abe64ac24929aeb8e2f8
Reviewed-on: http://gerrit.cloudera.org:8080/23514
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
The main motivation is to evaluate expensive geospatial
functions (which are Java functions) last in predicates.
Java functions have a major overhead anyway from the JNI
call, so bumping all Java function costs seems beneficial.
Note that currently geospatial functions are the only
built-in Java functions.
Change-Id: I11d1652d76092ec60af18a33502dacc25b284fcc
Reviewed-on: http://gerrit.cloudera.org:8080/22733
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
This adds the java/impala-package Maven project to make it easier
to ship / test the Calcite planner. impala-package has a dependency
on impala-frontend and calcite-planner, so constructing its classpath
requires no extra work.
An additional cleanup is that this no longer puts the
impala-frontend-*-tests.jar on the classpath by default. This requires
updating the query event hooks test, as it relies on that jar being
present.
This does not change the default value for the use_calcite_planner
query option, so there is no change in behavior.
Testing:
- Ran a core job
- Built docker images and OS packages locally
Change-Id: I81dec2a5b59e279229a735c8bb1a23c77111a793
Reviewed-on: http://gerrit.cloudera.org:8080/23497
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
When the --use_calcite_planner=true option is set at the server level,
the queries will no longer go through CalciteJniFrontend. Instead, they
will go through the regular JniFrontend, which is the path that is used
when the query option for "use_calcite_planner" is set.
The CalciteJniFrontend will be removed in a later commit.
This commit also enables fallback to the original planner when an
unsupported-feature exception is thrown. This needed to be added to
allow the tests to run properly. During initial database load, there are
queries that access complex columns, which throw the unsupported
exception.
Change-Id: I732516ca8f7ea64f73484efd67071910c9b62c8f
Reviewed-on: http://gerrit.cloudera.org:8080/23523
Reviewed-by: Steve Carlin <scarlin@cloudera.com>
Tested-by: Steve Carlin <scarlin@cloudera.com>
Redhat 9 environments recently switched to OpenSSL 3.5.1. On those
machines, the Kudu minicluster fails to start up with a CSR signature
verification error. KUDU-3716 fixed this issue.
This patch updates the toolchain and Kudu versions to pick up KUDU-3716.
Testing:
Passed data loading in Redhat 9.
Change-Id: I7262267939a9f08650af85443240950afbb3323f
Reviewed-on: http://gerrit.cloudera.org:8080/23697
Reviewed-by: Joe McDonnell <joemcdonnell@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
This modifies bin/single_node_perf_run.py to stop using the sh
python package. It replaces sh with calls to subprocess. It
stops installing sh for both the Python 2 and 3 virtualenvs.
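The replacement pattern is roughly (illustrative; actual call sites in
the script vary):

  import subprocess

  commit_sha = "HEAD~1"  # placeholder for whatever ref the script uses

  # Before: sh.git("checkout", commit_sha)
  subprocess.check_call(["git", "checkout", commit_sha])

  # Before: head = sh.git("rev-parse", "HEAD")
  head = subprocess.check_output(["git", "rev-parse", "HEAD"],
                                 universal_newlines=True).strip()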
Testing:
- Ran perf-AB-test job with it and examined the logs
Change-Id: Ic5f9316a5d83c5c0dc37d4a94c55b6a655765fe3
Reviewed-on: http://gerrit.cloudera.org:8080/23600
Reviewed-by: Riza Suminto <riza.suminto@cloudera.com>
Reviewed-by: Jason Fehr <jfehr@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
On Python 3, when Impyla receives a result with a string that is
not valid UTF-8, it returns it as bytes. TPC-DS Q30 on scale 20
has a result that contains invalid UTF-8, so bin/run-workload.py
can fail while trying to dump this to JSON.
This modifies CustomJSONEncoder to handle serializing bytes by
converting them to a string, with invalid Unicode handled with
backslash escapes.
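A sketch of the encoder change (class name is from this commit; the
error-handling strategy is the key point):

  import json

  class CustomJSONEncoder(json.JSONEncoder):
    def default(self, obj):
      if isinstance(obj, bytes):
        # Decode invalid UTF-8 with backslash escapes instead of failing.
        return obj.decode('utf-8', errors='backslashreplace')
      return json.JSONEncoder.default(self, obj)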
Testing:
- Ran bin/run-workload.py against TPC-DS scale 20
Change-Id: Ibe31c656de4fc65f8580c7b3b49bf655b8a5ecea
Reviewed-on: http://gerrit.cloudera.org:8080/23602
Reviewed-by: Riza Suminto <riza.suminto@cloudera.com>
Reviewed-by: Jason Fehr <jfehr@cloudera.com>
Tested-by: Joe McDonnell <joemcdonnell@cloudera.com>
This patch adds benchmarks for the Byte Stream Split encoding. It
compares different ways to use the decoder.
I added benchmarks for the following comparisons:
* Compile VS Runtime initialized decoder
* Float VS Int VS Double VS Long VS 6 and 11 byte size types
* Repeating VS Sequential VS Random ordered data
* Decoding one by one VS in batch VS with stride (!= byte_size)
* Small VS Medium (10x small) VS Large (100x small) stride
Conclusions:
* Passing the byte size as a template parameter is almost 5 times
as fast as passing it in the constructor.
* The size of the type heavily influences the speed
* The data variation doesn't influence the speed at all
* Reading values in batch is much faster than one-by-one
* The stride sizes have a small influence on the speed
For more details and graphs, go to
https://docs.google.com/spreadsheets/d/129LwvR6gpZInlRhlVWktn6Haugwo_fnloAAYfI0Qp2s
Change-Id: I708af625348b0643aa3f37525b8a6e74f0c47057
Reviewed-on: http://gerrit.cloudera.org:8080/23401
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
This commit is a fix on top of IMPALA-14405 for the Calcite
planner. The original commit matches column names from the
expression in the select clause.
For instance, if the query is "select 1 + 1", the label in
impala-shell will be "1 + 1". It accomplished this by
retrieving the string from the SqlNode object through the
MySql dialect.
However, when the expression doesn't succeed in the MySql
dialect, an AssertionError gets thrown, causing the query to
fail. We don't want the query to fail, we just want to go
back to using the Calcite expression, e.g. EXPR$0. This
occurred with this specific query:
"select timestamp_col + interval 3 nanoseconds"
So now the exception is caught and the default label name
is used. Eventually we should try to match what Impala has,
but this is a harder problem to fix.
Change-Id: I6c4d76a25fb2486eb1ef19485bce7888d45d282f
Reviewed-on: http://gerrit.cloudera.org:8080/23665
Reviewed-by: Riza Suminto <riza.suminto@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Reviewed-by: Steve Carlin <scarlin@cloudera.com>
Currently, Hive ACID stress tests run with the "core" exploration
strategy. It was important to get instant feedback about this feature
while it was actively developed. Since then, development activity around
Hive ACID has decreased significantly, as the focus shifted towards
Iceberg.
This patch moves the Hive ACID tests to the exhaustive tests, where they
will still be executed regularly but won't slow down pre-commit tests.
Change-Id: Id7181fea62e2e3f8fcf7897a70e54a1708ef3f3e
Reviewed-on: http://gerrit.cloudera.org:8080/23677
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>