Support row_regex and other special lines for the subset and superset
verifiers, which previously assumed that lines in the actual and
expected output had to match exactly.
This is used in test_stats_extrapolation to make that test more robust
to irrelevant changes in the explain plan.
Testing:
Manually modified a superset and a subset test to check that tests fail
as expected.
Change-Id: Ia7a28d421c8e7cd84b14d07fcb71b76449156409
Reviewed-on: http://gerrit.cloudera.org:8080/10155
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Thrift 0.9.3 implements "ostream& operator<<(ostream&, T)" for thrift
data types, while Impala did the same for its enums and special types
including TNetworkAddress and TUniqueId. To prepare for the upgrade to
Thrift 0.9.3, this patch renames these Impala-defined functions. In the
absence of operator<<, assertion macros like DCHECK_EQ can no longer be
used on non-enum thrift-defined types.
Change-Id: I9c303997411237e988ef960157f781776f6fcb60
Reviewed-on: http://gerrit.cloudera.org:8080/9168
Reviewed-by: Tianyi Wang <twang@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Adds MAX_MEM_ESTIMATE_FOR_ADMISSION query option, which takes
effect if and only if
* Memory-based admission control is enabled for the pool
* No mem_limit is set (i.e. best practices are not being followed)
In that case min(MAX_MEM_ESTIMATE_FOR_ADMISSION, mem_estimate)
is used for admission control instead of mem_estimate.
This provides a way to override the planner's estimate if
it happens to be incorrect and is preventing the query from
running. Setting MEM_LIMIT is usually a better alternative,
but sometimes it is not feasible to set MEM_LIMIT for each
individual query.
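For example, a session might look like this (a sketch; the option
value, given in bytes, and the query are illustrative):
-- Cap the estimate used for admission at 8GB; no hard MEM_LIMIT
-- is imposed on the query itself.
set max_mem_estimate_for_admission=8589934592;
select count(*) from tpch.lineitem l join tpch.orders o
  on l.l_orderkey = o.o_orderkey;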
Testing:
Added an admission control test to verify that query option allows
queries with high estimates to run.
Also tested manually on a minicluster started with:
start-impala-cluster.py --impalad_args='-vmodule admission-controller=3 \
-default_pool_mem_limit 12884901888'
Change-Id: Ia5fc32a507ad0f00f564dfe4f954a829ac55d14e
Reviewed-on: http://gerrit.cloudera.org:8080/10058
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
IMPALA-4794 changed the distinct aggregation behavior to shuffle by
both the grouping exprs and the distinct expr. The new plan is slower
in queries where the NDVs of the grouping exprs are high and data is
uniformly distributed among groups. This patch adds a query option
controlling this behavior, letting users switch back to the old plan.
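A usage sketch (SHUFFLE_DISTINCT_EXPRS is the option name as it exists
in the Impala tree; the query is illustrative):
-- Switch back to the pre-IMPALA-4794 plan that shuffles only by the
-- grouping exprs, which helps when their NDVs are high:
set shuffle_distinct_exprs=false;
select ss_store_sk, count(distinct ss_customer_sk)
from tpcds.store_sales group by ss_store_sk;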
Change-Id: Icb4b4576fb29edd62cf4b4ba0719c0e0a2a5a8dc
Reviewed-on: http://gerrit.cloudera.org:8080/9949
Reviewed-by: Tianyi Wang <twang@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
This patch integrates the orc library into Impala and implements
HdfsOrcScanner as a middle layer between them. The HdfsOrcScanner
supplies the input needed by the orc reader, tracks the reader's memory
consumption, and transfers the reader's output (orc::ColumnVectorBatch)
into impala::RowBatch. The ORC version used is release-1.4.3.
A startup option --enable_orc_scanner is added for this feature. It's
set to true by default. Setting it to false will fail queries on ORC
tables.
Currently, only reading primitive types is supported. Writing to ORC
tables is not yet supported.
Tests
- Most of the end-to-end tests can run on ORC format.
- Add tpcds, tpch tests for ORC.
- Add some ORC specific tests.
- Haven't enabled test_scanner_fuzz for ORC yet, since the ORC library
is not robust for corrupt files (ORC-315).
Change-Id: Ia7b6ae4ce3b9ee8125b21993702faa87537790a4
Reviewed-on: http://gerrit.cloudera.org:8080/9134
Reviewed-by: Quanlong Huang <huangquanlong@gmail.com>
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Before this commit it was quite random which DDL operations
returned a result set and which didn't.
With this commit, every DDL operation returns a summary of
its execution. The operations declare their result set schema in
Frontend.java, and provide the summary in CatalogOpExecutor.java.
Updated the tests according to the new behavior.
Change-Id: Ic542fb8e49e850052416ac663ee329ee3974e3b9
Reviewed-on: http://gerrit.cloudera.org:8080/9090
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
This is the compatibility-breaking part of Jinchul Kim's change
to add additional units. To support nanoseconds we need to
widen the output type of these functions. We also change
the meaning of "milliseconds" to include the seconds component.
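A sketch of the new semantics, assuming the affected functions are
extract()/date_part() (the timestamp literal is illustrative):
-- 'millisecond' now includes the seconds component: 5s + 123ms = 5123.
-- The output type is widened so that nanosecond values fit.
select extract(cast('2018-01-01 10:00:05.123' as timestamp),
               'millisecond');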
Cherry-picks: not for 2.x
Change-Id: I42d83712d9bb3a4900bec38a9c009dcf2a1fe019
Reviewed-on: http://gerrit.cloudera.org:8080/9957
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
The underlying issue in IMPALA-6338 causes successful queries that are
cancelled internally (because all results have been returned) to, in
rare cases, have info missing from the profile. This has caused flaky
tests but has low impact on users, and unfortunately, with the current
query lifecycle logic in the coordinator, there is no simple solution.
There is ongoing work to improve query lifecycle logic in the
coordinator holistically, see IMPALA-5384. This work will eventually
address the underlying cause of IMPALA-6338. Until then, we disable
the tests that have been flaky.
Change-Id: Ie30b88fb8fb7780fc3a7153c05fdc3606145ce35
Reviewed-on: http://gerrit.cloudera.org:8080/9822
Reviewed-by: Thomas Tauber-Marshall <tmarshall@cloudera.com>
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Fixes a bug where default-initialized values were introduced into the
set data structure used to check for set membership, which could cause
wrong results.
Testing:
Added a test case that checks for this.
Change-Id: I7e776dbcb7ee4a9b64e1295134a27d332f5415b6
Reviewed-on: http://gerrit.cloudera.org:8080/9891
Reviewed-by: Sailesh Mukil <sailesh@cloudera.com>
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
This patch fixes the NullPointerException in SHOW CREATE TABLE for HBase
tables.
Testing:
- Moved the content of hbase-show-create-table.test back to
show-create-table.test
- Ran show-create-table end-to-end tests
Change-Id: Ibe018313168fac5dcbd80be9a8f28b71a2c0389b
Reviewed-on: http://gerrit.cloudera.org:8080/9884
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
This patch removes the restriction on creating a function with the same
name as a built-in function. The reason for lifting the restriction is
to avoid name clashes when introducing new built-in functions. The
patch also fixes some inconsistent behavior when creating or dropping a
function depending on whether the specified name is fully qualified.
Refer to the below tables for more information.
Create function:
+---------+-------------+-------------------------+-------------------------------+-------------------------------+
| FQ Name | Built-in DB | Function Name           | Existing Behavior             | New Behavior                  |
+---------+-------------+-------------------------+-------------------------------+-------------------------------+
| Yes     | Yes         | Same as built-in        | Same name exception           | Cannot modify system database |
| Yes     | Yes         | Different than built-in | Cannot modify system database | Cannot modify system database |
| Yes     | No          | Same as built-in        | Function created              | Function created              |
| Yes     | No          | Different than built-in | Function created              | Function created              |
| No      | Yes         | Same as built-in        | Same name exception           | Cannot modify system database |
| No      | Yes         | Different than built-in | Cannot modify system database | Cannot modify system database |
| No      | No          | Same as built-in        | Same name exception           | Function created              |
| No      | No          | Different than built-in | Function created              | Function created              |
+---------+-------------+-------------------------+-------------------------------+-------------------------------+
Drop function:
+---------+-------------+-------------------------+-------------------------------+-------------------------------+
| FQ Name | Built-in DB | Function Name           | Existing Behavior             | New Behavior                  |
+---------+-------------+-------------------------+-------------------------------+-------------------------------+
| Yes     | Yes         | Same as built-in        | Cannot modify system database | Cannot modify system database |
| Yes     | Yes         | Different than built-in | Cannot modify system database | Cannot modify system database |
| Yes     | No          | Same as built-in        | Function dropped              | Function dropped              |
| Yes     | No          | Different than built-in | Function dropped              | Function dropped              |
| No      | Yes         | Same as built-in        | Cannot modify system database | Cannot modify system database |
| No      | Yes         | Different than built-in | Cannot modify system database | Cannot modify system database |
| No      | No          | Same as built-in        | Cannot modify system database | Function dropped              |
| No      | No          | Different than built-in | Function dropped              | Function dropped              |
+---------+-------------+-------------------------+-------------------------------+-------------------------------+
Select function (no new behavior):
+---------+-------------+-------------------------+--------------------------------------------------------+
| FQ Name | Built-in DB | Function Name           | Behavior                                               |
+---------+-------------+-------------------------+--------------------------------------------------------+
| Yes     | Yes         | Same as built-in        | Function in the specified database (built-in) executed |
| Yes     | Yes         | Different than built-in | Unknown function exception                             |
| Yes     | No          | Same as built-in        | Function in the specified database executed            |
| Yes     | No          | Different than built-in | Function in the specified database executed            |
| No      | Yes         | Same as built-in        | Built-in function executed                             |
| No      | Yes         | Different than built-in | Unknown function exception                             |
| No      | No          | Same as built-in        | Built-in function executed                             |
| No      | No          | Different than built-in | Function in the current database executed              |
+---------+-------------+-------------------------+--------------------------------------------------------+
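A sketch of the new behavior summarized above (library path and symbol
name are illustrative):
-- Allowed now: a UDF sharing its name with the built-in length().
create function functional.length(string) returns int
location '/test-warehouse/libudf.so' symbol='MyLength';
-- An unqualified reference still resolves to the built-in:
select length('abc');
-- A qualified reference executes the user function:
select functional.length('abc');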
Testing:
- Ran front-end tests
- Added end-to-end DDL function tests
Cherry-picks: not for 2.x
Change-Id: Ic30df56ac276970116715c14454a5a2477b185fa
Reviewed-on: http://gerrit.cloudera.org:8080/9800
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
The patch fixes issues with executing ALTER TABLE SET statements when
there are no matching partitions.
The patch also removes the incorrect precondition, i.e.
(partitionSet == null || !partitionSet.isEmpty()), in ALTER TABLE SET
statements, because a partitionSet can be null when PARTITION is not
specified in the ALTER TABLE SET statement, and a partitionSet can be
empty when there is no matching partition. For example:
Matching partitions (partitionSet != null && !partitionSet.isEmpty()):
> alter table functional.alltypesagg partition(year=2010, month=1)
set fileformat parquet;
No matching partitions (partitionSet != null && partitionSet.isEmpty()):
> alter table functional.alltypesagg partition(year=2009, month=1)
set fileformat parquet;
No partition specified (partitionSet == null):
> alter table functional.alltypesagg set fileformat parquet;
Testing:
- Added a new test
- Ran all front-end tests
Change-Id: I793e827d5cf5b7986bd150dd9706df58da3417f3
Reviewed-on: http://gerrit.cloudera.org:8080/9819
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
A concern was brought up that Impala might not handle kudu master
addresses containing whitespace correctly. Turns out that the Kudu
client takes care of stripping whitespace, so it works, but it would
be good to have a test to ensure it continues to work.
Change-Id: I1857b8dbcb5af66d69f7620368cd3b9b85ae7576
Reviewed-on: http://gerrit.cloudera.org:8080/9876
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
Before this patch, the output type of round(), ceil(), floor() and
trunc() was not always the same as the input type. It was also
inconsistent in general. For example, round(double) returned an
integer, but round(double, int) returned a double.
After looking at other database systems, we decided that the guideline
should be that the output type should be the same as the input type. In
this patch, we change the behavior of the previously mentioned functions
so that if a double is given then a double is returned.
We also modify the rounding behavior to always round away from zero.
Before, we were rounding towards positive infinity in some cases.
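For example, under the new semantics:
select round(cast(2.567 as double), 2);  -- 2.57, returned as DOUBLE
select round(cast(-2.5 as double));      -- -3: away from zero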
Testing:
- Updated tests
- Ran an exhaustive build which passed.
Cherry-picks: not for 2.x
Change-Id: I77541678012edab70b182378b11ca8753be53f97
Reviewed-on: http://gerrit.cloudera.org:8080/9346
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
Adds support for building against two sets of Hadoop ecosystem
components. The control variable is IMPALA_MINICLUSTER_PROFILE_OVERRIDE,
which can either be set to 2 (for Hadoop 2, Hive 1, and so on) or 3 (for
Hadoop 3, Hive 2, and so on).
We intend (in a trivial follow-on change soon) to make 3 the new default
and to explicitly deprecate 2, but this change does not switch the
default yet. We support both to facilitate a smoother transition, but
support for profile 2 will be removed soon in the Impala 3.x line.
The switch is done at build time, following the pattern from IMPALA-5184
(build fe against both Hive 1 & 2 APIs). Switching back and forth
requires running 'cmake' again. Doing this at build-time avoids
complicating the Java code with classloader configuration.
There are relatively few incompatible APIs. This implementation
encapsulates that by extracting some Java code into
fe/src/compat-minicluster-profile-{2,3}. (This follows the pattern
established by IMPALA-5184, but, to avoid a proliferation of
directories, I've moved the Hive files into the same tree,
consolidating the Hive changes into the same directory structure.)
For Maven, I introduced Maven "profiles" to handle the two cases where
the dependencies (and exclusions) differ. These are driven by the
$IMPALA_MINICLUSTER_PROFILE environment variable.
For Sentry, exception class names changed. We work around this by adding
"isSentry...(Exception)" methods with two different implementations.
Sentry is also doing some odd shading, whereby some exceptions are
"sentry.org.apache.sentry..."; we handle both. Similarly, the mechanism
to create a SentryAuthProvider is slightly different. The easiest way to
see the differences is to run:
diff -u fe/src/compat-minicluster-profile-{2,3}/java/org/apache/impala/util/SentryUtil.java
diff -u fe/src/compat-minicluster-profile-{2,3}/java/org/apache/impala/authorization/SentryAuthProvider.java
The Sentry work is based on a change by Zach Amsden.
In addition, we recently added an explicit "refresh" permission. In
Sentry 2, this required creating an ImpalaPrivilegeModel to capture
that. It's a slight customization of Hive's equivalent class.
For Parquet, the difference is even more mechanical. The package names
changed from "parquet" to "org.apache.parquet". The affected code
was extracted into ParquetHelper, but only one copy exists. The second
copy is generated at build time using sed.
In the rare cases where we need to behave differently at runtime,
MiniclusterProfile.MINICLUSTER_PROFILE is a class which encapsulates
what version we were built against. One of the cases is the results
expected by various frontend tests. I avoided the issue by translating
one error string into another, which handled the divergence in one
place, rather than complicating the several locations which look for
"No FileSystem for scheme..." errors.
The HBase APIs we use for splitting regions at test time changed.
This patch includes a re-write of that code for the new APIs. This
piece was contributed by Zach Amsden.
To work with newer versions of dependencies, I updated the version of
httpcomponents.core we use to 4.4.9.
We (Thomas Tauber-Marshall and I) uploaded new Hadoop/Hive/Sentry/HBase
binaries to s3://native-toolchain, and amended the shell scripts to
launch the right things. There are minor mechanical differences. Some
of this was based on earlier work by Joe McDonnell and Zach Amsden.
Hive's logging changed in Hive 2, necessitating creating a
log4j2.properties template and using it appropriately. Furthermore,
Hadoop 3's new shell script rewrites do a certain amount of classpath
de-duplication, causing some issues with locating the relevant logging
configurations. Accommodations exist in the code to deal with that.
parquet-filtering.test was updated to turn off stats filtering. Older
Hive didn't write Parquet statistics, but newer Hive does. By turning
off stats filtering, we test what the test had intended to test.
For views-compatibility.test, it seems that Hive 2 has fixed certain
bugs that we were testing for in Hive. I've added a
HIVE=SUCCESS_PROFILE_3_ONLY mechanism to capture that.
For AuthorizationTest, different hive versions show slightly different
things for extended output.
To facilitate easier reviewing, the following files are 100% renames as identified by git; nothing
to see here.
rename fe/src/{compat-hive-1 => compat-minicluster-profile-2}/java/org/apache/hive/service/rpc/thrift/TGetCatalogsReq.java (100%)
rename fe/src/{compat-hive-1 => compat-minicluster-profile-2}/java/org/apache/hive/service/rpc/thrift/TGetColumnsReq.java (100%)
rename fe/src/{compat-hive-1 => compat-minicluster-profile-2}/java/org/apache/hive/service/rpc/thrift/TGetFunctionsReq.java (100%)
rename fe/src/{compat-hive-1 => compat-minicluster-profile-2}/java/org/apache/hive/service/rpc/thrift/TGetInfoReq.java (100%)
rename fe/src/{compat-hive-1 => compat-minicluster-profile-2}/java/org/apache/hive/service/rpc/thrift/TGetSchemasReq.java (100%)
rename fe/src/{compat-hive-1 => compat-minicluster-profile-2}/java/org/apache/hive/service/rpc/thrift/TGetTablesReq.java (100%)
rename fe/src/{compat-hive-1 => compat-minicluster-profile-2}/java/org/apache/impala/compat/MetastoreShim.java (100%)
rename fe/src/{compat-hive-2 => compat-minicluster-profile-3}/java/org/apache/impala/compat/MetastoreShim.java (100%)
rename testdata/cluster/node_templates/{cdh5 => common}/etc/hadoop/conf/kms-acls.xml.tmpl (100%)
rename testdata/cluster/node_templates/{cdh5 => common}/etc/hadoop/conf/kms-site.xml.tmpl (100%)
rename testdata/cluster/node_templates/{cdh5 => common}/etc/hadoop/conf/yarn-site.xml.tmpl (100%)
rename testdata/cluster/node_templates/{cdh5 => common}/etc/init.d/kudu-common (100%)
rename testdata/cluster/node_templates/{cdh5 => common}/etc/init.d/kudu-master (100%)
rename testdata/cluster/node_templates/{cdh5 => common}/etc/init.d/kudu-tserver (100%)
rename testdata/cluster/node_templates/{cdh5 => common}/etc/kudu/master.conf.tmpl (100%)
rename testdata/cluster/node_templates/{cdh5 => common}/etc/kudu/tserver.conf.tmpl (100%)
CreateTableLikeFileStmt had a chunk of code moved to ParquetHelper.java. This
was done manually, but without changing anything except what Java required in
terms of accessibility and boilerplate.
rewrite fe/src/main/java/org/apache/impala/analysis/CreateTableLikeFileStmt.java (80%)
copy fe/src/{main/java/org/apache/impala/analysis/CreateTableLikeFileStmt.java => compat-minicluster-profile-3/java/org/apache/impala/analysis/ParquetHelper.java} (77%)
Testing: Ran core & exhaustive tests with both profiles.
Cherry-picks: not for 2.x.
Change-Id: I7a2ab50331986c7394c2bbfd6c865232bca975f7
Reviewed-on: http://gerrit.cloudera.org:8080/9716
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
Impala already supported RLE encoding for levels and dictionary pages, so
the only task was to integrate it into BoolColumnReader.
A new benchmark, rle-benchmark.cc, is added to test the speed of RLE
decoding for different bit widths and run lengths.
There might be a small performance impact on PLAIN encoded booleans,
because of the additional branch when the cache of BoolColumnReader is
filled. As the cache size is 128, I considered this to be outside the
"hot loop".
Testing:
As Impala cannot write RLE-encoded bool columns at the moment,
parquet-mr was used to create a test file,
testdata/data/rle_encoded_bool.parquet.
tests/query_test/test_scanners.py#test_rle_encoded_bools creates a
table that uses this file and tries to query from it.
Change-Id: I4644bf8cf5d2b7238b05076407fbf78ab5d2c14f
Reviewed-on: http://gerrit.cloudera.org:8080/9403
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
Currently, when using a SET_LOOKUP strategy for in-predicates in Impala,
we use a std::set object for checking membership. This patch takes a
hybrid approach based on benchmarking results and uses boost::flat_set
for the int, big int, and float datatypes and boost::unordered_set for
the rest (tiny int, small int, double, string, timestamp, decimal).
The intent of this change is to fix a regression when upgrading the
toolchain to use LLVM 5.0.1 (IMPALA-5980).
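The affected query shape is a large IN list evaluated via SET_LOOKUP,
e.g. (values illustrative and truncated):
select count(*) from tpch100_parquet.lineitem
where l_partkey in (1, 2, 3, 5, 8, 13 /* ... ~500 literals */);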
Performance:
Ran a query for each data type with a large in predicate containing
500 elements on a single node with mt_dop set to 1.
+-----------+---------------+----------+---------------+----------+
| Data Type | Llvm 3 hybrid | Llvm 3   | Llvm 5 hybrid | Llvm 5   |
+-----------+---------------+----------+---------------+----------+
| Table used: tpch100_parquet.lineitem                            |
+-----------+---------------+----------+---------------+----------+
| big int   | 17s782ms      | 13s941ms | 13s201ms      | 25s604ms |
| string    | 40s750ms      | 64s      | 40s723ms      | 73s      |
| decimal   | 13s929ms      | 22s272ms | 13s710ms      | 34s338ms |
| int       | 19s368ms      | 11s308ms | 9s169ms       | 15s254ms |
+-----------+---------------+----------+---------------+----------+
| Table used: alltypes with 33638400 rows                         |
+-----------+---------------+----------+---------------+----------+
| double    | 5s726ms       | 5s894ms  | 5s595ms       | 6s592ms  |
| small int | 4s776ms       | 5s057ms  | 4s740ms       | 5s358ms  |
| float     | 7s223ms       | 6s397ms  | 6s287ms       | 6s926ms  |
+-----------+---------------+----------+---------------+----------+
Also added a targeted perf query that uses a large in-predicate
over a decimal column.
Testing:
- Ran expr-test and test_exprs successfully.
Change-Id: Ifd1627d779d10a16468cc3c2d0bc26a497e048df
Reviewed-on: http://gerrit.cloudera.org:8080/9570
Reviewed-by: Bikramjeet Vig <bikramjeet.vig@cloudera.com>
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Impala Public Jenkins
The DCHECK was only valid if the Parquet file metadata was internally
consistent, with the number of values reported by the metadata
matching the number of encoded levels.
The DCHECK was intended to directly detect misuse of the RleBatchDecoder
interface, which would lead to incorrect results. However, our other
test coverage for reading Parquet files is sufficient to test the
correctness of level decoding.
Testing:
Added a minimal corrupt test file that reproduces the issue.
Change-Id: Idd6e09f8c8cca8991be5b5b379f6420adaa97daa
Reviewed-on: http://gerrit.cloudera.org:8080/9556
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
The bug was that the SortInfo of analytics was given
ordering exprs that were not fully resolved against their
input (e.g. inline views were not resolved).
As a result, the SortInfo logic did not materialize exprs
like rand() coming from inline views.
The fix is to pass fully resolved exprs to the analytic
SortInfo, and then the existing materialization logic
properly handles non-deterministic built-ins and UDFs.
The code around sort generation was rather convoluted
and difficult to understand. I overhauled SortInfo to
unify its different uses under a common codepath.
After that cleanup, the fix for this issue was trivial.
Testing:
- Locally ran planner tests
- Locally ran analytic EE tests in test_queries.py
- Core/hdfs run passed
Change-Id: Id2b3f4e5e3f1fd441a63160db3c703c432fbb072
Reviewed-on: http://gerrit.cloudera.org:8080/9631
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
Before Kudu supported DECIMAL columns, the TPCDS and TPCH
columns were adjusted to use DOUBLE in place of DECIMAL. This
patch undoes that change now that Kudu supports DECIMAL.
Testing:
- Updated concurrent_select.py
- Updated test_tpch_queries.py
- Exercised by the Kudu planner tests
Change-Id: I2f7e4464dc6705cadd610a82c459390a9c0dfe4f
Reviewed-on: http://gerrit.cloudera.org:8080/9484
Reviewed-by: Thomas Tauber-Marshall <tmarshall@cloudera.com>
Tested-by: Impala Public Jenkins
This change allows casting of a string in 'lazy' date/time
format to timestamp. The supported lazy date formats are:
yyyy-[M]M-[d]d
yyyy-[M]M-[d]d [H]H:[m]m:[s]s[.SSSSSSSSS]
[H]H:[m]m:[s]s[.SSSSSSSSS]
We will incur a SCAN performance penalty (approximately 1/2
TotalReadThroughput) when the string is in one of these
lazy date/time formats.
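For example, each lazy format now casts successfully:
select cast('2018-1-5' as timestamp);           -- date only
select cast('2018-1-5 7:5:3.25' as timestamp);  -- date and time
select cast('7:5:3' as timestamp);              -- time only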
Testing:
Benchmarked the performance consequence by executing this SQL on
a private build over 3.8 billion rows:
select min(cast (time_string as timestamp)) from private.impala_5315
Added tests for valid and invalid date/time format strings
in expr-test.cc, in line with existing tests for the CAST() function.
Added end-to-end tests into exprs.test and
select-lazy-timestamp.test to exercise the new function within
the context of a query.
Added tests to exercise the leading and trailing white space trimming
behaviour in default and lazy date/time string format (IMPALA-6630).
Change-Id: Ib9a184a09d7e7783f04d47588537612c2ecec28f
Reviewed-on: http://gerrit.cloudera.org:8080/7009
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
This patch enables pushing scan predicates on
DECIMAL columns down to Kudu.
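A sketch of predicates that now reach the Kudu scan instead of being
evaluated inside Impala (table and column names are illustrative):
select count(*) from kudu_tbl where price < 100.00;
select count(*) from kudu_tbl where price in (9.99, 19.99);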
Testing:
- Added Planner decimal predicate test to kudu.test
- Added Planner decimal in-list test to kudu-selectivity.test
Change-Id: I2569a9e1d58f1c58884d58633d46348364888ed7
Reviewed-on: http://gerrit.cloudera.org:8080/9578
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
We've observed empirically that giving Impala 80% of system memory
doesn't leave enough room for the minicluster and ASAN overhead, leading
to the OOM killer striking during test runs (sometimes). This commit
reduces the threshold to 70%.
This commit also reduces the memory usage of semi-joins-exhaustive.test
by roughly halving the number of records it deals with. This was
necessary for tests to pass on a machine with 32GB of RAM.
Testing: I've run the ASAN build (more) happily with this change.
I've run exhaustive tests on a 32GB machine.
Change-Id: Iabca7a95560bd27c2de2b0a147ee9a3c45199db7
Reviewed-on: http://gerrit.cloudera.org:8080/9395
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
Quick fix of the Parquet write path until the Parquet community
agrees on the ordering of floating point numbers.
The behavior follows the way fmax()/fmin() work, i.e. Impala
will only write NaN into the stats when all the values are NaNs.
This behavior is aligned with the quick fix in Parquet-CPP.
Added e2e tests as well.
Change-Id: I3957806948f7c661af4be5495f2ec92d1e9fc9d6
Reviewed-on: http://gerrit.cloudera.org:8080/9381
Reviewed-by: Lars Volker <lv@cloudera.com>
Tested-by: Impala Public Jenkins
IMPALA-6592 revealed a gap in test coverage for files with
invalid/unsupported Parquet codecs. This adds a test that reproduces the
bug that was present in my IMPALA-4835 patch. master is unaffected by
this bug.
I also hid the conversion tables and made the conversion go through
functions that validate the enum values, to make it easier to track down
problems like this in the future.
Testing:
Ran exhaustive tests.
Change-Id: I1502ea7b7f39aa09f0ed2677e84219b37c64c416
Reviewed-on: http://gerrit.cloudera.org:8080/9500
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
The bug is that the right child of a blocking join node could be closed
before the builder if an error was encountered when sending a batch to
the sink. This hits a DCHECK because Buffers owned by the sink may still
be accounted against the child node.
Testing:
Added the test that originally triggered the problem. It reproduced the
failure when based on the IMPALA-4835 patch, but I can't reproduce
the failure after rebase onto master.
Change-Id: Ie46b87a4889d7cee907124796c830db41125cf15
Reviewed-on: http://gerrit.cloudera.org:8080/9493
Tested-by: Impala Public Jenkins
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Before this patch, when there was an error converting a string to
a decimal, NULL was returned. In this patch, we change this behavior
so that an error is returned if decimal_v2 is enabled. We also add a
warning if there is an underflow.
The reasoning is that we want stricter behavior in decimal_v2.
Testing:
- Added some EE tests.
- Ran an exhaustive build, which passed.
Change-Id: Icffccac1c1c2361447ae4b0de9b6c2ec7de071db
Reviewed-on: http://gerrit.cloudera.org:8080/9339
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Impala Public Jenkins
Revert "IMPALA-6585: increase test_low_mem_limit_q21 limit"
This reverts commit 25bcb258df.
Revert "IMPALA-6588: don't add empty list of ranges in text scan"
This reverts commit d57fbec6f6.
Revert "IMPALA-4835: Part 3: switch I/O buffers to buffer pool"
This reverts commit 24b4ed0b29.
Revert "IMPALA-4835: Part 2: Allocate scan range buffers upfront"
This reverts commit 5699b59d0c.
Revert "IMPALA-4835: Part 1: simplify I/O mgr mem mgmt and cancellation"
This reverts commit 65680dc421.
Change-Id: Ie5ca451cd96602886b0a8ecaa846957df0269cbb
Reviewed-on: http://gerrit.cloudera.org:8080/9480
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Impala Public Jenkins
This is the final patch to switch the Disk I/O manager to allocating
all buffers from the buffer pool and to reserving the buffers required
for a query upfront.
* The planner reserves enough memory to run a single scanner per
scan node.
* The multi-threaded scan node must increase reservation before
spinning up more threads.
* The scanner implementations must be careful to stay within their
assigned reservation.
The row-oriented scanners were most straightforward, since they only
have a single scan range active at a time. A single I/O buffer is
sufficient to scan the whole file but more I/O buffers can improve I/O
throughput.
Parquet is more complex because it issues a scan range per column and
the sizes of the columns on disk are not known during planning. To
deal with this, the reservation in the frontend is based on a
heuristic involving the file size and # columns. The Parquet scanner
can then divvy up reservation to columns based on the size of column
data on disk.
I adjusted how the 'mem_limit' is divided between buffer pool and
non-buffer-pool memory for low mem_limits to account for the increase
in buffer pool memory.
Testing:
* Added more planner tests to cover reservation calcs for scan node.
* Test scanners for all file formats with the reservation denial debug
action, to test behaviour when the scanners hit reservation limits.
* Updated memory and buffer pool limits for tests.
* Added unit tests for dividing reservation between columns in parquet,
since the algorithm is non-trivial.
Perf:
I ran TPC-H and targeted perf locally comparing with master. Both
showed small improvements of a few percent and no regressions of
note. Cluster perf tests showed no significant change.
Change-Id: Ic09c6196b31e55b301df45cc56d0b72cfece6786
Reviewed-on: http://gerrit.cloudera.org:8080/8966
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
Impala crashes on creating a UDF from a shared library (.so file) which
was renamed to have a .ll extension. The CreateFile() call in
GetSymbols() fails, returns on error, and does not close the codegen
object. This patch closes the codegen object on failure, which avoids
hitting a DCHECK later up the stack.
The chain of failures also invokes the DiagnosticHandlerFn. The
RuntimeState object is NULL when the DiagnosticHandlerFn gets called in
this case. This change also adds a NULL check before accessing it for
logging.
[localhost:21000] > create function foo4 (string, string) returns string
location '/tmp/bad_udf.ll' symbol='MyAwesomeUdf';
Query: create function foo4 (string, string) returns string location
'/tmp/bad_udf.ll' symbol='MyAwesomeUdf'
ERROR: AnalysisException: Could not load binary: /tmp/bad_udf.ll
LLVM diagnostic error: Invalid bitcode signature
Change-Id: Id060668802ca9c80367cdc0e8a823b968d549bbb
Reviewed-on: http://gerrit.cloudera.org:8080/9154
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
Adds support for the Kudu DECIMAL type introduced in Kudu 1.7.0.
Note: Adding support for Kudu decimal min/max filters is
tracked in IMPALA-6533.
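A minimal sketch of the new type support (names are illustrative):
create table kudu_dec (id bigint, price decimal(10,2),
  primary key (id))
partition by hash (id) partitions 3 stored as kudu;
insert into kudu_dec values (1, 9.99);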
Tests:
* Added Kudu create with decimal test to AnalyzeDDLTest.java
* Added Kudu table_format to test_decimal_queries.py
** Both decimal.test and decimal-exprs.test workloads
* Added decimal queries to the following Kudu workloads:
** kudu_create.test
** kudu_delete.test
** kudu_insert.test
** kudu_update.test
** kudu_upsert.test
Change-Id: I3a9fe5acadc53ec198585d765a8cfb0abe56e199
Reviewed-on: http://gerrit.cloudera.org:8080/9368
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Tested-by: Impala Public Jenkins
This change adds support for "clustered", "noclustered", "shuffle" and
"noshuffle" hints in CTAS statement.
Example:
create /*+ clustered,noshuffle */ table t partitioned by (year, month)
as select * from functional.alltypes
The effect of these hints is the same as in insert statements:
clustered:
Sort locally by partition expression before insert to ensure that only
one partition is written at a time. The goal is to reduce the number of
files kept open / buffers kept in memory simultaneously.
noclustered:
Do not sort by primary key before inserting into a Kudu table. No
effect on HDFS tables currently, as this is their default behavior.
shuffle:
Forces the planner to add an exchange node that repartitions by the
partition expression of the output table. This means that a partition
will be written only by a single node, which minimizes the global
number of simultaneous writes.
If only one partition is written (because all partitioning columns
are constant or the target table is not partitioned), then the shuffle
hint leads to a plan where all rows are merged at the coordinator where
the table sink is executed.
noshuffle:
Do not add an exchange node before inserting into partitioned tables.
The parser needed some modifications to be able to differentiate between
CREATE statements that allow hints (CTAS), and CREATE statements that
do not (every other type of CREATE statement). As a result, KW_CREATE
was moved from tbl_def_without_col_defs to the statement rules.
Testing:
The parser tests mirror the tests of INSERT, while analysis and planner
tests are minimal, as the underlying logic is the same as for INSERT.
Query tests are not created, as the hints have no effect on
the DDL part of CTAS, and the actual query run is the same as in
the insert case.
Change-Id: I8d74bca999da8ae1bb89427c70841f33e3c56ab0
Reviewed-on: http://gerrit.cloudera.org:8080/8400
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
If the first number in a row group written by Impala is NaN,
then Impala writes incorrect statistics in the metadata.
This will result in incorrect results when filtering the
data.
This commit fixes the read path when encountering NaNs in
Parquet min/max statistics. If min and max are both NaN, we
can't use the statistics at all. If only one of them is NaN,
the other still can be used.
I added some tests to QueryTest/parquet-stats.test
Change-Id: If3897fc1426541239223670812f59e2bed32f455
Reviewed-on: http://gerrit.cloudera.org:8080/9358
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
The default value for rpc_max_message_size is 50MB.
Impala currently requires support for messages of
up to 2GB. This changes the value of rpc_max_message_size
to INT_MAX for Impala.
Testing:
- Added a test to test_very_large_strings that generates
a row with multiple large strings. This row requires
that the RPC framework successfully transmit over
400MB. This works for both KRPC and Thrift.
This query operates under the same amount of memory
as other queries in large_strings.test.
- Tested separately that larger row sizes also work,
including tests up to almost 2GB.
Change-Id: I876bba0536e1d85e41eacd9c0aeccfe5c2126e58
Reviewed-on: http://gerrit.cloudera.org:8080/9337
Reviewed-by: Joe McDonnell <joemcdonnell@cloudera.com>
Tested-by: Impala Public Jenkins
Previously, tuple pointers of a row batch were allocated from
the heap via malloc() and tuple data was allocated from the
MemPool associated with the RowBatch. This change converts
the allocation of tuple pointers and tuple data to use
BufferPool for row batches allocated from KrpcDataStreamRecvr.
The primary motivation for this change is to take advantage of
the fact that buffers allocated from BufferPool always go back
to the per-core arena they came from when they are freed. This
alleviates the TCMalloc imbalance between the RPC service threads
and the fragment execution threads. As described in IMPALA-5518,
row batches are always allocated from the service threads' TCMalloc
cache and placed into the fragment execution threads' TCMalloc cache
when they're freed. This leads to underflow and overflow in those
threads' caches and high contention for the spinlock of the central
free list. With BufferPool, the memory always goes back to its
originating arena, so this kind of imbalance is less likely to occur.
This also dovetails with the long-term plan to put most allocations
under BufferPool and have each operator in the plan reserve an
appropriate amount of memory before execution.
Note that the proper reservation mechanism of the exchange node
hasn't yet been implemented in this change, so the buffer pool client
handle used for allocating buffers has an ad-hoc set-up with no
reservation limit and the root reservation tracker as its parent. This
needs to be fixed as part of IMPALA-6524. The default buffer pool limit
is also bumped to 85% to account for the extra usage from the exchange
nodes. The minimum buffer size is also lowered to 8KB to reduce memory
wastage, as a row batch's tuple pointers / tuple data can sometimes be
much smaller than 64KB.
Testing done: Debug core build.
Change-Id: If4b1a45f68b9df0d3b539511e15aff15700246f2
Reviewed-on: http://gerrit.cloudera.org:8080/9344
Reviewed-by: Michael Ho <kwho@cloudera.com>
Tested-by: Impala Public Jenkins
One of the spilling tests was failing because its minimum buffer pool
memory requirement was higher when run on the local FS than on HDFS.
The fix is to increase the buffer pool limit to a value just above
the minimum limit so that it still forces spilling to disk on both
filesystems.
Testing:
Ran core tests with local FS as target file system. Made sure the
failing test passed.
Change-Id: I50648d7936007a26891cf64d6343c47d9d646596
Reviewed-on: http://gerrit.cloudera.org:8080/9354
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
Tuple pointers in the generated row batches may not be initialized
if a tuple has byte size 0. There is some code that compares these
uninitialized pointers against nullptr, so leaving them uninitialized
may return wrong (and non-deterministic) results, e.g.:
impala::TupleIsNullPredicate::GetBooleanVal()
The following query produces non-deterministic results currently:
SELECT count(v.x) FROM functional.alltypestiny t3 LEFT OUTER JOIN (
SELECT true AS x FROM functional.alltypestiny t1 LEFT OUTER JOIN
functional.alltypestiny t2 ON (true)) v ON (v.x = t3.bool_col)
WHERE t3.bool_col = true;
The alltypestiny table has 8 records, 4 of which have the value true
for bool_col. Therefore, the above query is a fancy way of computing
8 * 8 * 4, i.e. it should return 256.
The solution is that scanners initialize tuple pointers to a non-null
value when there are no materialized slots. This non-null value is
provided by the new static member Tuple::POISON.
I extended QueryTest/scanners.test with the above query. This test
executes the query against all table formats.
This change has the biggest performance impact on count(*) queries on
large Kudu tables. For my quick benchmark I copied tpch_kudu.lineitem
and doubled its data. The resulting table has 12,002,430 rows.
Without this patch 'select count(*) from biglineitem' runs for ~0.12s.
With the patch applied, the overhead is around a dozen ms. I measured
the query on my desktop PC using a release build of Impala. On debug
builds, the execution time of the patched version is around 160% of
the original version.
Without this patch:
+--------------+--------+----------+----------+--------+------------+-----------+---------------+---------------------+
| Operator     | #Hosts | Avg Time | Max Time | #Rows  | Est. #Rows | Peak Mem  | Est. Peak Mem | Detail              |
+--------------+--------+----------+----------+--------+------------+-----------+---------------+---------------------+
| 03:AGGREGATE | 1      | 127.50us | 127.50us | 1      | 1          | 28.00 KB  | 10.00 MB      | FINALIZE            |
| 02:EXCHANGE  | 1      | 22.32ms  | 22.32ms  | 3      | 1          | 0 B       | 0 B           | UNPARTITIONED       |
| 01:AGGREGATE | 3      | 1.78ms   | 1.89ms   | 3      | 1          | 16.00 KB  | 10.00 MB      |                     |
| 00:SCAN KUDU | 3      | 8.00ms   | 8.28ms   | 12.00M | -1         | 512.00 KB | 0 B           | default.biglineitem |
+--------------+--------+----------+----------+--------+------------+-----------+---------------+---------------------+
With this patch:
+--------------+--------+----------+----------+--------+------------+-----------+---------------+---------------------+
| Operator     | #Hosts | Avg Time | Max Time | #Rows  | Est. #Rows | Peak Mem  | Est. Peak Mem | Detail              |
+--------------+--------+----------+----------+--------+------------+-----------+---------------+---------------------+
| 03:AGGREGATE | 1      | 129.01us | 129.01us | 1      | 1          | 28.00 KB  | 10.00 MB      | FINALIZE            |
| 02:EXCHANGE  | 1      | 33.00ms  | 33.00ms  | 3      | 1          | 0 B       | 0 B           | UNPARTITIONED       |
| 01:AGGREGATE | 3      | 1.99ms   | 2.13ms   | 3      | 1          | 16.00 KB  | 10.00 MB      |                     |
| 00:SCAN KUDU | 3      | 13.13ms  | 13.97ms  | 12.00M | -1         | 512.00 KB | 0 B           | default.biglineitem |
+--------------+--------+----------+----------+--------+------------+-----------+---------------+---------------------+
Change-Id: I298122aaaa7e62eb5971508e0698e189519755de
Reviewed-on: http://gerrit.cloudera.org:8080/9239
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
The sender timed-out error message diverges slightly between Thrift
and KRPC due to the source address not being readily available in the
Thrift RPC implementation. This leads to failure in
test_exchange_delays when KRPC is enabled.
when KRPC is enabled.
This change fixes the problem by shortening the error message string
to match against.
Testing done: Tested with KRPC enabled in the code and verified the tests passed.
Change-Id: Idd9410381dbb931231c92f084917265e5067b4c9
Reviewed-on: http://gerrit.cloudera.org:8080/9331
Reviewed-by: Sailesh Mukil <sailesh@cloudera.com>
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
There were a few places where we accidentally used fully-qualified
table references. As a result, the testTpchViews() test did not
exactly cover what was intended.
Change-Id: I886c451ab61a1739af96eeb765821dfd8e951b07
Reviewed-on: http://gerrit.cloudera.org:8080/9270
Reviewed-by: Taras Bobrovytsky <tbobrovytsky@cloudera.com>
Tested-by: Impala Public Jenkins
In EXPLAIN_LEVEL=2+, change the explain format for parquet predicate
statistics to output each tuple descriptor per line. This change is to
make it consistent with the output of other predicates.
Before:
parquet statistics predicates: c_custkey < 10, o_orderkey < 5, l_linenumber < 3
After:
parquet statistics predicates: c_custkey < 10
parquet statistics predicates on o: o_orderkey < 5
parquet statistics predicates on o_lineitems: l_linenumber < 3
Testing:
- Ran existing planner tests and updated the ones that are affected by
this change.
- Ran end-to-end tests in query_test
Change-Id: Ia3d55ab6a1ae551867a9f68b3622844102cc854e
Reviewed-on: http://gerrit.cloudera.org:8080/9223
Tested-by: Impala Public Jenkins
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
This patch adds changes to the planner to account for memory used by
bloom filters at the fragment instance level. Also adds changes to
allocate memory for those bloom filters from the buffer pool.
Testing:
- Modified planner tests and end-to-end tests to account for memory
reservation for the runtime filters.
- Modified backend tests and benchmarks to use the buffer pool for
bloom filter allocation.
- Added an end-to-end test.
- Ran the rest of the core tests.
Change-Id: Iea2759665fb2e8bef9433014a8d42a7ebf99ce1f
Reviewed-on: http://gerrit.cloudera.org:8080/8971
Reviewed-by: Bikramjeet Vig <bikramjeet.vig@cloudera.com>
Tested-by: Impala Public Jenkins
The encoding was added in an early version of the Parquet
spec and deprecated even in the Parquet 1.0 spec.
Parquet-MR switched to generating RLE at the same time as
the spec changed in mid-2013. Impala always wrote RLE:
see commit 6e293090e6.
The Impala implementation of BIT_PACKED was never correct
because it implemented little endian bit unpacking instead of
the big endian unpacking required by the spec for levels.
Testing:
Updated tests to reflect expected behaviour for supported
and unsupported def level encodings.
Cherry-picks: not for 2.x.
Change-Id: I12c75b7f162dd7de8e26cf31be142b692e3624ae
Reviewed-on: http://gerrit.cloudera.org:8080/9241
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
Previously, an analytic function that used the same expr in both the
'partition by' and 'order by' clauses, where the expr met the
criteria for being materialized before the sort, would hit an
IllegalStateException due to the expr being inserted into the same
ExprSubstitutionMap twice.
If the values have already been partitioned on the expr, then all of
the values for it in each partition will be the same and also ordering
on the expr doesn't change the results. So, the fix is to simply
exclude the duplicate expr from the 'order by' while still
partitioning on it.
Testing:
- Added a regression test to PlannerTest.
Change-Id: Id5f1d5fbc6f69df5850f96afed345ce27668c30b
Reviewed-on: http://gerrit.cloudera.org:8080/9218
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
Introduces a new TBLPROPERTY for controlling stats
extrapolation on a per-table basis:
impala.enable.stats.extrapolation=true/false
The property key was chosen to be consistent with
the impalad startup flag --enable_stats_extrapolation
and to indicate that the property was set and is used
by Impala.
Behavior:
- If the property is not set, then the extrapolation
behavior is determined by the impalad startup flag.
- If the property is set, it overrides the impalad
startup flag, i.e., extrapolation can be explicitly
enabled or disabled regardless of the startup flag.
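For example (table name illustrative):
-- Explicitly enable extrapolation for one table, whatever the flag:
alter table t set tblproperties(
  'impala.enable.stats.extrapolation'='true');
-- Or explicitly disable it:
alter table t set tblproperties(
  'impala.enable.stats.extrapolation'='false');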
Testing:
- added new unit tests
- core/hdfs run passed
Change-Id: Ie49597bf1b93b7572106abc620d91f199cba0cfd
Reviewed-on: http://gerrit.cloudera.org:8080/9139
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
This change enables clustering by default. IMPALA-2521 introduced the
'clustered' hint which inserts a local sort by the partitioning columns
to a query plan. The hint is only effective for HDFS and Kudu tables.
Like before, the 'noclustered' hint prevents clustering. If a table has
ordering columns defined, the 'noclustered' hint is ignored and we
issue a warning.
This change removes some tests that were added specifically to test
that clustering can be enabled using the 'clustered' hint. It changes
some tests to use the 'noclustered' hint to make sure that clustering
can be disabled. It also adds tests to make sure that we cover the
'noclustered' case properly.
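A sketch of opting out per statement now that clustering is the
default (the target table is illustrative):
insert into functional.alltypes_copy partition (year, month)
/*+ noclustered */
select * from functional.alltypes;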
Cherry-picks: not for 2.x.
Change-Id: Idbf2368cf4415e6ecfa65058daf6ff87ef62f9d9
Reviewed-on: http://gerrit.cloudera.org:8080/9153
Reviewed-by: Lars Volker <lv@cloudera.com>
Tested-by: Impala Public Jenkins