Impala already supported RLE encoding for levels and dictionary pages, so
the only task was to integrate it into BoolColumnReader.
A new benchmark, rle-benchmark.cc, was added to test the speed of RLE
decoding for different bit widths and run lengths.
There might be a small performance impact on PLAIN-encoded booleans
because of the additional branch taken when the cache of BoolColumnReader
is filled. As the cache size is 128, I considered this branch to be
outside the "hot loop".
Testing:
As Impala cannot write RLE-encoded bool columns at the moment, parquet-mr
was used to create a test file, testdata/data/rle_encoded_bool.parquet.
tests/query_test/test_scanners.py#test_rle_encoded_bools creates a table
that uses this file and queries it.
Change-Id: I4644bf8cf5d2b7238b05076407fbf78ab5d2c14f
Reviewed-on: http://gerrit.cloudera.org:8080/9403
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
In wait-hdfs-replication, frequent, eager restarts can slow HDFS
replication down. HDFS should be restarted only if no progress has
been made in a certain amount of time, and we should wait longer
before failing the data loading.
Testing: It's tested with a fake HDFS fsck script.
Change-Id: Ib059480254643dc032731b4b3c55204a93b61e77
Reviewed-on: http://gerrit.cloudera.org:8080/9698
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
Currently, when using the SET_LOOKUP strategy for in-predicates,
Impala uses a std::set object to check membership. This patch takes a
hybrid approach based on benchmarking results and uses boost::flat_set
for the int, big int, and float data types and boost::unordered_set for
the rest (tiny int, small int, double, string, timestamp, decimal).
The intent of this change is to fix a regression when upgrading the
toolchain to use LLVM 5.0.1 (IMPALA-5980).
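To illustrate the hybrid approach, the per-type choice could be expressed
with a traits template roughly like the sketch below; the names are
assumptions, not the actual in-predicate code:

#include <cstdint>
#include <boost/container/flat_set.hpp>
#include <boost/unordered_set.hpp>

// Sketch: default to a hash set; use a sorted-vector set for the types
// where it benchmarked faster.
template <typename T>
struct SetLookupTraits {
  using SetType = boost::unordered_set<T>;
};
template <>
struct SetLookupTraits<int32_t> {
  using SetType = boost::container::flat_set<int32_t>;
};
template <>
struct SetLookupTraits<int64_t> {
  using SetType = boost::container::flat_set<int64_t>;
};
template <>
struct SetLookupTraits<float> {
  using SetType = boost::container::flat_set<float>;
};

// Membership check backing "col IN (v1, v2, ...)".
template <typename T>
bool InSetLookup(const typename SetLookupTraits<T>::SetType& s, const T& v) {
  return s.count(v) > 0;
}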
Performance:
Ran a query for each data type with a large in predicate containing
500 elements on a single node with mt_dop set to 1.
+-----------+---------------+----------+---------------+----------+
| Data Type | LLVM 3 hybrid | LLVM 3   | LLVM 5 hybrid | LLVM 5   |
+-----------+---------------+----------+---------------+----------+
| Table used: tpch100_parquet.lineitem                            |
+-----------+---------------+----------+---------------+----------+
| big int   | 17s782ms      | 13s941ms | 13s201ms      | 25s604ms |
| string    | 40s750ms      | 64s      | 40s723ms      | 73s      |
| decimal   | 13s929ms      | 22s272ms | 13s710ms      | 34s338ms |
| int       | 19s368ms      | 11s308ms | 9s169ms       | 15s254ms |
+-----------+---------------+----------+---------------+----------+
| Table used: alltypes with 33638400 rows                         |
+-----------+---------------+----------+---------------+----------+
| double    | 5s726ms       | 5s894ms  | 5s595ms       | 6s592ms  |
| small int | 4s776ms       | 5s057ms  | 4s740ms       | 5s358ms  |
| float     | 7s223ms       | 6s397ms  | 6s287ms       | 6s926ms  |
+-----------+---------------+----------+---------------+----------+
Also added a targeted perf query that uses a large in-predicate
over a decimal column.
Testing:
- Ran expr-test and test_exprs successfully.
Change-Id: Ifd1627d779d10a16468cc3c2d0bc26a497e048df
Reviewed-on: http://gerrit.cloudera.org:8080/9570
Reviewed-by: Bikramjeet Vig <bikramjeet.vig@cloudera.com>
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Impala Public Jenkins
We use Java 1.7 in fe/pom.xml, where most of our Java code is. For
consistency, this updates the rest of our Maven configurations to use
the same version of Java. A change I'm working on uses
try-with-resources in HBase splitting, which is how I ran into
this.
Testing: ran core tests
Change-Id: I6cecddf367f00185a14a8b08c03456e3b756bd70
Reviewed-on: http://gerrit.cloudera.org:8080/9600
Reviewed-by: Philip Zeyliger <philip@cloudera.com>
Tested-by: Impala Public Jenkins
The DCHECK was only valid when the Parquet file metadata was internally
consistent, with the number of values reported by the metadata
matching the number of encoded levels.
The DCHECK was intended to directly detect misuse of the RleBatchDecoder
interface, which would lead to incorrect results. However, our other
test coverage for reading Parquet files is sufficient to test the
correctness of level decoding.
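To illustrate the failure mode, a sketch with assumed names (not the real
RleBatchDecoder interface):

#include <cstdint>

// Sketch: if the metadata's value count exceeds the number of levels
// actually encoded, the decoder runs dry. Asserting on the short read
// conflates "corrupt file" with "API misuse"; returning the short count
// lets the caller raise a parse error instead of crashing a debug build.
struct LevelDecoderSketch {
  int32_t levels_remaining;  // how many levels the encoded data can supply

  int32_t GetLevels(int32_t num_requested, uint8_t* out) {
    int32_t n =
        num_requested < levels_remaining ? num_requested : levels_remaining;
    for (int32_t i = 0; i < n; ++i) out[i] = 1;  // stand-in for decoding
    levels_remaining -= n;
    // Removed: DCHECK_EQ(n, num_requested). A corrupt file could make it
    // fire even when the decoder itself was used correctly.
    return n;
  }
};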
Testing:
Added a minimal corrupt test file that reproduces the issue.
Change-Id: Idd6e09f8c8cca8991be5b5b379f6420adaa97daa
Reviewed-on: http://gerrit.cloudera.org:8080/9556
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
The bug was that the SortInfo of analytics was given
ordering exprs that were not fully resolved against their
input (e.g. inline views were not resolved).
As a result, the SortInfo logic did not materialize exprs
like rand() coming from inline views.
The fix is to pass fully resolved exprs to the analytic
SortInfo, and then the existing materialization logic
properly handles non-deterministic built-ins and UDFs.
The code around sort generation was rather convoluted
and difficult to understand. I overhauled SortInfo to
unify its different uses under a common codepath.
After that cleanup, the fix for this issue was trivial.
Testing:
- Locally ran planner tests
- Locally ran analytic EE tests in test_queries.py
- Core/hdfs run passed
Change-Id: Id2b3f4e5e3f1fd441a63160db3c703c432fbb072
Reviewed-on: http://gerrit.cloudera.org:8080/9631
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
The retries in split-hbase.sh don't work in the common case,
because $MINIKDC_PRINC_HIVE is not set in non-kerberized (common)
environments. The regular data load scripts (create-load-data.sh)
have code to manage that, but split-hbase.sh blindly forges ahead,
leading to errors like:
/home/impdev/Impala/testdata/bin/split-hbase.sh: line 49: MINIKDC_PRINC_HIVE: unbound variable
Error in /home/impdev/Impala/testdata/bin/create-load-data.sh at line 48: LOAD_DATA_ARGS=""
Since this hasn't been working, I opted to remove it entirely, as a failure on
the line where HBase splitting actually failed would be significantly more
useful than the error here. A search of mailing lists suggested that I was at
least the second person to have run into this. (In my case, I did break HBase
splitting, but it took me a second to identify the error, since the log was
spammed with unrelated information relating to the cluster restart.)
Testing: core tests.
Change-Id: I715891c9e744f21002330c3ae3ebc14095d94ffd
Reviewed-on: http://gerrit.cloudera.org:8080/9588
Reviewed-by: Philip Zeyliger <philip@cloudera.com>
Tested-by: Impala Public Jenkins
Before Kudu supported DECIMAL columns, the TPCDS and TPCH
columns were adjusted to use DOUBLE in place of DECIMAL. This
patch undoes that change now that Kudu supports DECIMAL.
Testing:
- Updated concurrent_select.py
- Updated test_tpch_queries.py
- Exercised by the Kudu planner tests
Change-Id: I2f7e4464dc6705cadd610a82c459390a9c0dfe4f
Reviewed-on: http://gerrit.cloudera.org:8080/9484
Reviewed-by: Thomas Tauber-Marshall <tmarshall@cloudera.com>
Tested-by: Impala Public Jenkins
This change allows casting of a string in 'lazy' date/time
format to timestamp. The supported lazy date formats are:
yyyy-[M]M-[d]d
yyyy-[M]M-[d]d [H]H:[m]m:[s]s[.SSSSSSSSS]
[H]H:[m]m:[s]s[.SSSSSSSSS]
We will incur a SCAN performance penalty (approximately 1/2
TotalReadThroughput) when the string is in one of these
lazy date/time formats.
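To illustrate, accepting one- or two-digit fields requires variable-width
scanning, roughly as in this sketch (the helper names are made up; this is
not the actual parser):

#include <cctype>

// Sketch: parse a "lazy" numeric field of 1..max_digits digits, optionally
// followed by an expected separator character.
static bool ParseLazyField(const char*& s, const char* end, int max_digits,
                           char sep, int* out) {
  int val = 0, digits = 0;
  while (s < end && digits < max_digits &&
         isdigit(static_cast<unsigned char>(*s))) {
    val = val * 10 + (*s++ - '0');
    ++digits;
  }
  if (digits == 0) return false;  // at least one digit is required
  if (sep != '\0') {
    if (s >= end || *s != sep) return false;
    ++s;  // consume the separator
  }
  *out = val;
  return true;
}

// Example: "yyyy-[M]M-[d]d" accepts both "2018-3-5" and "2018-03-05".
bool ParseLazyDate(const char* s, const char* end, int* y, int* m, int* d) {
  return ParseLazyField(s, end, 4, '-', y) &&
         ParseLazyField(s, end, 2, '-', m) &&
         ParseLazyField(s, end, 2, '\0', d) && s == end;
}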
Testing:
Benchmarked the performance consequence by executing this SQL on
a private build over 3.8 billion rows:
select min(cast (time_string as timestamp)) from private.impala_5315
Added tests for valid and invalid date/time format strings
in expr-test.cc to be inline with existing tests for CAST() function.
Added end-to-end tests into exprs.test and
select-lazy-timestamp.test to exercise the new function within
the context of a query.
Added tests to exercise the leading and trailing white space trimming
behaviour in default and lazy date/time string format (IMPALA-6630).
Change-Id: Ib9a184a09d7e7783f04d47588537612c2ecec28f
Reviewed-on: http://gerrit.cloudera.org:8080/7009
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
This patch enables pushing scan predicates on
DECIMAL columns down to Kudu.
Testing:
- Added Planner decimal predicate test to kudu.test
- Added Planner decimal in-list test to kudu-selectivity.test
Change-Id: I2569a9e1d58f1c58884d58633d46348364888ed7
Reviewed-on: http://gerrit.cloudera.org:8080/9578
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
We've observed empirically that giving Impala 80% of system memory
doesn't leave enough room for the minicluster and ASAN overhead, leading
to the OOM killer striking during test runs (sometimes). This commit
reduces the threshold to 70%.
This commit also reduces the memory usage of semi-joins-exhaustive.test
by roughly halving the number of records it deals with. This was
necessary for tests to pass on a machine with 32GB of RAM.
Testing: I've run the ASAN build (more) happily with this change.
I've run exhaustive tests on a 32GB machine.
Change-Id: Iabca7a95560bd27c2de2b0a147ee9a3c45199db7
Reviewed-on: http://gerrit.cloudera.org:8080/9395
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
HDFS sometimes fails to fully replicate all the blocks in 30 seconds
and no progress is made. This patch tries to restart HDFS several times
before aborting the data loading.
Change-Id: Iefd4c2fc6c287f054e385de52bdc42b0bdbd7915
Reviewed-on: http://gerrit.cloudera.org:8080/9469
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
Quick fix of Parquet write path until the Parquet community
agrees on the ordering of floating point numbers.
The behavior follows the way fmax()/fmin() work, i.e. Impala
will only write NaN into the stats when all the values are NaN.
This behavior is aligned with the quick fix in Parquet-CPP.
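To illustrate the fmin()/fmax() behavior (a sketch, not the actual
column-stats writer):

#include <cmath>

// Sketch: fmin()/fmax() return the non-NaN operand when exactly one
// operand is NaN, so the running min/max stay NaN only if every value
// seen so far was NaN.
struct FloatStatsSketch {
  double min_val = NAN;
  double max_val = NAN;

  void Update(double v) {
    min_val = std::fmin(min_val, v);
    max_val = std::fmax(max_val, v);
  }
};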
Added e2e tests as well.
Change-Id: I3957806948f7c661af4be5495f2ec92d1e9fc9d6
Reviewed-on: http://gerrit.cloudera.org:8080/9381
Reviewed-by: Lars Volker <lv@cloudera.com>
Tested-by: Impala Public Jenkins
IMPALA-6592 revealed a gap in test coverage for files with
invalid/unsupported Parquet codecs. This adds a test that reproduces the
bug that was present in my IMPALA-4835 patch. master is unaffected by
this bug.
I also hid the conversion tables and made the conversion go through
functions that validate the enum values, to make it easier to track down
problems like this in the future.
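To illustrate, the validated conversion could look roughly like this; the
enum values and names are assumptions, not Impala's actual codec mapping:

#include <cstdint>

// Sketch: hide the conversion table behind a function that validates the
// on-disk enum value first, so an invalid/unsupported codec becomes an
// error instead of an out-of-bounds table read.
enum class InternalCodec { NONE, SNAPPY, GZIP, INVALID };

InternalCodec FromParquetCodec(int32_t raw_value) {
  static const InternalCodec kTable[] = {
      InternalCodec::NONE, InternalCodec::SNAPPY, InternalCodec::GZIP};
  constexpr int32_t kNumCodecs = sizeof(kTable) / sizeof(kTable[0]);
  if (raw_value < 0 || raw_value >= kNumCodecs) {
    return InternalCodec::INVALID;  // caller reports "unsupported codec"
  }
  return kTable[raw_value];
}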
Testing:
Ran exhaustive tests.
Change-Id: I1502ea7b7f39aa09f0ed2677e84219b37c64c416
Reviewed-on: http://gerrit.cloudera.org:8080/9500
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
The bug is that the right child of a blocking join node could be closed
before the builder if an error was encountered when sending a batch to
the sink. This hits a DCHECK because buffers owned by the sink may still
be accounted against the child node.
Testing:
Added the test that originally triggered the problem. It reproduced the
failure when based on the IMPALA-4835 patch, but I can't reproduce
the failure after rebasing onto master.
Change-Id: Ie46b87a4889d7cee907124796c830db41125cf15
Reviewed-on: http://gerrit.cloudera.org:8080/9493
Tested-by: Impala Public Jenkins
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Before this patch, when an error occurred while converting a string to
a decimal, NULL was returned. In this patch, we change this behavior
so that an error is returned if decimal_v2 is enabled. We also add a
warning if there is an underflow.
The reasoning is that we want stricter behavior in decimal_v2.
Testing:
- Added some EE tests.
- Ran an exhaustive build, which passed.
Change-Id: Icffccac1c1c2361447ae4b0de9b6c2ec7de071db
Reviewed-on: http://gerrit.cloudera.org:8080/9339
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Impala Public Jenkins
Revert "IMPALA-6585: increase test_low_mem_limit_q21 limit"
This reverts commit 25bcb258df.
Revert "IMPALA-6588: don't add empty list of ranges in text scan"
This reverts commit d57fbec6f6.
Revert "IMPALA-4835: Part 3: switch I/O buffers to buffer pool"
This reverts commit 24b4ed0b29.
Revert "IMPALA-4835: Part 2: Allocate scan range buffers upfront"
This reverts commit 5699b59d0c.
Revert "IMPALA-4835: Part 1: simplify I/O mgr mem mgmt and cancellation"
This reverts commit 65680dc421.
Change-Id: Ie5ca451cd96602886b0a8ecaa846957df0269cbb
Reviewed-on: http://gerrit.cloudera.org:8080/9480
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Impala Public Jenkins
When loading from an up-to-date snapshot, dataload will
load all of the metadata and load data into HDFS. Then,
it will skip load-data.py for functional/exhaustive,
tpch/core, and tpcds/core. It will invoke a special
round of load-data.py calls to populate Kudu tables,
and it always runs these with a force reload.
However, when loading from an old snapshot, dataload will
still load all of the metadata and load the data into
HDFS, but then it will also invoke load-data.py for
functional/exhaustive, tpch/core, and tpcds/core.
These invocations mostly do DDLs with very few load
statements. However, these invocations are a problem
for Kudu. The metadata of Impala tables referencing
Kudu entities has been imported along with all the other
metadata, but the Kudu entities have not been created, as
they are separate from HDFS. This means that Kudu tables
are not really valid in this circumstance.
Since Kudu has been added to the list of data formats
for tpch/core (see IMPALA-6475), load-data.py with
tpch/core will attempt to insert into these invalid
Kudu tables.
To avoid this, always force reload any Kudu tables.
generate-schema-statements.py will always generate a
drop table statement before any create of a Kudu table.
This guarantees that the create will also create the
corresponding Kudu entity.
Change-Id: I2d07f3513c543e2590f2f62b96b37472316868ee
Reviewed-on: http://gerrit.cloudera.org:8080/9445
Reviewed-by: Joe McDonnell <joemcdonnell@cloudera.com>
Tested-by: Impala Public Jenkins
IMPALA-5752 added support for Kudu decimal. As a part
of that, it added Kudu versions of decimal_tbl and
decimal_tiny. Kudu tables are created and loaded even
on local tests, so these tables are loaded when they
previously weren't. The LOAD sections for these tables
rely on executing HDFS commands to copy data to
appropriate locations. These HDFS commands cannot work
on local tests, causing this failure.
Untangling when to execute LOAD sections is complicated,
so this simply switches the decimal_tbl and decimal_tiny
to do LOAD DATA LOCAL calls, which do not rely on HDFS
commands.
Change-Id: I1f717917269d116c07a6f17944583f5e8faf2932
Reviewed-on: http://gerrit.cloudera.org:8080/9438
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
This is the final patch to switch the Disk I/O manager to allocate all
buffers from the buffer pool and to reserve the buffers required for
a query upfront.
* The planner reserves enough memory to run a single scanner per
scan node.
* The multi-threaded scan node must increase reservation before
spinning up more threads.
* The scanner implementations must be careful to stay within their
assigned reservation.
The row-oriented scanners were most straightforward, since they only
have a single scan range active at a time. A single I/O buffer is
sufficient to scan the whole file but more I/O buffers can improve I/O
throughput.
Parquet is more complex because it issues a scan range per column and
the sizes of the columns on disk are not known during planning. To
deal with this, the reservation in the frontend is based on a
heuristic involving the file size and # columns. The Parquet scanner
can then divvy up reservation to columns based on the size of column
data on disk.
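To illustrate, the divvying could proceed roughly as below; the clamping
bounds and names are assumptions, and the real algorithm is more involved:

#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch: split the scan's reservation across columns in proportion to
// their on-disk size, clamped to a per-column min/max buffer size.
std::vector<int64_t> DivvyReservation(int64_t total_reservation,
                                      const std::vector<int64_t>& col_bytes,
                                      int64_t min_buffer, int64_t max_buffer) {
  int64_t total_bytes = 0;
  for (int64_t b : col_bytes) total_bytes += b;
  std::vector<int64_t> shares;
  shares.reserve(col_bytes.size());
  for (int64_t b : col_bytes) {
    int64_t share =
        total_bytes == 0 ? min_buffer : total_reservation * b / total_bytes;
    shares.push_back(std::min(max_buffer, std::max(min_buffer, share)));
  }
  return shares;
}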
I adjusted how the 'mem_limit' is divided between buffer pool and
non-buffer-pool memory for low mem_limits to account for the increase
in buffer pool memory.
Testing:
* Added more planner tests to cover reservation calcs for scan node.
* Test scanners for all file formats with the reservation denial debug
action, to test behaviour when the scanners hit reservation limits.
* Updated memory and buffer pool limits for tests.
* Added unit tests for dividing reservation between columns in parquet,
since the algorithm is non-trivial.
Perf:
I ran TPC-H and targeted perf locally comparing with master. Both
showed small improvements of a few percent and no regressions of
note. Cluster perf tests showed no significant change.
Change-Id: Ic09c6196b31e55b301df45cc56d0b72cfece6786
Reviewed-on: http://gerrit.cloudera.org:8080/8966
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
Impala crashes when creating a UDF from a shared library (.so file) that
was renamed to have a .ll extension. The CreateFile() call in GetSymbols()
fails, returns an error, and does not close the codegen object. This
patch closes the codegen object on failure, which avoids hitting a DCHECK
later up the stack.
The chain of failures also invokes the DiagnosticHandlerFn. The
RuntimeState object is NULL when the DiagnosticHandlerFn gets called in
this case, so this change also adds a check before accessing it for
logging.
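The shape of the fix, as a sketch with stand-in types (not Impala's
actual Status/LlvmCodeGen classes):

#include <memory>
#include <string>

struct Status {
  static Status OK() { return {""}; }
  bool ok() const { return msg.empty(); }
  std::string msg;
};

struct LlvmCodeGen {
  Status LoadModuleFromFile(const std::string& file) {
    return {"Could not load binary: " + file};  // simulate the load failing
  }
  void Close() { /* release the LLVM module, trackers, etc. */ }
};

Status GetSymbols(const std::string& file) {
  auto codegen = std::make_unique<LlvmCodeGen>();
  Status status = codegen->LoadModuleFromFile(file);
  if (!status.ok()) {
    codegen->Close();  // the missing cleanup this patch adds
    return status;
  }
  // ... extract symbols, then close as before ...
  codegen->Close();
  return Status::OK();
}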
[localhost:21000] > create function foo4 (string, string) returns string
location '/tmp/bad_udf.ll' symbol='MyAwesomeUdf';
Query: create function foo4 (string, string) returns string location
'/tmp/bad_udf.ll' symbol='MyAwesomeUdf'
ERROR: AnalysisException: Could not load binary: /tmp/bad_udf.ll
LLVM diagnostic error: Invalid bitcode signature
Change-Id: Id060668802ca9c80367cdc0e8a823b968d549bbb
Reviewed-on: http://gerrit.cloudera.org:8080/9154
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
Adds support for the Kudu DECIMAL type introduced in Kudu 1.7.0.
Note: Adding support for Kudu decimal min/max filters is
tracked in IMPALA-6533.
Tests:
* Added Kudu create with decimal test to AnalyzeDDLTest.java
* Added Kudu table_format to test_decimal_queries.py
** Both decimal.test and decimal-exprs.test workloads
* Added decimal queries to the following Kudu workloads:
** kudu_create.test
** kudu_delete.test
** kudu_insert.test
** kudu_update.test
** kudu_upsert.test
Change-Id: I3a9fe5acadc53ec198585d765a8cfb0abe56e199
Reviewed-on: http://gerrit.cloudera.org:8080/9368
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Tested-by: Impala Public Jenkins
This change adds support for the "clustered", "noclustered", "shuffle" and
"noshuffle" hints in CTAS statements.
Example:
create /*+ clustered,noshuffle */ table t partitioned by (year, month)
as select * from functional.alltypes
The effect of these hints is the same as in insert statements:
clustered:
Sort locally by partition expression before insert to ensure that only
one partition is written at a time. The goal is to reduce the number of
files kept open / buffers kept in memory simultaneously.
noclustered:
Do not sort by primary key before inserting into a Kudu table. No effect
on HDFS tables currently, as this is their default behavior.
shuffle:
Forces the planner to add an exchange node that repartitions by the
partition expression of the output table. This means that a partition
will be written only by a single node, which minimizes the global
number of simultaneous writes.
If only one partition is written (because all partitioning columns
are constant or the target table is not partitioned), then the shuffle
hint leads to a plan where all rows are merged at the coordinator where
the table sink is executed.
noshuffle:
Do not add an exchange node before inserting into partitioned tables.
The parser needed some modifications to be able to differentiate between
CREATE statements that allow hints (CTAS) and CREATE statements that
do not (every other type of CREATE statement). As a result, KW_CREATE
was moved from tbl_def_without_col_defs to the statement rules.
Testing:
The parser tests mirror the tests of INSERT, while analysis and planner
tests are minimal, as the underlying logic is the same as for INSERT.
Query tests are not created, as the hints have no effect on
the DDL part of CTAS, and the actual query that runs is the same as in
the insert case.
Change-Id: I8d74bca999da8ae1bb89427c70841f33e3c56ab0
Reviewed-on: http://gerrit.cloudera.org:8080/8400
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
If the first number in a row group written by Impala is NaN,
then Impala writes incorrect statistics in the metadata.
This leads to incorrect results when filtering the
data.
This commit fixes the read path when encountering NaNs in
Parquet min/max statistics. If min and max are both NaN, we
can't use the statistics at all. If only one of them is NaN,
the other can still be used.
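To illustrate the rule (a sketch, not the scanner's actual code):

#include <cmath>

// Sketch: stats are unusable only when both bounds are NaN.
bool CanUseMinMaxStats(double min_val, double max_val) {
  return !(std::isnan(min_val) && std::isnan(max_val));
}

// Example: can a row group contain rows matching "col < threshold"?
bool RowGroupMayMatchLessThan(double min_val, double threshold) {
  // A comparison against a NaN min is false, so conservatively keep the
  // row group in that case.
  return std::isnan(min_val) || min_val < threshold;
}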
I added some tests to QueryTest/parquet-stats.test.
Change-Id: If3897fc1426541239223670812f59e2bed32f455
Reviewed-on: http://gerrit.cloudera.org:8080/9358
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
The default value for rpc_max_message_size is 50MB.
Impala currently requires support for messages of
up to 2GB. This changes the value of rpc_max_message_size
to INT_MAX for Impala.
Testing:
- Added a test to test_very_large_strings that generates
a row with multiple large strings. This row requires
that the RPC framework successfully transmit over
400MB. This works for both KRPC and Thrift.
This query operates under the same amount of memory
as other queries in large_strings.test.
- Tested separately that larger row sizes also work,
including tests up to almost 2GB.
Change-Id: I876bba0536e1d85e41eacd9c0aeccfe5c2126e58
Reviewed-on: http://gerrit.cloudera.org:8080/9337
Reviewed-by: Joe McDonnell <joemcdonnell@cloudera.com>
Tested-by: Impala Public Jenkins
Previously, the tuple pointers of a row batch were allocated from
the heap via malloc() and the tuple data was allocated from the
MemPool associated with the RowBatch. This change converts
the allocation of tuple pointers and tuple data to using
BufferPool for row batches allocated from KrpcDataStreamRecvr.
The primary motivation for this change is to take advantage of
the fact that buffers allocated from BufferPool always go back
to the per-core arena they came from when they are freed. This
alleviates the TCMalloc imbalance between the RPC service threads
and the fragment execution threads. As described in IMPALA-5518,
row batches are always allocated from the service threads' TCMalloc
cache and placed into the fragment execution threads' TCMalloc cache
when they're freed. This leads to underflow and overflow in those
threads' caches and high contention for the spinlock of the central
free list. With BufferPool, the memory always goes back to its
originating arena, so this kind of imbalance is less likely to occur.
This also dovetails with the long-term plan to put most allocations
under BufferPool and have each operator in the plan reserve an
appropriate amount of memory before execution.
Note that the proper reservation mechanism for the exchange node
hasn't yet been implemented in this change, so the buffer pool client
handle used for allocating buffers has an ad-hoc setup with no
reservation limit and the root reservation tracker as its parent. This
needs to be fixed as part of IMPALA-6524. The default buffer pool limit
is also bumped to 85% to account for the extra usage from the exchange
nodes. The minimum buffer size is also lowered to 8KB to reduce the
amount of memory wasted, as a row batch's tuple pointers / tuple data
can sometimes be much smaller than 64KB.
Testing done: Debug core build.
Change-Id: If4b1a45f68b9df0d3b539511e15aff15700246f2
Reviewed-on: http://gerrit.cloudera.org:8080/9344
Reviewed-by: Michael Ho <kwho@cloudera.com>
Tested-by: Impala Public Jenkins
One of the spilling tests was failing because its minimum buffer pool
memory requirement is higher when run on the local FS than when run
on HDFS.
The fix is to increase the buffer pool limit to a value just above
the minimum so that the test still forces spilling to disk on both
filesystems.
Testing:
Ran core tests with local FS as target file system. Made sure the
failing test passed.
Change-Id: I50648d7936007a26891cf64d6343c47d9d646596
Reviewed-on: http://gerrit.cloudera.org:8080/9354
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
Tuple pointers in the generated row batches may not be initialized
if a tuple has byte size 0. There is some code that compares these
uninitialized pointers against nullptr, so leaving them uninitialized
may return wrong (and non-deterministic) results, e.g.:
impala::TupleIsNullPredicate::GetBooleanVal()
The following query produces non-deterministic results currently:
SELECT count(v.x) FROM functional.alltypestiny t3 LEFT OUTER JOIN (
SELECT true AS x FROM functional.alltypestiny t1 LEFT OUTER JOIN
functional.alltypestiny t2 ON (true)) v ON (v.x = t3.bool_col)
WHERE t3.bool_col = true;
The alltypestiny table has 8 records, 4 of which have bool_col set to
true. Therefore, the above query is a fancy way of saying "8 * 8 * 4",
i.e. it should return 256.
The solution is that scanners initialize tuple pointers to a non-null
value when there are no materialized slots. This non-null value is
provided by the new static member Tuple::POISON.
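The idea, as a sketch (the sentinel value here is an assumption; the real
Tuple::POISON differs in detail):

// Sketch: a well-known non-null sentinel for zero-byte tuples. It must
// never be dereferenced, since such tuples have no materialized slots;
// it only needs to compare non-equal to nullptr deterministically.
struct Tuple {
  static Tuple* const POISON;
};
Tuple* const Tuple::POISON = reinterpret_cast<Tuple*>(0x1);

void InitRowWithNoMaterializedSlots(Tuple** tuple_ptrs, int num_tuples) {
  for (int i = 0; i < num_tuples; ++i) tuple_ptrs[i] = Tuple::POISON;
}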
I extended QueryTest/scanners.test with the above query. This test
executes the query against all table formats.
This change has the biggest performance impact on count(*) queries over
large Kudu tables. For my quick benchmark I copied tpch_kudu.lineitem
and doubled its data. The resulting table has 12,002,430 rows.
Without this patch, 'select count(*) from biglineitem' runs for ~0.12s.
With the patch applied, the overhead is around a dozen ms. I measured
the query on my desktop PC using a release build of Impala. On debug
builds, the execution time of the patched version is around 160% of the
original version.
Without this patch:
+--------------+--------+----------+----------+--------+------------+-----------+---------------+---------------------+
| Operator | #Hosts | Avg Time | Max Time | #Rows | Est. #Rows | Peak Mem | Est. Peak Mem | Detail |
+--------------+--------+----------+----------+--------+------------+-----------+---------------+---------------------+
| 03:AGGREGATE | 1 | 127.50us | 127.50us | 1 | 1 | 28.00 KB | 10.00 MB | FINALIZE |
| 02:EXCHANGE | 1 | 22.32ms | 22.32ms | 3 | 1 | 0 B | 0 B | UNPARTITIONED |
| 01:AGGREGATE | 3 | 1.78ms | 1.89ms | 3 | 1 | 16.00 KB | 10.00 MB | |
| 00:SCAN KUDU | 3 | 8.00ms | 8.28ms | 12.00M | -1 | 512.00 KB | 0 B | default.biglineitem |
+--------------+--------+----------+----------+--------+------------+-----------+---------------+---------------------+
With this patch:
+--------------+--------+----------+----------+--------+------------+-----------+---------------+---------------------+
| Operator | #Hosts | Avg Time | Max Time | #Rows | Est. #Rows | Peak Mem | Est. Peak Mem | Detail |
+--------------+--------+----------+----------+--------+------------+-----------+---------------+---------------------+
| 03:AGGREGATE | 1 | 129.01us | 129.01us | 1 | 1 | 28.00 KB | 10.00 MB | FINALIZE |
| 02:EXCHANGE | 1 | 33.00ms | 33.00ms | 3 | 1 | 0 B | 0 B | UNPARTITIONED |
| 01:AGGREGATE | 3 | 1.99ms | 2.13ms | 3 | 1 | 16.00 KB | 10.00 MB | |
| 00:SCAN KUDU | 3 | 13.13ms | 13.97ms | 12.00M | -1 | 512.00 KB | 0 B | default.biglineitem |
+--------------+--------+----------+----------+--------+------------+-----------+---------------+---------------------+
Change-Id: I298122aaaa7e62eb5971508e0698e189519755de
Reviewed-on: http://gerrit.cloudera.org:8080/9239
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
The sender timed-out error message diverges slightly between Thrift and
KRPC because the source address is not readily available in the Thrift
RPC implementation. This leads to a failure in test_exchange_delays
when KRPC is enabled.
This change fixes the problem by shortening the error message string
to match against.
Testing done: Tested with KRPC enabled in the code and verified the tests passed.
Change-Id: Idd9410381dbb931231c92f084917265e5067b4c9
Reviewed-on: http://gerrit.cloudera.org:8080/9331
Reviewed-by: Sailesh Mukil <sailesh@cloudera.com>
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
There were a few places where we accidentally used fully-qualified
table references. As a result, the testTpchViews() test did not
exactly cover what was intended.
Change-Id: I886c451ab61a1739af96eeb765821dfd8e951b07
Reviewed-on: http://gerrit.cloudera.org:8080/9270
Reviewed-by: Taras Bobrovytsky <tbobrovytsky@cloudera.com>
Tested-by: Impala Public Jenkins
In EXPLAIN_LEVEL=2+, change the explain format for parquet predicate
statistics to output the predicates on each tuple descriptor on a
separate line. This makes the output consistent with that of other
predicates.
Before:
parquet statistics predicates: c_custkey < 10, o_orderkey < 5, l_linenumber < 3
After:
parquet statistics predicates: c_custkey < 10
parquet statistics predicates on o: o_orderkey < 5
parquet statistics predicates on o_lineitems: l_linenumber < 3
Testing:
- Ran existing planner tests and updated the ones that are affected by
this change.
- Ran end-to-end tests in query_test
Change-Id: Ia3d55ab6a1ae551867a9f68b3622844102cc854e
Reviewed-on: http://gerrit.cloudera.org:8080/9223
Tested-by: Impala Public Jenkins
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
This patch changes the planner to account for memory used by
bloom filters at the fragment instance level. It also changes the
backend to allocate memory for those bloom filters from the buffer pool.
Testing:
- Modified Planner Tests and end to end tests to account for memory
reservation for the runtime filters.
- Modified backend tests and benchmarks to use the bufferpool for
bloom filter allocation.
- Added an end-to-end test.
- Ran rest of the core tests.
Change-Id: Iea2759665fb2e8bef9433014a8d42a7ebf99ce1f
Reviewed-on: http://gerrit.cloudera.org:8080/8971
Reviewed-by: Bikramjeet Vig <bikramjeet.vig@cloudera.com>
Tested-by: Impala Public Jenkins
The encoding was added in an early version of the Parquet
spec and deprecated even in the Parquet 1.0 spec.
Parquet-MR switched to generating RLE at the same time as
the spec changed in mid-2013. Impala always wrote RLE:
see commit 6e293090e6.
The Impala implementation of BIT_PACKED was never correct
because it implemented little-endian bit unpacking instead of
the big-endian unpacking required by the spec for levels.
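The two bit orders at issue, for bit_width=1 (this sketch just contrasts
them; it is not the removed code):

#include <cstdint>

// BIT_PACKED levels fill each byte starting from the most significant
// bit (big-endian bit order), while RLE literal runs fill from the least
// significant bit. Using the wrong order scrambles the decoded levels.
uint8_t GetBitBigEndian(const uint8_t* buf, int i) {
  return (buf[i / 8] >> (7 - (i % 8))) & 1;  // what the spec requires
}

uint8_t GetBitLittleEndian(const uint8_t* buf, int i) {
  return (buf[i / 8] >> (i % 8)) & 1;  // what the old code effectively did
}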
Testing:
Updated tests to reflect expected behaviour for supported
and unsupported def level encodings.
Cherry-picks: not for 2.x.
Change-Id: I12c75b7f162dd7de8e26cf31be142b692e3624ae
Reviewed-on: http://gerrit.cloudera.org:8080/9241
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
Previously, an analytic function that used the same expr in both the
'partition by' and 'order by' clauses, where the expr met the
criteria for being materialized before the sort, would hit an
IllegalStateException due to the expr being inserted into the same
ExprSubstitutionMap twice.
If the values have already been partitioned on the expr, then all of
its values within each partition will be the same, and ordering
on the expr doesn't change the results. So the fix is to simply
exclude the duplicate expr from the 'order by' while still
partitioning on it.
Testing:
- Added a regression test to PlannerTest.
Change-Id: Id5f1d5fbc6f69df5850f96afed345ce27668c30b
Reviewed-on: http://gerrit.cloudera.org:8080/9218
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
Introduces a new TBLPROPERTY for controlling stats
extrapolation on a per-table basis:
impala.enable.stats.extrapolation=true/false
The property key was chosen to be consistent with
the impalad startup flag --enable_stats_extrapolation
and to indicate that the property was set and is used
by Impala.
Behavior:
- If the property is not set, then the extrapolation
behavior is determined by the impalad startup flag.
- If the property is set, it overrides the impalad
startup flag, i.e., extrapolation can be explicitly
enabled or disabled regardless of the startup flag.
Testing:
- added new unit tests
- core/hdfs run passed
Change-Id: Ie49597bf1b93b7572106abc620d91f199cba0cfd
Reviewed-on: http://gerrit.cloudera.org:8080/9139
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
This change enables clustering by default. IMPALA-2521 introduced the
'clustered' hint which inserts a local sort by the partitioning columns
to a query plan. The hint is only effective for HDFS and Kudu tables.
Like before, the 'noclustered' hint prevents clustering. If a table has
ordering columns defined, the 'noclustered' hint is ignored and we
issue a warning.
This change removes some tests that were added specifically to test
that clustering can be enabled using the 'clustered' hint. It changes
some tests to use the 'noclustered' hint to make sure that clustering
can be disabled. It also adds tests to make sure that we cover the
'noclustered' case properly.
Cherry-picks: not for 2.x.
Change-Id: Idbf2368cf4415e6ecfa65058daf6ff87ef62f9d9
Reviewed-on: http://gerrit.cloudera.org:8080/9153
Reviewed-by: Lars Volker <lv@cloudera.com>
Tested-by: Impala Public Jenkins
Change the expected result type of Kudu TPCH Q17 to Decimal because
DECIMAL_V2 is now enabled by default. This was not done earlier because
we were not running TPCH on Kudu regularly.
Cherry-picks: not for 2.x
Change-Id: I46fc038d40969547622707ce77a037494f0ed0a9
Reviewed-on: http://gerrit.cloudera.org:8080/9208
Reviewed-by: Taras Bobrovytsky <tbobrovytsky@cloudera.com>
Tested-by: Impala Public Jenkins
Based on the existing Parquet column chunk level null_count statistics,
Impala's Parquet scanner is enhanced to skip an entire row group if the
null_count statistics indicate that all the values of the predicated
column are NULL, as we wouldn't get any result rows from that row group
anyway.
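To illustrate the pruning test (field names are assumptions loosely
following parquet.thrift's Statistics; this is not the scanner's actual
code):

#include <cstdint>

// Sketch: a row group is prunable when a null-rejecting predicate (e.g.
// "col = 10") is applied to a column whose values are all NULL.
struct ColumnChunkStatsSketch {
  int64_t num_values;     // total values in the column chunk
  int64_t null_count;     // NULLs among them
  bool null_count_valid;  // whether the file supplied a null_count
};

bool CanSkipRowGroup(const ColumnChunkStatsSketch& stats,
                     bool predicate_rejects_null) {
  return predicate_rejects_null && stats.null_count_valid &&
         stats.null_count == stats.num_values;
}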
Change-Id: I141317af0e0df30da8f220b29b0bfba364f40ddf
Reviewed-on: http://gerrit.cloudera.org:8080/9140
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
Currently the catalog data is compressed in the statestore, but
uncompressed when passed between the FE and BE. This results in a ~2GB
limit on the metadata. IMPALA-3499 introduced a workaround in the
impalad, but there isn't one in the catalogd. This patch aims to
increase the size
limit for statestore updates, reduce the copying of the metadata and
reduce the memory footprint. With this patch, the catalog objects are
passed and (de)compressed between FE and BE one at a time. The new
limits are:
- A single catalog object cannot be larger than ~2GB.
- A statestore catalog update cannot be larger than ~4GB. This is the
compressed size if FLAGS_compact_catalog_topic is true.
The behavior of the catalog op executor is not changed. The data is not
compressed and the size limit is still 2GB.
Testing: Ran existing tests. A test for compressing and decompressing
catalog objects is added. Manually tested with a 1.95GB catalog object
and a 3.90 GB uncompressed statestore update.
Change-Id: I3a8819cad734b3a416eef6c954e55b73cc6023ae
Reviewed-on: http://gerrit.cloudera.org:8080/8825
Reviewed-by: Tianyi Wang <twang@cloudera.com>
Tested-by: Impala Public Jenkins
This patch reserves SQL:2016 reserved words, excluding:
1. Impala builtin function names.
2. Time unit words (year, month, etc.).
3. An exception list based on a discussion.
Some test cases are modified to avoid these words. An impalad and
catalogd startup option, reserved_words_version, is added. The words are
reserved if the option is set to "3.0.0".
Change-Id: If1b295e6a77e840cf1b794c2eb73e1b9d2b8ddd6
Reviewed-on: http://gerrit.cloudera.org:8080/9096
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Reviewed-by: Philip Zeyliger <philip@cloudera.com>
Tested-by: Impala Public Jenkins
The 'compute stats' statement currently computes column-level
statistics for all columns of a table.
This adds potentially unneeded work for columns whose stats
are not needed by queries. It can be especially costly for
very wide tables and unneeded large string fields.
This change modifies the 'compute stats' (non-incremental only)
to support a user-specified list of columns for which stats
should be computed. An example with the extension is as follows:
compute stats my_db.my_table(column_a, column_b);
While the phrase "for columns ..." is commonly used, since
'compute stats' seems fairly unique (vs. 'analyze table ...'),
this change favors brevity with the parenthesized column list.
Whereas currently 'compute stats' is applied to the columns that
can be analyzed, the 'compute stats' in this change results in
an error when a column is specified that cannot be analyzed
(e.g., column does not exist, column is of an unsupported type,
column is a partitioning column). Moreover, an empty column
list can be supplied which means that no columns will be analyzed.
Testing:
- analyzing a subset of columns is already supported (e.g., not all
columns can be analyzed), so the focus with testing is to check
that the user-specified columns are handled as expected.
- tests include: parser tests, ddl analysis, end-to-end tests.
Change-Id: If8b25dd248e578dc7ddd35468125cca12d1b9f27
Reviewed-on: http://gerrit.cloudera.org:8080/9133
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
Adds a concept of a "removed" query option that has no effect but does
not return an error when a user attempts to set it. These options are
not returned by "set" or "set all" commands that are executed in
impala-shell or server-side.
These query options have been deprecated for several releases:
DEFAULT_ORDER_BY_LIMIT, ABORT_ON_DEFAULT_LIMIT_EXCEEDED,
V_CPU_CORES, RESERVATION_REQUEST_TIMEOUT, RM_INITIAL_MEM,
SCAN_NODE_CODEGEN_THRESHOLD, MAX_IO_BUFFERS
RM_INITIAL_MEM did still have an effect, but it was undocumented and
MEM_LIMIT should be used in preference.
DISABLE_CACHED_READS also had an effect but it was documented as
deprecated.
Otherwise the options had no effect at all.
Testing:
Ran exhaustive build.
Updated query option tests to reflect the new behaviour.
Cherry-picks: not for 2.x.
Change-Id: I9e742e9b0eca0e5c81fd71db3122fef31522fcad
Reviewed-on: http://gerrit.cloudera.org:8080/9118
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
Escapes the following special characters for the RE2 library:
.\+*?[^]$(){}=!<>|:-
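Note that RE2 also ships RE2::QuoteMeta() for exactly this purpose;
whether this patch uses it or a hand-rolled escape list is not shown
here, but for illustration:

#include <re2/re2.h>
#include <string>

// Escape every regex metacharacter in a literal, e.g. "a.b" -> "a\.b".
std::string EscapeForRe2(const std::string& literal) {
  return RE2::QuoteMeta(literal);
}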
Testing:
Added some unit tests to ExprTest.StringRegexpFunctions.
Added some E2E tests to exprs.test.
Change-Id: I84c3e0ded26f6eb20794c38b75be9b25cd111e4b
Reviewed-on: http://gerrit.cloudera.org:8080/8900
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
In this commit we enable Decimal_V2 by default. We also update the
expected results in many of our tests.
Testing:
Ran an exhaustive test, which almost passed; updated the few tests
that failed.
Cherry-pick: not for 2.x
Change-Id: Ibbdd05bf986b7947f106b396017faa3a0bd87fd7
Reviewed-on: http://gerrit.cloudera.org:8080/9062
Reviewed-by: Taras Bobrovytsky <tbobrovytsky@cloudera.com>
Tested-by: Impala Public Jenkins
To be more standard-conformant, we should not perform
alias substitution in the subexpressions of GROUP BY,
HAVING, and ORDER BY.
=== Allowed ===
SELECT int_col / 2 AS x
FROM functional.alltypes
GROUP BY x;
SELECT int_col / 2 AS x
FROM functional.alltypes
ORDER BY x;
SELECT NOT bool_col AS nb
FROM functional.alltypes
GROUP BY nb
HAVING nb;
=== Not allowed ===
SELECT int_col / 2 AS x
FROM functional.alltypes
GROUP BY x / 2;
SELECT int_col / 2 AS x
FROM functional.alltypes
ORDER BY -x;
SELECT int_col / 2 AS x
FROM functional.alltypes
GROUP BY x
HAVING x > 3;
Some extra checks were added to AnalyzeExprsTest.java.
I had to update other tests to make them pass
since the new behavior is more restrictive.
I added alias.test to the end-to-end tests.
Cherry-picks: not for 2.x.
Change-Id: I0f82483b486acf6953876cfa672b0d034f3709a8
Reviewed-on: http://gerrit.cloudera.org:8080/8801
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Impala Public Jenkins
Currently we do not codegen CHAR types. This change checks
for CHAR literals in an expr and disables codegen.
Change-Id: I7e4e27350c53bc69ce412a004e392e7480214f73
Reviewed-on: http://gerrit.cloudera.org:8080/9102
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
This change disallows explicitly setting the Kudu table name property
for managed Kudu tables in a CREATE TABLE statement. The Kudu table
name property instead gets a generated value of the form
'impala::db_name.table_name', where table_name is the one given in
the CREATE TABLE statement.
Providing the Kudu table name property when creating a managed Kudu
table results in an error without creating the table. E.g.:
CREATE TABLE t (i INT) STORED AS KUDU
TBLPROPERTIES('kudu.table_name'='some_name');
Alongside the CREATE TABLE statement, the ALTER TABLE statement is
also changed to disallow modification of 'kudu.table_name' for
managed Kudu tables.
Change-Id: Ieca037498abf8f5fde67b77e824b720482cdbe6f
Reviewed-on: http://gerrit.cloudera.org:8080/8820
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Tested-by: Impala Public Jenkins