Testing: Ran the tests locally in a loop. Did a core/debug/hdfs
private build.
Change-Id: I0c56df0c6a5f771222dedb69353f8bebe01d5a90
Reviewed-on: http://gerrit.cloudera.org:8080/4302
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
This commit fixes two issues in toSql() of DistributeParam:
1. string literals were not quoted
2. range partition split rows were not printed.
In addition, this commit fixes a small issue in run-hive-server.sh.
Change-Id: I984a63a24f02670347b0e1efceb864d265d1f931
Reviewed-on: http://gerrit.cloudera.org:8080/4195
Reviewed-by: Matthew Jacobs <mj@cloudera.com>
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
Adds new parametrization to the unique database fixture:
- num_dbs: allows creating multiple unique databases at once;
the 2nd, 3rd, etc. database name is generated by appending
"2", "3", etc., to the first database name
- sync_ddl: allows creating the database(s) with sync_ddl,
which is needed by most tests in test_ddl.py
Testing: I ran debug/core and debug/exhaustive on HDFS and
core/debug on S3. Also ran the test locally in a loop on
exhaustive.
Change-Id: Idf667dd5e960768879c019e2037cf48ad4e4241b
Reviewed-on: http://gerrit.cloudera.org:8080/4155
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
The underlying issue IMPALA-2479 has been fixed, so it
should be safe to execute these tests in parallel again:
- test_runtime_filters.py (all tests)
- test_scanners.py::TestParquet::test_multiple_blocks
- test_scanners.py::TestParquet::test_multiple_blocks_one_row_group
Testing: Ran the tests locally in a loop. Did a private core/hdfs run.
Change-Id: I8f046e67eb1de1c6ff87980f906870ec9816f551
Reviewed-on: http://gerrit.cloudera.org:8080/4291
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Reviewed-by: Lars Volker <lv@cloudera.com>
Tested-by: Internal Jenkins
This changes the code to use the lddqu and movdqu instructions (via
Intel intrinsics) to allow unaligned memory access.
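For reference, a minimal sketch of such an unaligned load using these
intrinsics; the helper name LoadUnaligned16 is hypothetical and not part of
this patch:
  #include <emmintrin.h>  // _mm_loadu_si128 (SSE2, compiles to movdqu)
  #include <pmmintrin.h>  // _mm_lddqu_si128 (SSE3, compiles to lddqu)

  static inline __m128i LoadUnaligned16(const void* p) {
    // Safe even when 'p' is not 16-byte aligned, unlike _mm_load_si128
    // (movdqa), which faults on unaligned addresses.
    return _mm_lddqu_si128(reinterpret_cast<const __m128i*>(p));
  }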
Change-Id: I39b2b47bb717d5ac9727512a24fcf8a8a6a8dcc6
Reviewed-on: http://gerrit.cloudera.org:8080/4205
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Internal Jenkins
Before this patch, Impala would not account for the memory used to
aggregate runtime filters together in the coordinator. Impala's memory
could therefore be silently overcommitted.
This patch accounts for aggregated filter memory in a new filter
memtracker that is attached to the coordinator's query_mem_tracker(). If
the query memory limit is exceeded when a filter update arrives, that
update is discarded. If the filter is from a partitioned join, the
entire filter can therefore be discarded immediately (to alleviate
memory pressure) and a dummy 'always true' filter is sent to backends to
unblock them.
If the filter is from a broadcast join, no aggregation is done, so there
is no tracking. The Thrift input and output filter data structures are
not tracked (as we generally don't track RPC objects, but plan to in the
future). The filter payload is moved from the input request structure to
the output broadcast structure without copying.
Memory that is added to a memtracker must always be released. To do
this, we need to signal to the coordinator that it is finished, and that
there is no point trying to process any future updates that might arrive
concurrently. This patch adds Coordinator::Done() which is called from
QueryExecState::Done(), and which releases memory from all in-process
runtime filters.
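A simplified sketch of this accounting pattern; MemTracker here is a toy
stand-in for Impala's class, and ApplyFilterUpdate is a hypothetical function,
not the patch's actual code:
  #include <cstdint>
  #include <cstdio>

  class MemTracker {  // toy stand-in for Impala's MemTracker
   public:
    explicit MemTracker(int64_t limit) : limit_(limit) {}
    bool TryConsume(int64_t bytes) {
      if (consumed_ + bytes > limit_) return false;
      consumed_ += bytes;
      return true;
    }
    void Release(int64_t bytes) { consumed_ -= bytes; }
   private:
    int64_t limit_;
    int64_t consumed_ = 0;
  };

  // On a filter update: account for the aggregation memory first; if the
  // query is over its limit, discard the update (and, for a partitioned
  // join, the whole filter) and unblock backends with an 'always true' one.
  void ApplyFilterUpdate(MemTracker* filter_mem_tracker, int64_t filter_size) {
    if (!filter_mem_tracker->TryConsume(filter_size)) {
      std::printf("discarding filter; sending always-true filter\n");
      return;
    }
    // ... aggregate the update into the in-progress filter ...
    // Coordinator::Done() later calls Release() for all in-process filters.
  }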
Finally, this patch increases the upper limit for runtime filters to
512MB. This allows testing on very large datasets. The default maximum
is still 16MB, per RUNTIME_FILTER_MAX_SIZE.
Testing: Added a new test that triggers the OOM condition on the
coordinator. All existing runtime filter tests pass.
Change-Id: I3c52c8a1c2e79ef370c77bf264885fc859678d1b
Reviewed-on: http://gerrit.cloudera.org:8080/4066
Reviewed-by: Sailesh Mukil <sailesh@cloudera.com>
Tested-by: Internal Jenkins
Memory from the build side of a nested loop join is
referenced by its output batches, so accumulated build-side
resources must be transferred to the caller.
Special-cased handling of empty batches did not transfer
the memory. The fix is to accumulate empty batches and
transfer their resources in the same way as non-empty
batches. The iterator required changes to handle empty
batches in the list.
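A toy illustration of the fix; the types below are illustrative stand-ins for
Impala's RowBatch and RowBatchList, not the actual classes:
  #include <memory>
  #include <utility>
  #include <vector>

  struct Batch {  // stand-in for RowBatch
    int num_rows = 0;
    std::vector<std::unique_ptr<char[]>> resources;  // e.g. attached pools

    // Hand attached resources to 'dest', as RowBatch resource transfer does.
    void TransferResourceOwnership(Batch* dest) {
      for (auto& r : resources) dest->resources.push_back(std::move(r));
      resources.clear();
    }
  };

  // Before the fix, batches with num_rows == 0 were special-cased and their
  // resources never reached the caller; now every batch is accumulated so
  // its resources can be transferred with the output.
  void AddBuildBatch(std::vector<Batch>* build_batches, Batch&& batch) {
    build_batches->push_back(std::move(batch));  // keep even empty batches
  }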
Testing:
Added a unit test that exercises the bug in RowBatchList.
Added a query test that causes a crash in the ASAN build
and incorrect results in the debug build.
Change-Id: I3cb19e536b87bbb4d4ae82d1636ba1463a422789
Reviewed-on: http://gerrit.cloudera.org:8080/4182
Reviewed-by: Matthew Jacobs <mj@cloudera.com>
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Internal Jenkins
Fixes inserts into partitioned tables that have a shuffle hint and only constant
partition exprs. The rows to be inserted are merged at the coordinator where
the table sink is executed. There is no need to hash exchange rows.
Now accepts insert hints when inserting into unpartitioned tables. The shuffle
hint leads to a plan where all rows are merged at the coordinator where
the table sink is executed.
Change-Id: I1084d49c95b7d867eeac3297fd2016daff0ab687
Reviewed-on: http://gerrit.cloudera.org:8080/4162
Reviewed-by: Marcel Kornacker <marcel@cloudera.com>
Tested-by: Internal Jenkins
1. Minor fixes for cardinality estimation of unpartitioned tables.
2. Reworks handling of corrupt table stats as follows:
The stats of a table or partition are reported as corrupt if
numRows < -1, or if numRows == 0 but the table size is positive
(see the sketch after this list).
3. Removes the Preconditions check reported in IMPALA-1657 in favor
of issuing a corrupt table stats warning.
4. Fixes a few tests to set numRows together with
STATS_GENERATED_VIA_STATS_TASK so that the numRows is definitely
set in the HMS.
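A minimal sketch of the corruption rule from item 2; the actual check lives in
the Java frontend, and this helper is hypothetical:
  #include <cstdint>

  // Stats are corrupt if numRows is below -1 (-1 means "unknown"), or if
  // numRows claims zero rows while the table or partition has data.
  bool StatsAreCorrupt(int64_t num_rows, int64_t total_bytes) {
    return num_rows < -1 || (num_rows == 0 && total_bytes > 0);
  }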
Change-Id: I1d3305791d96e1c23a901af7b7c109af9352bb44
Reviewed-on: http://gerrit.cloudera.org:8080/4166
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
When deciding between a broadcast or repartition join, Impala calculates
the cost of each join as the total amount of data that is sent over the
network. This ignores some relevant costs, and can lead to bad plans.
One such relevant cost is the work to create the hash table used in the
join. This patch accounts for this by adding the amount of data inserted
into the hash table (the size of the right side of the join) to the
previous cost.
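A sketch of the resulting cost comparison; the real logic lives in the Java
planner, and these helpers are illustrative only:
  #include <cstdint>

  // Broadcast: ship the whole right side to every node, then build a hash
  // table on each of those nodes.
  int64_t BroadcastCost(int64_t rhs_bytes, int64_t num_nodes) {
    return num_nodes * rhs_bytes   // network
         + num_nodes * rhs_bytes;  // hash table build on every node
  }

  // Partitioned: hash-exchange both sides once; each right-side row is
  // inserted into exactly one node's hash table.
  int64_t PartitionedCost(int64_t lhs_bytes, int64_t rhs_bytes) {
    return lhs_bytes + rhs_bytes   // network
         + rhs_bytes;              // hash table build, split across nodes
  }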
This generally increases the estimated cost of broadcast joins relative
to repartitioning joins, as the broadcast join must build the hash table
on each node the data was broadcast to, so its effect will be to make
repartitioning joins more likely to be chosen, especially in large
clusters.
This patch has not yet been performance tested.
Change-Id: I03a0f56f69c8deae68d48dfdb9dc95b71aec11f1
Reviewed-on: http://gerrit.cloudera.org:8080/4098
Tested-by: Internal Jenkins
Reviewed-by: Matthew Jacobs <mj@cloudera.com>
For Avro tables the column information in the underlying database of the
Hive metastore can be different from what is specified in the avro
schema. HIVE-6308 aimed to improve upon this, but for older tables the
two don't necessarily align.
There are two possible cases:
1) Hive's underlying database contains a column which is not present in
the Avro schema file. In this case we encounter a NullPointerException
in DescribeResultFactory.java#L189 when trying to look up the column in
the internal table object.
2) The Avro schema contains a column, which is not present in the
underlying database. In this case the column will not be displayed in
describe formatted.
In addition to the automatic tests I verified this manually by creating
an Avro table with an external schema file in Hive. This populated the
underlying database with the column information. I then either removed
a column from the Avro schema file (case 1) or cleared the column
information from the "COLUMNS_V2" table in the underlying database
(case 2) and verified that the change fixed both cases.
Change-Id: Ieb69d3678e662465d40aee80ba23132ea13871a0
Reviewed-on: http://gerrit.cloudera.org:8080/4126
Reviewed-by: Lars Volker <lv@cloudera.com>
Tested-by: Internal Jenkins
Reviewed-by: Jim Apple <jbapple@cloudera.com>
Logging file or table data is a bad idea, and doing it by default is
particularly bad. This patch changes HdfsScanNode::LogRowParseError() to
log a file and offset only.
Testing: See rewritten tests.
To support testing this change, we also fix IMPALA-3895, by introducing
a canonical string __HDFS_FILENAME__ that all Hadoop filenames in the ERROR
output are replaced with before comparing with the expected
results. This fixes a number of issues with the old way of matching
filenames which purported to be a regex, but really wasn't. In
particular, we can now match the rest of an ERROR line after the
filename, which was not possible before.
In some cases, we don't want to substitute filenames because the ERROR
output is looking for a very specific output. In that case we can write:
$NAMENODE/<filename>
and this patch will not perform _any_ filename substitutions on ERROR
sections that contain the $NAMENODE string.
Finally, this patch fixes a bug where a test that had an ERRORS section
but no RESULTS section would silently pass without testing anything.
Change-Id: I5a604f8784a9ff7b4bf878f82ee7f56697df3272
Reviewed-on: http://gerrit.cloudera.org:8080/4020
Reviewed-by: Henry Robinson <henry@cloudera.com>
Tested-by: Internal Jenkins
The bug was also fixed by the fix for IMPALA-3063:
532b1fe118
This patch adds a regression test for IMPALA-2540.
Change-Id: I7c7dececfee90540fe7d5f8a606381ec50a3b241
Reviewed-on: http://gerrit.cloudera.org:8080/4071
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
Fixed StringToFloatInternal() not to parse strings like "1.23inf"
and "infinite" with leading/trailing garbage as Infinity. These
strings are now rejected with PARSE_FAILURE.
Only "inf" and "infinity" are accepted, parsing is case-insensitive.
"NaN" values are handled similarly: strings with leading/trailing
garbage like "nana" are rejected, parsing is case-insensitive.
Other changes:
- StringToFloatInternal() was cleaned up a bit. Parsing inf and NaN
strings was moved out of the main loop.
- Use std::numeric_limits<T>::infinity() instead of INFINITY macro
and std::numeric_limits<T>::quiet_NaN() instead of NAN macro.
- Fixed another minor bug: multiple dots were allowed when parsing
float values (e.g. "2.1..6" was interpreted as 2.16).
- New BE and E2E tests were added.
Change-Id: I9e17d0f051b300a22a520ce34e276c2d4460d35e
Reviewed-on: http://gerrit.cloudera.org:8080/3791
Reviewed-by: Michael Ho <kwho@cloudera.com>
Tested-by: Internal Jenkins
I wrote these tests for my IMPALA-3987 patch, but other issues block
that optimisation. These tests exercise an interesting corner case,
so I split them out into a separate patch.
The functional tests exercise every join mode for nested loop join and
hash join with an empty build side. The perf test exercises hash join
with an empty build side.
Testing:
Made sure the tests passed with both partitioned and non-partitioned
hash join implementations. Ran the targeted perf query through the
single node perf run script to make sure it worked.
Change-Id: I0a68cafec32011a47c569b254979601237e7f2a5
Reviewed-on: http://gerrit.cloudera.org:8080/4051
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Internal Jenkins
Testing: Ran the FE planner tests. Examined all the changed plans
to verify that the changes are beneficial according to our
cardinality estimates. Still need to do a real perf run.
Change-Id: I8ba903f1df2446350cca7e71fdb13f550bf9de72
Reviewed-on: http://gerrit.cloudera.org:8080/4035
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
As of Kudu 0.9, DISTRIBUTE BY is now required when creating
a new Kudu table. Create table analysis, data loading, and
tests are updated to reflect this.
This also bumps the Kudu version to 0.10.0.
Change-Id: Ieb15110b10b28ef6dd8ec136c2522b5f44dca43e
Reviewed-on: http://gerrit.cloudera.org:8080/3987
Reviewed-by: Matthew Jacobs <mj@cloudera.com>
Tested-by: Internal Jenkins
Before this change joins were inverted while doing join ordering.
That approach was unnecessarily complex because it required
modifying the global analysis state for correct conjunct
placement, etc. However, join inversion is independent of join
ordering, and the existing approach could lead to generating
invalid plans with distributed non-equi right outer/semi joins,
which we cannot execute in the backend.
After this change joins are inverted in a separate pass over
the single-node plan. This simplifies the inversion
logic and allows us to avoid generating those invalid plans.
Note that this change is not only a separation of functionality
for the following reasons:
1. Our join cardinality estimation is not symmetric, i.e., A JOIN B
may not give the same estimate as B JOIN A due to our FK/PK detection
heuristic. In the context of this patch this means that an inverted
join may have a different cardinality estimate, so plans may change
depending on whether the inversion is done during join ordering or after.
2. We currently only invert outer/semi/anti joins based on the rhs table
ref join op. In this patch I want to preserve the existing behavior as
much as possible, but when doing the join inversion in a separate pass we
may see a join op in a JoinNode that is different from the rhs table ref's.
So in some situations the inversion behavior based on the join op could be
different and there are some examples in this patch.
This patch also moves the logic of converting hash joins to
nested-loop joins into a separate pass over the single-node plan.
Change-Id: If86db7753fc585bb4c69612745ec0103278888a4
Reviewed-on: http://gerrit.cloudera.org:8080/3846
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
This adds a test that performs some simple fuzz testing of HDFS
scanners. It creates a copy of a given HDFS table, with each
file in the table corrupted in a random way: either a single
byte is set to a random value, or the file is truncated to a
random length. It then runs a query that scans the whole table
with several different batch_size settings. I made some effort
to make the failures reproducible by explicitly seeding the
random number generator, and providing a mechanism to override
the seed.
The fuzzer has found crashes resulting from corrupted or truncated
input files for RCFile, SequenceFile, Parquet, and Text LZO so far.
Avro only had a small buffer read overrun detected by ASAN.
Includes fixes for Parquet crashes found by the fuzzer, a small
buffer overrun in Avro, and a DCHECK in MemPool.
Initially it is only enabled for Avro, Parquet, and uncompressed
text. As follow-up work we should fix the bugs in the other scanners
and enable the test for them.
We also don't implement abort_on_error=0 correctly in Parquet:
for some file formats, corrupt headers result in the query being
aborted, so an exception will xfail the test.
Testing:
Ran the test with exploration_strategy=exhaustive in a loop locally
with both DEBUG and ASAN builds for a couple of days over a weekend.
Also ran exhaustive private build.
Change-Id: I50cf43195a7c582caa02c85ae400ea2256fa3a3b
Reviewed-on: http://gerrit.cloudera.org:8080/3833
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Internal Jenkins
The bug: During join ordering we rely on the column stats of
join predicates for estimating the join cardinality. We have code
that tries to find the stats of a column through views but there
was a bug in identifying slots that belong to base table scans.
The bug led us to incorrectly accept slots of view references,
which do not have stats.
This patch fixes the above issue and adds new test infrastructure
for creating test-local views. It adds a TPCH-equivalent database that
contains views of the form "select * from tpch_basetbl" for all TPCH tables,
and adds tests for the plans of all TPCH queries on the view database.
Change-Id: Ie3b62a5e7e7d0e84850749108c13991647cedce6
Reviewed-on: http://gerrit.cloudera.org:8080/3865
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
The bug: Our BetweenPredicate has a complex object structure that is unlike
most other Exprs because we generate an equivalent CompoundPredicate during
analysis and replace the original children. Keeping the various members in
sync and preserving the object structure during clone() and substitute() is
very difficult and error prone. In particular, subquery rewriting is
difficult because we extract and replace correlated BinaryPredicates.
Substituting BinaryPredicates in a BetweenPredicate's children is not
equivalent to a substitution on the BetweenPredicate's original children,
so keeping the various redundant members in sync is quite difficult.
The fix is to replace BetweenPredicates with their equivalent CompoundPredicates
before performing subquery rewrites.
We ultimately still want to fix clone() and substitute() for BetweenPredicates,
but an elegant solution is likely to be more involved.
Change-Id: I0838b30444ed9704ce6a058d30718a24caa7444a
Reviewed-on: http://gerrit.cloudera.org:8080/3804
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
Previously, the only way to refresh metadata for a partition was to refresh
the whole table. This is a relatively time-consuming process, especially if
there are many partitions and only one is to be refreshed.
This patch allows the client to REFRESH on a single partition by using the
following syntax:
REFRESH [database_name.]table_name PARTITION (partition_spec)
Testing:
Added parsing and authorization tests in ParserTest.java and
AuthorizationTest.java respectively. A new test file
"test_refresh_partition.py" was added for testing functionality.
Performance:
For a table with 10000 partitions and 1 file per partition:
                     execResetMetadata()   Total Execution Time
Refresh Table        3795 ms               4630 ms
Refresh Partition    42 ms                 680 ms
We see that the time to refresh improves by a factor of about 90x, but due
to a significant overhead of about 640 ms in this case the effective
improvement is about 7x. As the size of the table and the number of
partitions increase, this improvement becomes more significant.
Change-Id: Ia9aa25d190ada367fbebaca47ae8b2cafbea16fb
Reviewed-on: http://gerrit.cloudera.org:8080/3813
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Tested-by: Internal Jenkins
This commit adds a set of planner tests for Kudu tables based on the 22
TPC-H queries.
Change-Id: I6c40534b72b9aa1ee582b9679c2a63cad52df703
Reviewed-on: http://gerrit.cloudera.org:8080/3790
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Tested-by: Internal Jenkins
With this commit we enable loading of TPC-H data in Kudu tables and
running the 22 TPC-H queries against Kudu. Since Kudu doesn't support
the decimal data type, we had to modify the queries to use the round()
function and update the test results.
Change-Id: I3a5de71fefa92a78970226d8f49ef445d28f9289
Reviewed-on: http://gerrit.cloudera.org:8080/3789
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Tested-by: Internal Jenkins
The bug: For correct predicate assignment we rely on TableRef.getAllTupleIds()
and TableRef.getMaterializedTupleIds(). The implementation of those functions
used to traverse the chain of table refs and collect the appropriate ids.
However, during plan generation we alter the chain of table refs, in particular,
for dealing with nested collections, so those altered TableRefs do not return the
expected list of ids, leading to wrong decisions in predicate assignment.
The fix: Cache the lists of ids during analysis, so we are free to alter the
chain of TableRefs during plan generation.
Change-Id: I298b8695c9f26644a395ca9f0e86040e3f5f3846
Reviewed-on: http://gerrit.cloudera.org:8080/2415
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
Previously, each fragment using dynamic code generation would
parse the bitcode module and populate the LLVM data structures
for all the functions and their bodies in the bitcode module.
This is wasteful as we may only use a few functions out of all
the functions parsed. We rely on dead code elimination to
delete most of the unused functions so we won't waste time
compiling them.
This change implements lazy materialization of the functions'
bodies. On the initial parse of the bitcode module, we just
create the Function objects for each function in the module.
The functions' bodies will be materialized on demand from the
bitcode module when they are actually referenced in the query.
This ensures that the prepare time during codegen is proportional
to the number of IR functions referenced by the query instead
of being proportional to the total number of IR functions in
the module.
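A sketch of the underlying LLVM mechanism, written against a recent LLVM C++
API (the LLVM version Impala used at the time has slightly different
signatures); the helper functions are illustrative:
  #include <memory>
  #include "llvm/ADT/StringRef.h"
  #include "llvm/Bitcode/BitcodeReader.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/Error.h"
  #include "llvm/Support/MemoryBuffer.h"

  // Initial parse: creates Function declarations only; bodies stay in the
  // bitcode buffer until materialized.
  llvm::Expected<std::unique_ptr<llvm::Module>> LoadModuleLazily(
      llvm::MemoryBufferRef bitcode, llvm::LLVMContext& ctx) {
    return llvm::getLazyBitcodeModule(bitcode, ctx);
  }

  // Deserialize one function's body on first use, so prepare time scales
  // with the number of functions a query actually references.
  llvm::Error MaterializeFunction(llvm::Module& module, llvm::StringRef name) {
    llvm::Function* fn = module.getFunction(name);
    if (fn == nullptr) return llvm::Error::success();  // not in this module
    return fn->materialize();
  }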
This change also stops cross-compiling BufferedTupleStream::GetTupleRow()
as there isn't much benefit for doing it. In addition, move the ctors
and dtors of LikePredicate to the header file to avoid an unnecessary
alias in the IR module.
For TPCH-Q2, a fragment which only codegens 9 functions used to spend
146ms in codegen. It now goes down to 35ms, a 76% reduction.
Before:
CodeGen:(Total: 146.041ms, non-child: 146.041ms, % non-child: 100.00%)
- CodegenTime: 0.000ns
- CompileTime: 2.003ms
- LoadTime: 0.000ns
- ModuleBitcodeSize: 2.12 MB (2225304)
- NumFunctions: 9 (9)
- NumInstructions: 129 (129)
- OptimizationTime: 29.019ms
- PrepareTime: 114.651ms
After:
CodeGen:(Total: 35.288ms, non-child: 35.288ms, % non-child: 100.00%)
- CodegenTime: 0.000ns
- CompileTime: 1.880ms
- LoadTime: 0.000ns
- ModuleBitcodeSize: 2.12 MB (2221276)
- NumFunctions: 9 (9)
- NumInstructions: 129 (129)
- OptimizationTime: 5.101ms
- PrepareTime: 28.044ms
Change-Id: I6ed7862fc5e86005ecea83fa2ceb489e737d66b2
Reviewed-on: http://gerrit.cloudera.org:8080/3220
Reviewed-by: Michael Ho <kwho@cloudera.com>
Tested-by: Internal Jenkins
Also fix a stale comment in the avro scanner header.
The main work here is to fix the handling of empty result sets in the
test result verifier. This is a problem because we wanted to verify
that the results in the test file were a superset of the rows
returned, and this was thrown off by superfluous '' rows in the expected
and actual result sets.
The basic problem is that the way test file sections were parsed
conflated an empty result section with a non-empty result section
that had a single empty string. I.e.:
---- RESULTS
====
vs
---- RESULTS

====
both got resolved to [''].
Change-Id: Ia007e558d92c7e4ce30be90446fdbb1f50a0ebc4
Reviewed-on: http://gerrit.cloudera.org:8080/3413
Tested-by: Internal Jenkins
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Added support for the 'ignore nulls' keyword to the last_value and
first_value analytic functions, e.g. 'last_value(col ignore nulls)',
which returns the last value from the window that is not null,
or null if all of the values in the window are null.
We handle 'ignore nulls' in the FE in the same way that we handle
'distinct' - by adding isIgnoreNulls as a field in FunctionParams.
To avoid affecting performance when 'ignore nulls' is not used, and
to avoid having to special case 'ignore nulls' on the backend, this
patch adds 'last_value_ignore_nulls' and 'first_value_ignore_nulls'
builtin analytic functions that wrap 'last_value' and 'first_value'
respectively.
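A toy model of the wrapper's semantics (not the backend implementation):
  #include <optional>
  #include <vector>

  // last_value(col ignore nulls) over a window: the last non-null value, or
  // null (an empty optional here) if every value in the window is null.
  template <typename T>
  std::optional<T> LastValueIgnoreNulls(
      const std::vector<std::optional<T>>& window) {
    std::optional<T> last;  // stays empty if all values are null
    for (const auto& v : window) {
      if (v.has_value()) last = v;
    }
    return last;
  }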
Change-Id: Ic27525e2237fb54318549d2674f1610884208e9b
Reviewed-on: http://gerrit.cloudera.org:8080/3328
Reviewed-by: Thomas Tauber-Marshall <tmarshall@cloudera.com>
Tested-by: Internal Jenkins
There were two separate issues:
First, the SortNode incorrectly picked up unassigned conjuncts, and expected those to
be empty. In this case where predicates are migrated into union operands, there could
actually be unassigned conjuncts bound by the SortNode's tuple id (and so would be
incorrectly picked up). The fix is to not pick up unassigned conjuncts in the SortNode,
and allow them to be picked up later (into a SelectNode).
Second, when generating the plan for union operands we were missing a call to
graft a SelectNode on top of the operand plan to capture unassigned conjuncts.
Change-Id: I95d105ac15a3dc975e52dfd418890e13f912dfce
Reviewed-on: http://gerrit.cloudera.org:8080/3600
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Alex Behm <alex.behm@cloudera.com>
FunctionContext::Allocate() and FunctionContext::AllocateLocal()
used to return NULL for zero-length allocations. This made it
hard to distinguish between allocation failures and zero-length
allocations. Such confusion may lead to a DCHECK failure in the
macro RETURN_IF_NULL() in debug builds, or to accessing NULL
pointers in non-debug builds.
This change fixes the problem above by returning NULL only if
there is allocation failure. Zero-length allocations will always
return a dummy non-NULL pointer.
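A minimal sketch of the new contract; the choice of dummy pointer below is
illustrative, not necessarily what the patch uses:
  #include <cstdint>
  #include <cstdlib>

  static uint8_t kZeroLengthDummy;  // any stable non-NULL address works

  uint8_t* Allocate(int64_t byte_size) {
    if (byte_size == 0) return &kZeroLengthDummy;  // success, nothing to do
    // NULL from here on unambiguously means allocation failure.
    return static_cast<uint8_t*>(std::malloc(byte_size));
  }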
Change-Id: Id8c3211f4d9417f44b8018ccc58ae182682693da
Reviewed-on: http://gerrit.cloudera.org:8080/3601
Reviewed-by: Michael Ho <kwho@cloudera.com>
Tested-by: Internal Jenkins
This change adds the missing switch statement in
CodegenReadScalar() for AVRO_DECIMAL so that we will
also codegen if an avro table contains AVRO_DECIMAL.
With this change, the following query improves by 37.5%,
going from 8s to 5s:
select count(distinct l_linenumber), avg(l_extendedprice), max(l_discount), min(l_tax) from tpch15_avro.lineitem;
This change also un-inlines BitUtil::ByteSwap() as the
third argument 'len' is not a compile-time constant at
all call sites.
Change-Id: I51adf0c1ba76e055f31ccb0034a0d23ea2afb30e
Reviewed-on: http://gerrit.cloudera.org:8080/3489
Reviewed-by: Michael Ho <kwho@cloudera.com>
Tested-by: Internal Jenkins
Since it is possible to create an Avro table with both column
definitions and an Avro schema, Impala attempts to reconcile
inconsistencies in the two schema definitions, generally preferring the
Avro schema. The only exception to this rule was with
CHAR/VARCHAR/STRING columns, where the column definition was preferred
in order to support tables with CHAR/VARCHAR columns although Avro only
supports STRING. This exception is confusing because the name for such a
column will be taken from the column definition (and not from the Avro
schema).
This patch prefers the name and comment from the Avro schema
definition, and uses the column type from the column definition
for CHAR/VARCHAR/STRING columns.
Change-Id: Ia3e43b2885853c2b4f207a45a873c9d7f31379cd
Reviewed-on: http://gerrit.cloudera.org:8080/3331
Reviewed-by: Huaisi Xu <hxu@cloudera.com>
Tested-by: Internal Jenkins
Previously all code paths using getDbsMetadata() suffered
unnecessary privilege checks:
1. Impala checked privileges on all databases and tables before
applying user-provided JDBC pattern filters.
2. Impala passed a null pattern to getDbsMetadata() when the
user did not provide one. However, a null pattern is treated
as "%", which matches everything, thereby causing unnecessary
privilege checks for catalog objects that are not in the
result set.
This patch creates the PatternMatcher early so that a user-specified
null pattern is respected when calling getDbsMetadata().
Change-Id: I17d8c5b9fb12483e4b01b819fba48b6849311a14
Reviewed-on: http://gerrit.cloudera.org:8080/3371
Reviewed-by: Huaisi Xu <hxu@cloudera.com>
Tested-by: Huaisi Xu <hxu@cloudera.com>
The test could hit one of two similar errors, depending on the order in
which it read the files. This patch fixes the ERROR/CATCH blocks to be more
permissive, so that either error is accepted.
Change-Id: I785048eda36552981b6ba9c739517f83ac8715f4
Reviewed-on: http://gerrit.cloudera.org:8080/3402
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Internal Jenkins
Added checks/error handling:
* Negative string lengths while decoding a dictionary or data page
(see the sketch after this list).
* Buffer overruns while decoding dictionary or data page.
* Some metadata FILECHECKs were converted to statuses.
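A minimal sketch of the added length validation, with hypothetical names:
  #include <cstdint>

  // A decoded string length must be non-negative and must fit in the
  // remaining page buffer; otherwise the page is corrupt and we return an
  // error status instead of overrunning the buffer or hitting a DCHECK.
  bool DecodedStringIsValid(int32_t len, const uint8_t* pos,
                            const uint8_t* buf_end) {
    if (len < 0) return false;              // negative length: corrupt data
    if (len > buf_end - pos) return false;  // would overrun the page buffer
    return true;
  }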
Testing:
Unit tests for:
* decoding of strings with negative lengths
* truncation of all parquet types
* dictionary creation correctly handling error returns from Decode().
End-to-end tests for handling of negative string lengths in
dictionary- and plain-encoded data in corrupt files, and for
handling of buffer overruns for string data. The corrupted
parquet files were generated by hacking Impala's parquet
writer to write invalid lengths, and by hacking it to
write plain-encoded data instead of dictionary-encoded
data by default.
Performance:
set num_nodes=1;
set num_scanner_threads=1;
select * from biglineitem where l_orderkey = -1;
I inspected MaterializeTupleTime. Before the average was 8.24s and after
was 8.36s (a 1.4% slowdown, within the standard deviation of 1.8%).
Change-Id: Id565a2ccb7b82f9f92cc3b07f05642a3a835bece
Reviewed-on: http://gerrit.cloudera.org:8080/3387
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Internal Jenkins
This patch adds error checking to the Avro scanner (both the codegen'd
and interpreted paths), including out-of-bounds checks and data
validity checks.
I ran a local benchmark using the following queries:
set num_scanner_threads=1;
select count(i) from default.avro_bigints_big; # file contains only longs
select max(l_orderkey) from biglineitem_avro; # file has tpch.lineitem schema
Both benchmark queries see negligible or no performance impact.
This patch adds a new Avro scanner unit test and an end-to-end test
that queries several corrupted files, as well as updates the zig-zag
varlen int unit test.
Change-Id: I801a11c496a128e02c564c2a9c44baa5a97be132
Reviewed-on: http://gerrit.cloudera.org:8080/3072
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Internal Jenkins
This is the first in a series of patches to clean up test_ddl.py
Summary of changes:
- Break up test_create() and corresponding .test files into:
* test_create_database()
* test_create_table()
* test_create_table_like_table()
* test_create_table_like_file()
* test_create_table_as_select()
- Merge test_nested() into the tests above
- Move a test into test_hms_integration.py
- Add a new test_ddl_base.py as base class for DDL tests.
The plan is to split up test_ddl.py into several smaller
.py files in subsequent patches.
Testing: I tested test_ddl.py and test_hms_integration.py on
exhaustive locally as well as in private builds on all filesystems.
Change-Id: I5f4c044d39e165c2535961b8d0a765c8dbbd051c
Reviewed-on: http://gerrit.cloudera.org:8080/3044
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Alex Behm <alex.behm@cloudera.com>
Adds handling and testing for a specific Parquet data corruption
scenario with plain dictionary encoded values.
The problematic scenario is when the repeat or literal count of
the RLE-encoded dictionary indexes is decoded as 0 - an invalid value.
There are several other cases of data corruption that are not yet
handled gracefully. This patch only handles one specific case.
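A sketch of the specific check, based on Parquet's RLE/bit-packed hybrid
encoding; the helper is hypothetical:
  #include <cstdint>

  // In the hybrid encoding, the run header's low bit selects a bit-packed
  // (literal) run or a repeated run; the remaining bits give the count. A
  // count of 0 is invalid and must be treated as corruption rather than
  // looping or reading garbage values.
  bool RleRunCountIsValid(uint32_t run_header) {
    bool is_literal = (run_header & 1) != 0;
    uint32_t count =
        is_literal ? (run_header >> 1) * 8  // literal runs come in groups of 8
                   : (run_header >> 1);
    return count > 0;
  }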
Change-Id: Ibf406c82cdded37966f09c81e4cc1446d2b60d63
Reviewed-on: http://gerrit.cloudera.org:8080/3299
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Alex Behm <alex.behm@cloudera.com>
The Sorter's memory management logic failed to correctly manage buffers
when spilling. It would try to make use of all buffers in the system,
neglecting to account for other operators' buffer usage.
This patch adjusts the logic so that it handles contention for buffers
so long as it can get enough buffers to make progress. Instead of
precalculating the number of buffers it thinks it should be able to
pin, it just makes a best-effort attempt to pin the initial buffers of
as many runs as possible, up to a limit. As long as it can pin three
runs, it can make progress.
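A sketch of the best-effort pinning; every name here is hypothetical, and
TryPinRun stands in for pinning one run's initial buffer through the block
manager:
  #include <algorithm>

  // Three pinned runs are the minimum the commit says is needed to make
  // progress; beyond that, take whatever buffers contention allows.
  constexpr int MIN_PINNED_RUNS = 3;

  template <typename TryPinRun>
  bool PinRunsForMerge(int num_runs, int max_runs, TryPinRun try_pin,
                       int* num_pinned) {
    *num_pinned = 0;
    for (int i = 0; i < std::min(num_runs, max_runs); ++i) {
      if (!try_pin(i)) break;  // another operator holds the remaining buffers
      ++(*num_pinned);
    }
    return *num_pinned >= MIN_PINNED_RUNS;
  }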
Testing:
Added an additional test that failed with OOM before the patch.
An analytic function test that was meant to fail also started succeeding
so I had to adjust the limit there too.
Change-Id: Idfe55cc13c7f2b54cba1d05ade44cbcf6bb573c0
Reviewed-on: http://gerrit.cloudera.org:8080/2908
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Tim Armstrong <tarmstrong@cloudera.com>
Clarify relationships between classes; clean up the previous mess
where every class was friends with every other, so there's an actual
distinction between public and private members. TupleIterator
is now no longer tied to TupleSorter, just Run.
Document and enforce invariants in many cases.
Factor out some functions from large functions.
Simplify and document iterator logic.
Make management of buffers when iterating over output stream more
explicitly correct: either use MarkNeedToReturn() or attach block
to the batch as appropriate. The SortedRunMerger didn't handle
resource transfer correctly, except if all the memory came from
the batch's MemPool. This patch fixes the cases when resources
are attached to the batches, but not the 'need_to_return' case.
Document that SortedRunMerger requires 'deep_copy_input' to be true
if batches can have the 'need_to_return' flag set.
Also use the atomic block exchange operation when moving between
blocks in unpinned runs to prevent pin failures at that point.
I have explicitly avoided changing the hairy block management logic
when allocating buffers for merging; that will need addressing in
a follow-up patch.
Add a SpilledRuns counter so that it's more explicit that spilling
occurred.
Testing:
Added some tests for corner cases with empty and NULL strings.
Fixed a test that previously failed with OOM but now succeeds.
Performance:
Benchmarking against the old code initially revealed some regressions from
changes in inlining. Force-inlining TupleComparator::operator() and the
iterator Next()/Prev() functions helped, and performance seems similar or
slightly better on the targeted orderby benchmarks.
Change-Id: I9c619e81fd1b8ac50e257172c8bce101a112b52a
Reviewed-on: http://gerrit.cloudera.org:8080/2826
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Tim Armstrong <tarmstrong@cloudera.com>
With the prefetching changes, the probe expressions' local
allocations are no longer freed via QueryMaintenance() in
PHJ. Instead, they are freed explicitly in GetNext() after
an entire probe batch has been processed. Due to this
change in how we handle local allocations of probe expressions,
a DCHECK was added to verify that there is no local allocation
from the probe expression in ProcessBuildInput(). Turns out that
Expr::Open() called in ConstructBuildSide() on the probe
expressions may have caused local allocations to occur for
certain UDFs (e.g. extract()).
This change handles the situation above by freeing local
allocations of the probe expressions once before calling
ProcessBuildInput() in ConstructBuildSide(). A new regression
test is also added for this specific case.
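A toy stand-in for the pattern (the real classes are Impala's expression
contexts and the partitioned hash join node; these names are illustrative):
  #include <vector>

  struct ExprContext {  // stand-in for Impala's expression context
    std::vector<void*> local_allocations;
    void FreeLocalAllocations() { local_allocations.clear(); }
  };

  void ConstructBuildSide(std::vector<ExprContext*>& probe_expr_ctxs) {
    // The fix: Expr::Open() may have left local allocations behind for some
    // UDFs (e.g. extract()), so free them once here, before the build phase
    // DCHECKs that no probe-side local allocations exist.
    for (ExprContext* ctx : probe_expr_ctxs) ctx->FreeLocalAllocations();
    // ... ProcessBuildInput(...) ...
  }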
Change-Id: I2096ca3e2093c5ab0ecc0e7ca4cd1b5f3c1ed1ed
Reviewed-on: http://gerrit.cloudera.org:8080/3253
Reviewed-by: Michael Ho <kwho@cloudera.com>
Tested-by: Internal Jenkins
This patch adds test coverage for partitioned inserts where the memory
limit will be exceeded by the table writer.
Testing:
Ran the test with exploration_strategy=exhaustive locally, then ran an exhaustive
private build. Manually inspected the memory limit report to make sure
that it was behaving as expected (writer memory was being correctly
tracked, etc).
Change-Id: I8583c60d648af9eedc956315df5ac3c3d6608704
Reviewed-on: http://gerrit.cloudera.org:8080/3245
Reviewed-by: Matthew Jacobs <mj@cloudera.com>
Tested-by: Internal Jenkins
This change ensures that Avro tables created without column definitions
remain queryable if columns are added via ALTER TABLE. The bug was that
when synthesizing an Avro schema from the column definitions we used to
not add default values.
Change-Id: Ib86e9ba1f4329b285ae14ee299365f7291a7410e
Reviewed-on: http://gerrit.cloudera.org:8080/3219
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins