Logging file or table data is a bad idea, and doing it by default is
particularly bad. This patch changes HdfsScanNode::LogRowParseError() to
log only the file and offset.
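A minimal sketch of the new log line (the signature and accessor names
here are illustrative, not the exact Impala API):

  void HdfsScanNode::LogRowParseError(ScannerContext::Stream* stream) {
    std::stringstream ss;
    // Log only where the bad row came from; never echo the row's data.
    ss << "Error parsing row: file: " << stream->filename()
       << ", before offset: " << stream->file_offset();
    runtime_state_->LogError(ss.str());  // hypothetical logging call
  }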
Testing: See rewritten tests.
To support testing this change, we also fix IMPALA-3895 by introducing
a canonical string, __HDFS_FILENAME__, which replaces every Hadoop
filename in the ERROR output before it is compared with the expected
results. This fixes a number of issues with the old way of matching
filenames, which purported to be a regex but really wasn't. In
particular, we can now match the rest of an ERROR line after the
filename, which was not possible before.
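As an illustration of the canonicalization step (the substitution itself
lives in the test framework; this C++ sketch, with an assumed URI
pattern, just shows the idea):

  #include <regex>
  #include <string>

  // Replace every HDFS URI in the actual ERROR output with the canonical
  // token before comparing against the expected results.
  std::string CanonicalizeFilenames(const std::string& actual_errors) {
    static const std::regex kHdfsUri("hdfs://[^ \n]+");
    return std::regex_replace(actual_errors, kHdfsUri, "__HDFS_FILENAME__");
  }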
In some cases we don't want to substitute filenames because the test
expects very specific ERROR output. In that case we can write:
$NAMENODE/<filename>
and this patch will not perform _any_ filename substitutions on ERROR
sections that contain the $NAMENODE string.
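For example, a hypothetical ERRORS section that opts out of substitution
(the path shown is made up) might look like:

---- ERRORS
Error parsing row: file: $NAMENODE/test-warehouse/bad_data/file.txt, before offset: 0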
Finally, this patch fixes a bug where a test that had an ERRORS section
but no RESULTS section would silently pass without testing anything.
Change-Id: I5a604f8784a9ff7b4bf878f82ee7f56697df3272
Reviewed-on: http://gerrit.cloudera.org:8080/4020
Reviewed-by: Henry Robinson <henry@cloudera.com>
Tested-by: Internal Jenkins
Added checks/error handling (a sketch follows the list):
* Negative string lengths while decoding dictionary or data page.
* Buffer overruns while decoding dictionary or data page.
* Some metadata FILECHECKs were converted to statuses.
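A minimal sketch of the string-length validation, using Impala-like
types (Status, StringValue); this is not the exact patch:

  #include <cstdint>
  #include <cstring>

  // Validate a plain-encoded string before referencing data page memory.
  Status DecodeStringValue(const uint8_t* buf, int buf_len,
      StringValue* out) {
    if (buf_len < 4) return Status("Truncated string length in data page");
    int32_t str_len;
    memcpy(&str_len, buf, sizeof(str_len));
    // Negative lengths indicate a corrupt (or malicious) file ...
    if (str_len < 0) return Status("Negative string length in data page");
    // ... as do lengths that would overrun the remaining page buffer.
    if (str_len > buf_len - 4) return Status("String length exceeds page");
    out->ptr = const_cast<char*>(reinterpret_cast<const char*>(buf)) + 4;
    out->len = str_len;
    return Status::OK();
  }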
Testing:
Unit tests for:
* decoding of strings with negative lengths
* truncation of all parquet types
* dictionary creation correctly handling error returns from Decode().
End-to-end tests for handling of negative string lengths in
dictionary- and plain-encoded data in corrupt files, and for
handling of buffer overruns for string data. The corrupted
parquet files were generated by hacking Impala's parquet
writer to write invalid lengths, and by hacking it to
write plain-encoded data instead of dictionary-encoded
data by default.
Performance:
set num_nodes=1;
set num_scanner_threads=1;
select * from biglineitem where l_orderkey = -1;
I inspected MaterializeTupleTime. Before the change the average was
8.24s; after, it was 8.36s (a 1.4% slowdown, within the standard
deviation of 1.8%).
Change-Id: Id565a2ccb7b82f9f92cc3b07f05642a3a835bece
Reviewed-on: http://gerrit.cloudera.org:8080/3387
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Internal Jenkins
This change is a first step towards a more efficient Parquet scanner.
The focus is on presenting the new code flow that materializes
the table-level slots in a column-wise fashion, without going deep
into actually improving scan efficiency.
After these changes there are several obvious places that should
be optimized to realize efficiency gains.
Summary of changes (a sketch of the new flow follows the list):
- the table-level tuples are materialized in a column-wise fashion
with new ColumnReader::ReadValueBatch() functions
- this is done by materializing a 'scratch' batch, and transferring
scratch tuples that survive filters/conjuncts to the output batch
- the tuples of nested collections are still materialized in
a row-wise fashion using the ColumnReader::ReadValue() function,
just as before
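A minimal sketch of the column-wise flow; the names below are loosely
based on the description above and do not match the patch exactly:

  Status HdfsParquetScanner::AssembleRows(RowBatch* output_batch) {
    // 1. Materialize the table-level slots column by column into the
    //    scratch batch.
    for (ParquetColumnReader* col_reader : column_readers_) {
      RETURN_IF_ERROR(col_reader->ReadValueBatch(&scratch_batch_));
    }
    // 2. Evaluate filters/conjuncts against the scratch tuples and
    //    transfer only the survivors to the output batch.
    for (int i = 0; i < scratch_batch_.num_tuples; ++i) {
      Tuple* tuple = scratch_batch_.GetTuple(i);  // hypothetical accessor
      if (!EvalConjuncts(tuple)) continue;
      AppendToOutput(output_batch, tuple);        // hypothetical helper
    }
    return Status::OK();
  }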
Mini benchmark
I ran the following queries on a single impalad before and after my
change using a synthetic 'huge_lineitem' table.
I modified hdfs-scan-node.cc to set the number of rows of any row
batch to 0 to focus the measurement on the scan time.
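The hack looked roughly like this (the exact location within
hdfs-scan-node.cc is approximate):

  // Discard all materialized rows before handing the batch to the rest
  // of the plan, so only scan/materialization time is measured.
  row_batch->set_num_rows(0);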
Query options:
set num_scanner_threads=1;
set disable_codegen=true;
set num_nodes=1;
select * from huge_lineitem;
Before: 22.39s
After: 18.50s
select * from huge_lineitem where l_linenumber < 0;
Before: 25.11s
After: 20.56s
select * from huge_lineitem where l_linenumber % 2 = 0;
Before: 26.32s
After: 21.82s
Change-Id: I72a613fa805c542e39df20588fb25c57b5f139aa
Reviewed-on: http://gerrit.cloudera.org:8080/2779
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
There was an incorrect DCHECK in the parquet scanner. If abort_on_error
is false, the intended behaviour is to skip to the next row group, but
the DCHECK assumed that execution should have aborted if a parse error
was encountered.
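A sketch of the corrected control flow (names approximate):

  for (int i = 0; i < file_metadata_.row_groups.size(); ++i) {
    Status status = ProcessRowGroup(i);  // hypothetical helper
    if (status.ok()) continue;
    if (state_->abort_on_error()) return status;
    // Previously a DCHECK here assumed execution had already aborted;
    // with abort_on_error=false we log and skip to the next row group.
    state_->LogError(status.msg());
  }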
This also:
- Fixes a DCHECK after an empty row group. InitColumns() would try to
create empty scan ranges for the column readers.
- Uses metadata_range_->file() instead of stream_->filename() in the
scanner. InitColumns() was using stream_->filename() in error
messages, which used to work but now stream_ is set to NULL before
calling InitColumns().
Change-Id: I8e29e4c0c268c119e1583f16bd6cf7cd59591701
Reviewed-on: http://gerrit.cloudera.org:8080/1257
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Internal Jenkins