Commit Graph

102 Commits

Matthew Jacobs
f413e236a8 IMPALA-3579: Strict handling of numeric overflow in text parsing
Adds a query option 'strict_mode' which treats integer and
floating point overflows as parse errors. In the past,
overflows were ignored and the max value was returned. When
this query option is set, overflowing values are treated as if
they were completely invalid data, i.e. NULL is returned.
When abort_on_error is enabled, this means the query is
aborted.
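
A minimal sketch of the intended behavior (table name, data, and
message text are hypothetical):

set strict_mode=true;
select tiny_col from t1;   -- text file contains 999, which overflows TINYINT:
                           -- the value is returned as NULL and a parse error is reported
set abort_on_error=true;
select tiny_col from t1;   -- the same overflow now aborts the query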

Notes:
* DECIMAL overflow/underflow is already treated as an error.
* The handling in text-converter treats underflows the same
  as overflows, so they would result in the same behavior.
  However, floating point parsing never returns an underflow
  today.
* We may also want to handle numeric values that are truncated
  when parsing to integer types, e.g. 10.5 -> 10.

Change-Id: I7409c31ec0cb6fe0b2d9842b9f58fe1670914836
Reviewed-on: http://gerrit.cloudera.org:8080/3150
Reviewed-by: Matthew Jacobs <mj@cloudera.com>
Tested-by: Internal Jenkins
2016-05-23 08:40:20 -07:00
Lars Volker
a09c80a33e IMPALA-3458: Fix table creation to test insert with header lines
For IMPALA-1740 we added a test to insert.test, which creates a table and
inserts data. The table was created on HDFS by default and thus inserts with
compression enabled did not work. This change adds the required table to the
functional schema in the same way we do it for the other insert tests.

Change-Id: Ie68e7067b7a16218d27935820d5d1ce7035d2e6c
Reviewed-on: http://gerrit.cloudera.org:8080/2919
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Internal Jenkins
2016-05-12 14:17:47 -07:00
Lars Volker
b5570da405 IMPALA-1740: Add support for skip.header.line.count.
HIVE-5795 introduced a parameter skip.header.line.count to skip header
lines from input files. This change introduces the capability to skip
an arbitrary number of header lines from CSV input files on HDFS. The
total size of the file header must be smaller than
max_scan_range_length, otherwise an error will be reported. This is
necessary because scan ranges are not read in disk order, so there is
no way of identifying header lines except by counting from the start
of the first scan range.

[localhost:21000] > alter table t1 set
tblproperties('skip.header.line.count'='1');
Query: alter table t1 set tblproperties('skip.header.line.count'='1')
[localhost:21000] > select * from t1;
Query: select * from t1
+----+----+
| c1 | c2 |
+----+----+
| 1  | 1  |
| 2  | 2  |
| 3  | 3  |
+----+----+
Fetched 3 row(s) in 0.32s
[localhost:21000] > alter table t1 set
tblproperties('skip.header.line.count'='0');
Query: alter table t1 set tblproperties('skip.header.line.count'='0')
[localhost:21000] > select * from t1;
Query: select * from t1
+------+------+
| c1   | c2   |
+------+------+
| NULL | NULL |
| 1    | 1    |
| 2    | 2    |
| 3    | 3    |
+------+------+
WARNINGS: Error converting column: 0 TO INT (Data is: num1)
Error converting column: 1 TO DOUBLE (Data is: num2)
file: hdfs://localhost:20500/test-warehouse/t1/test.txt
record: num1,num2

Fetched 4 row(s) in 0.41s

Change-Id: I595f01a165d41499ca1956fe748ba3840a6eb543
Reviewed-on: http://gerrit.cloudera.org:8080/2110
Reviewed-by: Lars Volker <lv@cloudera.com>
Tested-by: Internal Jenkins
2016-05-12 14:17:46 -07:00
Skye Wanderman-Milne
2cbd327d41 Regenerate complextypestbl files to include nested_struct.g field
This field was included in the schema and data files, but the
checked-in generated parquet files didn't include it. It's not
referenced in any tests so we didn't catch it.

Change-Id: I5d394f074e7082fa12fafb7e57a144a83b3099a6
Reviewed-on: http://gerrit.cloudera.org:8080/2562
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Internal Jenkins
2016-04-01 05:06:38 +00:00
Sailesh Mukil
76b674850f IMPALA-2466: Add more tests for the HDFS parquet scanner.
These tests verify that the following types of files can be scanned
properly:

1) Add a parquet file with multiple blocks such that each node has to
   scan multiple blocks.
2) Add a parquet file with multiple blocks but only one row group
   that spans the entire file. Only one scan range should do any work
   in this case.

Change-Id: I4faccd9ce3fad42402652c8f17d4e7aa3d593368
Reviewed-on: http://gerrit.cloudera.org:8080/1500
Reviewed-by: Sailesh Mukil <sailesh@cloudera.com>
Tested-by: Internal Jenkins
2016-03-25 13:10:15 +00:00
David Alves
7381304a23 Merge branch 'feature/kudu' into cdh5-trunk
This is the final merge commit that merges the 'feature/kudu' branch
into cdh5-trunk.

Change-Id: Ib3dfb4fc7a69c5cb1c5789422ee52fa192ed677a
2016-03-13 19:28:43 -07:00
David Alves
82222abaf5 Merge branch 'feature/kudu' into cdh5-trunk
This merges the 'feature/kudu' branch with cdh5-trunk as of commit:
055500cc753f87f6d1c70627321fcc825044e183

This patch is not a pure merge patch, in the sense that it goes beyond conflict
resolution to also address reviews of the 'feature/kudu' branch as a whole.

The review items and their resolution can be inspected at:
http://gerrit.cloudera.org:8080/#/c/1403/

Change-Id: I6dd4270cd17a4f5c02811c343726db3504275a92
2016-03-11 11:37:58 -08:00
Juan Yu
c9b33ddf63 IMPALA-1886/IMPALA-2154: Add support for multi-stream bz2/gzip compressed files.
Fix a bug in which Impala only reads the first stream
of a multi-stream bz2/gzip file.
Changes the bz2 decoder to read the file in a streaming
fashion rather than reading the entire file into memory
before it can be decompressed.

Change-Id: Icbe617d03a69953f0bf3aa0f7c30d34bc612f9f8
(cherry picked from commit b6d0b4e059329633dc50f1f73ebe35b7ac317a8e)
Reviewed-on: http://gerrit.cloudera.org:8080/2219
Reviewed-by: Juan Yu <jyu@cloudera.com>
Tested-by: Internal Jenkins
2016-02-28 21:31:37 -08:00
Alex Behm
c6fd5a0fe4 IMPALA-2844: Allow count(*) on RC files with complex types.
This patch also fixes the incorrect error message reported
in the JIRA.
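
A sketch of the now-permitted query (table and column names are
hypothetical; rc_complex is an RCFILE table with a column
int_array ARRAY<INT>):

select count(*) from rc_complex;          -- allowed: no complex-typed column is scanned
select item from rc_complex.int_array;    -- still rejected: complex types are unsupported for RC files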

Change-Id: I2c7b732767d154c36bc7189df5177d27a35d0d7b
Reviewed-on: http://gerrit.cloudera.org:8080/2267
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
2016-02-22 20:16:24 -08:00
Alex Behm
db90f9954e Fix OOM when data loading with the new Hive SNAPSHOT.
This patch is required for upgrading thirdparty.

After upgrading thirdparty, Hive OOMs while loading the TPC-DS data.
This patch fixes the OOM, but I have no idea why exactly it works.
I tried several other approaches, such as increasing the mapred and
JVM heap memory, to no avail.

Change-Id: I3c69e4c3bf0f24e0c3b6272c946f71c17e312e01
Reviewed-on: http://gerrit.cloudera.org:8080/2145
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
2016-02-14 07:08:22 +00:00
Alex Behm
1a4a830a6d IMPALA-2776: Remove escapechartesttable and associated tests.
The original purpose of the escapechartesttable was to test
Impala's behavior on text tables that have the same character
as line terminator and escape character. Recent changes in
Hive have made creating such a table impossible because
1) Only newline is allowed as the line terminator
2) Newline is forbidden as the escape character

See HIVE-11785 for details on the Hive changes.

This commit removes escapechartesttable and all associated
tests, but does not add the same enforcement rules as Hive.
These enforcement rules should be added in a follow-on change.

Change-Id: I2bd9755f4c2cc3d7dfd8d67c3759885951550f08
Reviewed-on: http://gerrit.cloudera.org:8080/1690
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Alex Behm <alex.behm@cloudera.com>
2016-01-05 06:04:41 +00:00
Skye Wanderman-Milne
dd2eb951d7 IMPALA-2558: DCHECK in parquet scanner after block read error
There was an incorrect DCHECK in the parquet scanner. If abort_on_error
is false, the intended behaviour is to skip to the next row group, but
the DCHECK assumed that execution should have aborted if a parse error
was encountered.
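
A sketch of the intended behavior (table name hypothetical):

set abort_on_error=false;
select count(*) from parquet_tbl;   -- the row group with the parse error is skipped; a warning is returned
set abort_on_error=true;
select count(*) from parquet_tbl;   -- the same parse error now aborts the query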

This also:
- Fixes a DCHECK after an empty row group. InitColumns() would try to
  create empty scan ranges for the column readers.
- Uses metadata_range_->file() instead of stream_->filename() in the
  scanner. InitColumns() was using stream_->filename() in error
  messages, which used to work but now stream_ is set to NULL before
  calling InitColumns().

Change-Id: I8e29e4c0c268c119e1583f16bd6cf7cd59591701
Reviewed-on: http://gerrit.cloudera.org:8080/1257
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Internal Jenkins
2015-10-30 22:35:57 +00:00
Matthew Jacobs
1cd95f79f0 Add partitions to table_no_newline_part IF NOT EXISTS
Loading functional data may fail if table_no_newline_part
partitions already exist. We should only ADD with IF NOT
EXISTS to handle this case as we do with other tables.
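
A sketch of the idempotent form used for loading (partition spec
hypothetical):

alter table table_no_newline_part add if not exists partition (year=2015, month=3);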

Change-Id: I5fe5c318d2cbbd5b5419394212b94a2fe7d386ce
Reviewed-on: http://gerrit.cloudera.org:8080/1261
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Internal Jenkins
2015-10-17 00:52:53 +00:00
Sailesh Mukil
277a92a14a IMPALA-2479: Failure in TestParquet.test_verify_runtime_profile
The test_verify_runtime_profile test failed during C5.5 builds and
GVMs because it relies on the table lineitem_multiblock having
3 blocks. However, because the data-loading rules in the
functional_schema_template.sql file were not followed, the table ended
up being stored with only one block.

This change moves the data load to the end of the create-load-data.sh
file, which loads the data even for snapshot builds.

Change-Id: I78030dd390d2453230c4b7b581ae33004dbf71be
Reviewed-on: http://gerrit.cloudera.org:8080/1153
Reviewed-by: Sailesh Mukil <sailesh@cloudera.com>
Tested-by: Internal Jenkins
2015-10-08 15:16:35 -07:00
Sailesh Mukil
7778b3ded5 IMPALA-1881: Maximize data locality when scanning Parquet files with multiple row groups.
Impala supports reading Parquet files with multiple row groups but
with possible performance degradation due to remote reads. This patch
maximizes scan locality by allowing multiple impalads to scan the
row groups in their local splits. Each impalad starts a new scan range
for each split local to it if that split contains row group(s) that
need to be scanned.

Change-Id: Iaecc5fb8e89364780bc59dbfa9ae51d0d124d16e
Reviewed-on: http://gerrit.cloudera.org:8080/908
Reviewed-by: Sailesh Mukil <sailesh@cloudera.com>
Tested-by: Internal Jenkins
2015-10-05 11:30:39 -07:00
Martin Grund
4b102fefd2 KUDU-1115: Deprecate 'kudu.split_keys' for range partitioning
For range partitioning use DISTRIBUTE BY RANGE instead.
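
A hypothetical sketch of the replacement form (column names, split
values, and property values assumed):

create table t (id bigint, name string)
distribute by range(id) split rows ((100), (200))
tblproperties ('storage_handler' = 'com.cloudera.kudu.hive.KuduStorageHandler',
               'kudu.table_name' = 't',
               'kudu.master_addresses' = '127.0.0.1:7051',
               'kudu.key_columns' = 'id');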

Change-Id: I055b605312a6be76a5439526f9c10ab4c4b432ce
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/8063
Tested-by: jenkins
Reviewed-by: Martin Grund <mgrund@cloudera.com>
2015-09-16 16:43:47 -07:00
ishaan
3278ab9ef7 IMPALA-2292: Change the type of timestamp_col to string in the table no_avro_schema.
Hive does not allow DDL when an Avro table has a TIMESTAMP column. This breaks data
loading. This change simply uses a STRING type in place of the previous TIMESTAMP column.

Change-Id: Id4eb42da3ac8a805d12b1c3ff35b200298269084
Reviewed-on: http://gerrit.cloudera.org:8080/745
Tested-by: Internal Jenkins
Reviewed-by: Ishaan Joshi <ishaan@cloudera.com>
Tested-by: Ishaan Joshi <ishaan@cloudera.com>
2015-09-04 06:46:35 +00:00
Skye Wanderman-Milne
bcc73a36da Nested types: read and materialize nested types in Parquet scanner
This patch modifies the Parquet scanner to resolve nested schemas, and
read and materialize collection types. The high-level modification is
to create a CollectionColumnReader that recursively materializes map-
and array-type slots.

This patch also adds many tests, most of which query a new table
called complextypestbl. This table contains hand-generated data that
is meant to expose edge cases in the scanner. The tests mostly test
the scanner, with a few tests of other functionality (e.g. array
serialization).
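
A sketch of the kind of query these tests exercise (column names
assumed to match the functional schema):

select t.id, a.item
from complextypestbl t, t.int_array a;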

I ran a local benchmark comparing this scanner code to the original
scanner code on an expanded version of tpch_parquet.lineitem with
48009720 rows. My benchmark involved selecting different numbers of
columns with a single scanner thread, and I looked at the HDFS scan
node time in the query profiles. This code introduces a 10%-20%
regression in single-threaded scan time.

Change-Id: Id27fb728934e8346444f61752c9278d8010e5f3a
Reviewed-on: http://gerrit.cloudera.org:8080/576
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
2015-09-02 19:23:54 +00:00
Martin Grund
f927a285c6 IMPALA-1136, IMPALA-2161: Skip \u0000 characters when dealing with Avro schemas
The Avro JSON library's refusal to handle \u0000 characters avoids
problems with built-in functions like strlen() that would report the
wrong length when encountering such a character. In the case of
Impala, we do not currently support any Unicode characters, which
allows us to simply skip the \u0000 character instead of
interpreting it.

It is important to say that even the most recent versions of Avro do
not support parsing \u0000 characters.

Change-Id: I56dfa7f0f12979fe9705c51c751513aebce4beca
Reviewed-on: http://gerrit.cloudera.org:8080/712
Tested-by: Internal Jenkins
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
2015-09-02 00:37:28 +00:00
Alex Behm
9d46853fbc Nested Types: Check un/supported file formats for complex types.
Before this patch, we used to accept any query referencing complex
types, regardless of the table/partition's file format being scanned.
We would ultimately hit a DCHECK in the BE when attempting to scan
complex types of a table/partition with an unsupported format.

This patch makes queries fail gracefully during planning if a scan
would access a table/partition in a format for which we do not
support complex types.

For mixed-format partitioned Hdfs tables we perform this check
at the partition granularity, so such a table can be scanned as
long as only partitions with supported formats are accessed.

HBase tables with complex-typed columns can be scanned as long as
no complex-typed columns are accessed in the query.
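
A sketch of the resulting behavior (table and column names
hypothetical):

select a.item from text_tbl t, t.int_array a;   -- rejected during planning: format does not support complex types
select string_col from hbase_tbl;               -- allowed: no complex-typed column of the HBase table is accessed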

Change-Id: I2fd2e386c9755faf2cfe326541698a7094fa0ffc
Reviewed-on: http://gerrit.cloudera.org:8080/705
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
2015-09-01 03:26:53 +00:00
Alex Behm
6f0b255c5a Address several shortcomings with respect to the usability of Avro tables.
Addressed JIRAs: IMPALA-1947 and IMPALA-1813

New Feature:
Adds support for creating an Avro table without an explicit
Avro schema with the following syntax.

CREATE TABLE <table_name> column_defs STORED AS AVRO
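
For example (table name and columns hypothetical):

create table avro_tbl (id int, name string) stored as avro;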

Fixes and Improvements:
This patch fixes and unifies the logic for reconciling differences between
an Avro table's Avro Schema and its column definitions. This reconciliation
logic is executed during Impala's CREATE TABLE and when loading a table's
metadata. Impala generally performs the schema reconciliation during table
creation, but Hive does not. In many cases, Hive's CREATE TABLE stores the
original column definitions in the HMS (in the StorageDescriptor) instead
of the reconciled column definitions.

The reconciliation logic considers the field/column names and follows this
conflict resolution policy which is similar to Hive's:

Mismatched number of columns -> Prefer Avro columns.
Mismatched name/type -> Prefer Avro column, except:
  A CHAR/VARCHAR column definition maps to an Avro STRING, and is preserved
  as a CHAR/VARCHAR in the reconciled schema.

Behavior for TIMESTAMP:
A TIMESTAMP column definition maps to an Avro STRING and is presented as a STRING
in the reconciled schema, because Avro has no binary TIMESTAMP representation.
As a result, no Avro table may have a TIMESTAMP column (existing behavior).

Change-Id: I8457354568b6049b2dd2794b65fadc06e619d648
Reviewed-on: http://gerrit.cloudera.org:8080/550
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
2015-08-25 09:52:18 +00:00
Alex Behm
c908ba1b7e IMPALA-1136: Support loading Avro tables without an explicit Avro schema
Hive allows creating Avro tables without an explicit Avro schema since 0.14.0.
For such tables, the Avro schema is inferred from the column definitions,
and not stored in the metadata at all (no Avro schema literal or Avro schema file).

This patch adds support for loading the metadata of such tables, although Impala
currently cannot create such tables (expect a follow-on patch).

Change-Id: I9e66921ffbeff7ce6db9619bcfb30278b571cd95
Reviewed-on: http://gerrit.cloudera.org:8080/538
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
2015-07-31 12:13:37 +00:00
Ippokratis Pandis
e99c68fe52 IMPALA-2130: Wrong verification of Parquet file version
This patch corrects a mistake in the Parquet magic file number verification
and adds a test for it. Note that with this patch Impala may fail to read
Parquet files with a wrong magic number that it used to read before.

Change-Id: Iff31accda1e1d541946ef1f750e38886ce4cb8d5
Reviewed-on: http://gerrit.cloudera.org:8080/515
Reviewed-by: Ippokratis Pandis <ipandis@cloudera.com>
Tested-by: Internal Jenkins
2015-07-14 02:52:02 +00:00
David Alves
8a91b6b60f Allow to specify split keys in json format in Kudu's CREATE TABLE statement
This allows specifying split keys in Kudu's CREATE TABLE statement as part
of the key/value pairs in TBLPROPERTIES.

Splits are expected to be specified as JSON arrays of arrays: [[key1], [key2], ...]
'key1' and 'key2' may be single values or comma-separated lists of values, depending
on whether the table has a simple or compound primary key.
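
A sketch for a table with a compound primary key (property values
hypothetical):

create table t (a int, b string, c string)
tblproperties ('storage_handler' = 'com.cloudera.kudu.hive.KuduStorageHandler',
               'kudu.key_columns' = 'a,b',
               'kudu.split_keys' = '[[10, "m"], [20, "z"]]');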

This also adds a series of test tables to be created for the kudu table format
when load-data.py is executed.

Change-Id: I1824199fda14abb2d7352800789f2b9c2f2124ae
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/6974
Tested-by: jenkins
Reviewed-by: Martin Grund <mgrund@cloudera.com>
2015-06-19 10:40:16 -07:00
David Alves
cf2618df1e Allow to specify a TABLE_PROPERTIES section in functional_schema_template.sql
This adds a new TABLE_PROPERTIES section that will be included in the
CREATE TABLE statement as TBLPROPERTIES. Each line in this new section
is expected to be in the form:
<file_format>:<key>=<value>
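
For illustration, hypothetical lines adding Kudu-only properties:

kudu:kudu.table_name=my_table
kudu:kudu.key_columns=id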

Properties are only added to create table statements of the file format
they specify.

Change-Id: I89ef7ced3351ecf2c727050ca426f6616f3e5bcd
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/6945
Tested-by: jenkins
Reviewed-by: Martin Grund <mgrund@cloudera.com>
2015-06-18 14:23:28 -07:00
Juan Yu
934b28fe5e IMPALA-1381: Expand set of supported timezones.
The hardcoded timezone information is from Java version 1.7.0_76.

Change-Id: I32c40d0036473079e5bfd4d0252a648cbb0e7c23
Reviewed-on: http://gerrit.cloudera.org:8080/393
Reviewed-by: Juan Yu <jyu@cloudera.com>
Tested-by: Internal Jenkins
2015-05-22 01:32:54 +00:00
Casey Ching
ac0c075997 Parquet: Fix value def level when max def level is 0
When running with a release build, NULL would be returned when
reading values from required fields in parquet files (with a debug
build a DCHECK would be hit).

Previously when the max definition level for a field was 0 (which
happens if a field is required), the definition level for value was
incorrectly set to 1. The max definition level is related to nested
data and is defined to be the number of nullable fields that will be
encountered when traversing a path to reach the desired end field.
For example, if a nested schema has a path a.b.c.d where b and d are
nullable then the max def level is 2. A def level is attached to each
value to indicate the number of optional values that are present (in
the previous example an def level of 2 means both b and d are not
null). So having a def level for a value that is greater than the max
def level for a field should never happen.

Change-Id: Ia91a97cf79e672c420d10416c6817f0930dcc920
(cherry picked from commit cdd67e4c7fd62d5b08adfaa303d7bb2382e6932c)
Reviewed-on: http://gerrit.cloudera.org:8080/386
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
2015-05-15 06:41:02 +00:00
Juan Yu
d1c263402e IMPALA-1973: Fixing crash when uninitialized, empty row is added in HdfsTextScanner
This patch fixes an issue where an uninitialized, empty row is falsely
added to the row batch. The uninitialized data inside this row later
leads to a crash when the null byte is checked together with the
offsets (which contain garbage).

The fix is to check not only the number of materialized columns but
also the number of materialized partition key columns. Only if both are
empty and the parser has an unfinished tuple is the empty row added.

To accommodate the last row, FinishScanRange() checks whether there is an
unfinished tuple with materialized slots or a materialized partition key and
writes the fields if necessary.

Change-Id: I2808cc228e62d048d917d3a6352d869d117597ab
(cherry picked from commit c1795a8b40d10fbb32d9051a0e7de5ebffc8a6bd)
Reviewed-on: http://gerrit.cloudera.org:8080/364
Reviewed-by: Juan Yu <jyu@cloudera.com>
Tested-by: Internal Jenkins
2015-05-05 00:19:12 +00:00
Alex Behm
35e669f2ee Nested Types: Analysis of struct fields and basic collection table refs.
This patch adds analysis support for the following new constructs.

Struct-field references:
Dot-separated paths such as "tblalias.col.field1.field2..." can now be used
to access fields of a struct. Such struct-field SlotRefs show up as new slot
in the tuple descriptor of the parent table.

Collection table references:
A collection-typed column or field can be referenced as a table in the FROM
clause using a dot-separated path, e.g., "FROM db.tbl.array_col". This exposes
the collection's fields as a table whose columns can be referenced as usual.
Collections referenced this way are flattened during execution. The following
subsections explain the analysis behavior in more detail:

Aliases:
Collection table references can have either an explicit alias or a single
implicit alias. Examples:

SELECT ... FROM db.tbl.array_col AS a
-> explicit alias 'a'

SELECT ... FROM db.tbl.array_col
-> implicit alias 'array_col'

Flattening of collections:
Collection table refs can refer to table aliases defined to their left
in the same select block. References linked in this way indicate an implicit
parent/child join. Example:

SELECT parent.id, child.item FROM db.tbl parent, parent.array_col child

The aliases 'parent' and 'child' can be used to access the columns of 'tbl' and
the fields of 'array_col', respectively. In the flattened result set, collection
items implicitly join with parent rows iff the collection row is nested within
the parent.

The following stmt is not an implicit join, but a scan of a nested collection.

SELECT db.tbl.map_col.value FROM db.tbl.map_col

The following stmt is not a parent/child join but a cartesian product
because the collection does not reference an existing table alias.

SELECT a.id, map_col.value FROM db.tbl a, db.tbl.map_col

Collection 'table' field names:
Structs have explicit field names, but collections generally do not. This patch
uses fixed implicit field names such as "item" for arrays and "key"/"value"
for maps. If an array or map has a struct element/value then the struct's fields
are also accessible without the "item"/"value" indirection.

Analysis prevents complex-typed exprs from appearing in the select list of any
query statement because it's not yet clear how to present them to a consumer
of the query results.

To keep things simple, I've deliberately left these work items for
follow-on patches:
1. analysis of correlated inline views to handle queries like:
   SELECT cnt FROM customers c (SELECT COUNT(1) FROM c.orders) v
2. generalized expression substitution to handle cases like:
   SELECT x.c.d FROM (SELECT a.b x FROM tbl)

Change-Id: I5f52f083bcf7056d5bfd21cc784133edb7e82ef2
Reviewed-on: http://gerrit.cloudera.org:8080/43
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
2015-04-29 09:04:25 +00:00
ishaan
90bc481be7 Fix bad auto-merge to get cdh-trunk in shape.
While cherry-picking commit c289974 from cdh5-2.2.0_5.4.x, a line that causes a dataset
change was somehow missed by git's cherry-pick. This patch adds that line back in.

Change-Id: Id163643c55d44fe6a9503b54397f1f370b198554
2015-03-23 14:15:53 -07:00
Juan Yu
e121bc9b0a IMPALA-1476: Impala incorrectly handles text data missing a newline on the last line.
I did a local benchmark and there's minimal performance impact (<1%).

Change-Id: I8d84a145acad886c52587258b27d33cff96ea399
(cherry picked from commit 7e750ad5d90007cc85ebe493af4dce7a537ad7c0)
Reviewed-on: http://gerrit.cloudera.org:8080/189
Reviewed-by: Juan Yu <jyu@cloudera.com>
Tested-by: Internal Jenkins
2015-03-20 19:58:50 -07:00
ishaan
d87f83083c Revert "Fix to enable metadata load for functional tables."
This reverts commit 587fedaa983d0b99e79025782d8c3d1891fbb0f5.

Change-Id: I6cf34f6f6fc0ee8af6ec7da407569033605b31e4
2015-03-20 19:58:50 -07:00
ishaan
daf9dfd6e1 Fix to enable metadata load for functional tables.
We used mkdir without a -p to load a hive13 parquet table. This failed when trying to load
the metadata, which is necessary for changes that mutate the warehouse snapshot. With this
change, it now passes.

Change-Id: Id163643c55d44fe6a9503b54397f1f370b198554
2015-03-20 14:37:21 -07:00
casey
87b9fac2ad IMPALA-1658: Add compatibility flag for Hive-Parquet-Timestamps
No changes to writing were made. No changes to reading Impala-written
files were made.

Hive writes TIMESTAMP values to parquet files differently than Impala
does. Hive converts the value from local time to UTC before writing;
Impala does not. This change adds a startup flag that will convert UTC
to local when reading files written by Hive.

The Hive-file detection actually checks for "parquet-mr" (which is the
library Hive uses) in the file metadata. A slight possibility exists
that TIMESTAMP values written by something other than Hive but also
using parquet-mr may become incorrect. The possibility should be very
small because TIMESTAMP values are stored and encoded in a non-standard
way other applications are unlikely to be aware of.

Flags from be/src/exec/hdfs-parquet-scanner.cc:
  -convert_legacy_hive_parquet_utc_timestamps (When true, TIMESTAMPs
    read from files written by Parquet-MR (used by Hive) will be
    converted from UTC to local time. Writes are unaffected.) type: bool
    default: false

Change-Id: I79a499fe24049b7025ee2dd76c9c3e07010d346a
Reviewed-on: http://gerrit.cloudera.org:8080/35
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
2015-02-11 13:28:17 +00:00
Dan Hecht
3735ea94a0 S3: Don't seek/read past file end
DistributedFileSystem is lenient about seeking past the end of the file.
Other FileSystem implementations, such as NativeS3FileSystem, return an
error on this condition.  That leads to a scary looking message in the
query warnings.

So, when creating scan ranges, let's require that the ranges fall within
the file bounds (at least according to what the HdfsFileDesc indicates
is the length). There were a couple of kinds of AllocateScanRange()
callsites that needed to be fixed up:

1) When a stream wants to read past a scan range, be careful not to read
past the end of the file.

2) When Impala needs to "guess" at the length of a range, use the
file_length as an upper bound on the guess. We were already doing this
in some places but not everywhere.

3) When the scan range is derived from parquet metadata, validate the
metadata against file_length and issue appropriate errors.  This will
give better diagnostics for corrupt files.

Note that we can't rely on this for safety (HdfsFileDesc file_length may
be stale), but it does mean that when metadata is up-to-date Impala will
no longer try to access beyond the end of files (and so we'll no longer
get false positive errors from the filesystem).

Additionally, this change revealed a pre-existing problem with files
that have multiple row-groups.  The first time through InitColumns(),
stream_ was set to NULL.  But, stream_->filename could potentially be
accessed when constructing error statuses for subsequent row-groups.

Change-Id: Ia668fa8c261547f85a18a96422846edcea57043e
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/5424
Reviewed-by: Daniel Hecht <dhecht@cloudera.com>
Tested-by: jenkins
2015-01-08 16:19:35 -08:00
Matthew Jacobs
25428fdb21 Add support for streaming decompression of gzip text
Compressed text formats currently require that entire compressed
files be read into memory and decompressed in a single call
to the decompression codec. This changes the HdfsTextScanner
to drive gzip in a streaming mode, i.e. produce partial output
as input is consumed.

Change-Id: Id5c0805e18cf6b606bcf27a5df4b5f58895809fd
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/5233
Reviewed-by: Matthew Jacobs <mj@cloudera.com>
Tested-by: jenkins
(cherry picked from commit 05c3cc55e7a601d97adc4eebe03f878c68a33e56)
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/5385
2014-11-23 01:55:55 -08:00
Victor Bittorf
9939c9d009 Bugfix and tests for CHAR(N) and VARCHAR(N)
Fixed a bug in setting the length when reading/writing text files for CHAR(N).
Also added a chars_tiny table for testing CHAR(N) and VARCHAR(N).

Change-Id: If5d5db30afa4b00cf03c68c6a845f182970329f4
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/4415
Reviewed-by: Victor Bittorf <victor.bittorf@cloudera.com>
Tested-by: jenkins
2014-09-23 07:30:07 -07:00
Alex Behm
68592a82a3 IMPALA-1021: Fix loading of views with decimal and complex-typed columns.
Change-Id: I8b63c31be47dd64f1e13fb29be3105b0f7e245dc
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/3820
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: jenkins
2014-08-17 12:26:54 -07:00
Lenni Kuff
286e312460 [CDH5] Minor code changes for Hive .13 support
Changes include:
* Fix compile errors due to new column stats API and other stats related
  fixes.
* Temporarily disable JDBC tests due to new serialization format in Hive .13
* Disable view compatibility tests until we can get them to work in Hive .13
* Test fixes due to Hive's type checking for partition column values

Change-Id: I05cc6a95976e0e037be79d91bc330a06d2fdc46c
2014-08-11 09:53:02 -07:00
Alex Behm
19bab59854 Create/alter/describe tables with complex types.
This patch adds parsing of complex types and tests for using complex
types in various exprs and create/alter/describe stmts.
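
A sketch of the kind of DDL now accepted (names hypothetical):

create table complex_tbl (
  id bigint,
  tags array<string>,
  props map<string,int>,
  addr struct<city:string, zip:int>
);
describe complex_tbl;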

Change-Id: Ibc211a560c889f5ccfb616813700b923c89d8245
Reviewed-on: http://gerrit.ent.cloudera.com:8080/3577
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: jenkins
Reviewed-on: http://gerrit.ent.cloudera.com:8080/3594
2014-07-23 17:26:14 -07:00
Dimitris Tsirogiannis
2aedf5fab4 Add missing ALTER TABLE statement in alltypesaggmultifiles table.
The DDL statements for adding the partitions of alltypesaggmultifiles
did not include an ALTER TABLE stmt for one of the partitions, thereby
causing the planner tests to fail when test data were loaded from a
snapshot.

Change-Id: Id4b078cd334d816d6eb8eb15e5856189701a4bca
Reviewed-on: http://gerrit.ent.cloudera.com:8080/3305
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Reviewed-by: Lenni Kuff <lskuff@cloudera.com>
Tested-by: jenkins
Reviewed-on: http://gerrit.ent.cloudera.com:8080/3310
2014-06-27 18:00:09 -07:00
Dimitris Tsirogiannis
6a795915d6 Fix loading data from snapshot for alltypesagg table.
The alltypesagg table was not loaded correctly from a snapshot file due
to a missing ALTER TABLE statement, thereby causing some tests to fail.

Change-Id: I74066a99529f24fc268bb5779d3fb64fbd4f66b9
Reviewed-on: http://gerrit.ent.cloudera.com:8080/3248
Reviewed-by: Lenni Kuff <lskuff@cloudera.com>
Tested-by: jenkins
Reviewed-on: http://gerrit.ent.cloudera.com:8080/3270
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
2014-06-25 21:52:11 -07:00
Dimitris Tsirogiannis
5a6f53db16 Add partition pruning tests
The following changes are included in this commit:
1. Modified the alltypesagg table to include an additional partition key
that has nulls.
2. Added a number of tests in hdfs.test that exercise the partition
pruning logic (see IMPALA-887).
3. Modified all the tests that are affected by the change in alltypesagg.

Change-Id: I1a769375aaa71273341522eb94490ba5e4c6f00d
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2874
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Tested-by: jenkins
Reviewed-on: http://gerrit.ent.cloudera.com:8080/3236
2014-06-24 02:14:27 -07:00
Dimitris Tsirogiannis
7dbd3a5860 IMPALA-1040: Reading a decimal partitioned column with invalid values
This commit fixes IMPALA-1040, in which inserting an invalid value
into a decimal partitioned column through Hive resulted in an
uninformative error message and, in some cases, caused the associated
table to disappear from Impala's catalog. With the fix, Impala always
throws a more informative error message indicating the insertion of
an invalid partition key value.

Change-Id: I2855ea69944e269fb7e02b3825f44e64352151e7
Reviewed-on: http://gerrit.ent.cloudera.com:8080/3062
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Tested-by: jenkins
Reviewed-on: http://gerrit.ent.cloudera.com:8080/3200
2014-06-20 12:46:52 -07:00
Nong Li
8f4dc0f2f0 IMPALA-974: Switch from FloatLiteral to DecimalLiteral.
Float/Doubles are lossy so using those as the default literal type
is problematic.

Change-Id: I5a619dd931d576e2e6cd7774139e9bafb9452db9
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2758
Reviewed-by: Nong Li <nong@cloudera.com>
Tested-by: jenkins
2014-05-31 22:19:06 -07:00
Alex Behm
b252921363 IMPALA-994: Handle incorrect column metadata in views created by Hive.
Change-Id: I3fba08d191c479f37371ce50fd07b8476a73eba2
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2613
Reviewed-by: Marcel Kornacker <marcel@cloudera.com>
Tested-by: jenkins
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2618
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
2014-05-19 20:17:23 -07:00
Dimitris Tsirogiannis
a7a9cde86f CDH-18969: Incorrect query result in Impala
This commit fixes issue CDH-18969 where Impala returns wrong results
when querying an HBase table. This issue is triggered when a column family
sorts lexicographically before ":key", which is the column family of the
row key, thereby causing the wrong column to be used as a row key by the
backend.

The following changes are included:
1. Modified the load function in HBaseTable.java to make sure the
catalog object of an HBase table always stores the row key column first.

Change-Id: Icd7ebc973d81672c04d5c7c8bbabd813338d5eac
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2513
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Tested-by: jenkins
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2602
2014-05-18 16:29:11 -07:00
Skye Wanderman-Milne
edbbe6035e Decimal: read from Avro
Allows reading decimal columns with or without codegen. Includes tests
based on a data file posted on HIVE-5823.

Change-Id: Ie541c6b98bd24543691850cb45a434af60b5a5a6
(cherry picked from commit 6983dcefdf70cce14724e17d03bc061ffb8f671c)
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2596
Reviewed-by: Skye Wanderman-Milne <skye@cloudera.com>
Tested-by: jenkins
2014-05-16 22:26:11 -07:00
Nong Li
03e5665e56 Decimal: Read/Write to parquet.
This adds support for the FIXED_LENGTH_BYTE_ARRAY parquet type and
encoding for decimals.

Change-Id: I9d5780feb4530989b568ec8d168cbdc32b7039bd
Reviewed-on: http://gerrit.ent.cloudera.com:8080/1727
Reviewed-by: Nong Li <nong@cloudera.com>
Tested-by: jenkins
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2432
2014-05-02 16:38:35 -07:00
Skye Wanderman-Milne
60db4d4d82 CDH-18416: Don't inline ReadWriteUtil::ReadZLong()
For wide Avro tables, ReadZLong() would get inlined many times into a
single function body, causing LLVM to crash. Not inlining doesn't seem
to have a performance impact on narrow tables, and helps with wide
tables.

This change also adds tests over wide (i.e. many-column) tables. The
test tables are produced by specifying shell commands to generate test
tables in functional_schema_template.sql, which are executed in
generate-schema-statements.py. In the SQL templates, sections starting
with a ` are treated as shell commands. The output of the shell
command is then used as the section text. This is only a starting
point; it isn't currently implemented for all sections, and may have
to be tweaked if we use this mechanism for all tables.

Change-Id: Ife0d857d19b21534167a34c8bc06bc70bef34910
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2206
Reviewed-by: Skye Wanderman-Milne <skye@cloudera.com>
Tested-by: Skye Wanderman-Milne <skye@cloudera.com>
(cherry picked from commit 1c5951e3cce25a048208ab9bb3a3aed95e41cf67)
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2353
Tested-by: jenkins
2014-04-28 15:58:15 -07:00