bad_parquet_data.parquet:
Generated with parquet-mr 1.2.5
Contains 3 single-column rows:
"parquet"
"is"
"fun"

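A quick way to sanity-check the file contents outside of Impala is
pyarrow (an assumption for illustration; Impala's own scanner is the
code under test):

  import pyarrow.parquet as pq

  # Read the single string column and compare against the expected rows.
  table = pq.read_table("bad_parquet_data.parquet")
  print(table.column(0).to_pylist())  # expected: ['parquet', 'is', 'fun']
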
bad_rle_literal_count.parquet:
Generated by hacking Impala's Parquet writer.
Contains a single bigint column 'c' with the values 1, 3, 7 stored
in a single data chunk with plain dictionary encoding. The RLE encoded
dictionary indexes are all literals (and not repeated), but the literal
count is incorrectly 0 in the file to test that such data corruption is
properly handled.

bad_rle_repeat_count.parquet:
Generated by hacking Impala's Parquet writer.
Contains a single bigint column 'c' with the value 7 repeated 7 times,
stored in a single data chunk with plain dictionary encoding. The RLE
encoded dictionary indexes are a single repeated run (and not literals),
but the repeat count is incorrectly 0 in the file to test that such data
corruption is properly handled. See the run header sketch below for what
these counts mean.

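For context on the two files above: Parquet's RLE/bit-packed hybrid
encoding prefixes every run with a varint header whose low bit selects
the run type. A minimal decoding sketch in Python (a paraphrase of the
Parquet spec, not Impala's actual decoder) shows where the literal count
and repeat count live:

  def read_run_header(buf, pos):
      # Decode the ULEB128 varint run header.
      header, shift = 0, 0
      while True:
          byte = buf[pos]
          pos += 1
          header |= (byte & 0x7F) << shift
          if not (byte & 0x80):
              break
          shift += 7
      if header & 1:
          # Bit-packed (literal) run: header >> 1 groups of 8 values.
          # bad_rle_literal_count.parquet corrupts this count to 0.
          return pos, ("literal", (header >> 1) * 8)
      # Repeated (RLE) run: header >> 1 repetitions of a single value.
      # bad_rle_repeat_count.parquet corrupts this count to 0.
      return pos, ("repeated", header >> 1)

A count of 0 in either run type makes a decoder consume no values, which
is why these files exercise the corruption handling paths.
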
zero_rows_zero_row_groups.parquet:
Generated by hacking Impala's Parquet writer.
The file metadata indicates zero rows and no row groups.

zero_rows_one_row_group.parquet:
Generated by hacking Impala's Parquet writer.
The file metadata indicates zero rows but one row group.

huge_num_rows.parquet:
Generated by hacking Impala's Parquet writer.
The file metadata indicates 2 * MAX_INT32 rows.
The single row group also has the same number of rows in the metadata.

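The footer metadata of these hacked files can be inspected with pyarrow
(an assumption for illustration), e.g. for huge_num_rows.parquet:

  import pyarrow.parquet as pq

  md = pq.ParquetFile("huge_num_rows.parquet").metadata
  print(md.num_rows)        # 4294967294, i.e. 2 * MAX_INT32
  print(md.num_row_groups)  # 1
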
repeated_values.parquet:
Generated with parquet-mr 1.2.5
Contains 3 single-column rows:
"parquet"
"parquet"
"parquet"

multiple_rowgroups.parquet:
Generated with parquet-mr 1.2.5
Populated with:
hive> set parquet.block.size=500;
hive> INSERT INTO TABLE tbl
      SELECT l_comment FROM tpch.lineitem LIMIT 1000;

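To confirm the small block size produced several row groups, pyarrow
(again an assumption for illustration) can read them one at a time:

  import pyarrow.parquet as pq

  pf = pq.ParquetFile("multiple_rowgroups.parquet")
  for i in range(pf.metadata.num_row_groups):
      print(i, pf.read_row_group(i).num_rows)
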
alltypesagg_hive_13_1.parquet:
Generated with parquet-mr version 1.5.0-cdh5.4.0-SNAPSHOT
hive> create table alltypesagg_hive_13_1 stored as parquet as select * from alltypesagg;

bad_column_metadata.parquet:
Generated with a hacked version of parquet-mr 1.8.2-SNAPSHOT
Schema:
 {"type": "record",
  "namespace": "org.apache.impala",
  "name": "bad_column_metadata",
  "fields": [
      {"name": "id", "type": ["null", "long"]},
      {"name": "int_array", "type": ["null", {"type": "array", "items": ["null", "int"]}]}
  ]
 }
Contains 3 row groups, each with 10 rows and each array containing 10
elements. The column metadata for 'int_array' in the first row group
incorrectly states there are 50 values (instead of 100), and the column
metadata for 'id' in the second row group incorrectly states there are
11 values (instead of 10). The third row group has correct metadata.

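A sketch that surfaces the per-row-group value counts described above,
again using pyarrow for illustration (readers that validate metadata
strictly may reject the file outright):

  import pyarrow.parquet as pq

  md = pq.ParquetFile("bad_column_metadata.parquet").metadata
  for rg in range(md.num_row_groups):
      for col in range(md.row_group(rg).num_columns):
          cc = md.row_group(rg).column(col)
          print(rg, cc.path_in_schema, cc.num_values)
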
data-bzip2.bz2:
Generated with bzip2, contains a single bzip2 stream
Contains 1 column, uncompressed data size < 8M

large_bzip2.bz2:
Generated with bzip2, contains a single bzip2 stream
Contains 1 column, uncompressed data size > 8M

data-pbzip2.bz2:
Generated with pbzip2, contains multiple bzip2 streams
Contains 1 column, uncompressed data size < 8M

large_pbzip2.bz2:
Generated with pbzip2, contains multiple bzip2 streams
Contains 1 column, uncompressed data size > 8M

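The single- vs. multi-stream distinction comes from how the archives are
produced: bzip2 emits one stream for the whole input, while pbzip2
compresses blocks in parallel and concatenates the resulting streams. A
sketch of both shapes using Python's bz2 module (illustrative file names
and sizes, not the exact commands used here):

  import bz2

  data = b"x" * (1 << 20)

  # Single stream: one compressed unit covering all of the input.
  with open("single_stream.bz2", "wb") as f:
      f.write(bz2.compress(data))

  # Multiple streams: independently compressed chunks back to back,
  # the layout pbzip2 produces.
  with open("multi_stream.bz2", "wb") as f:
      for i in range(0, len(data), 1 << 18):
          f.write(bz2.compress(data[i:i + (1 << 18)]))

  # bz2.open handles both layouts when reading.
  assert bz2.open("multi_stream.bz2").read() == data
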
out_of_range_timestamp.parquet:
Generated with a hacked version of Impala's Parquet writer.
Contains a single timestamp column with 4 values, 2 of which are out of range
and should be read as NULL by Impala:
   1399-12-31 00:00:00 (invalid - date too small)
   1400-01-01 00:00:00
   9999-12-31 00:00:00
  10000-01-01 00:00:00 (invalid - date too large)
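
Impala stores these timestamps in Parquet as INT96: 8 bytes of
nanoseconds within the day followed by a 4-byte Julian day number, both
little-endian. A sketch of the range check, with the Julian day bounds
computed from the standard Gregorian-to-JDN formula (an assumption for
illustration, not code taken from Impala):

  import struct

  MIN_JULIAN_DAY = 2232400  # 1400-01-01
  MAX_JULIAN_DAY = 5373484  # 9999-12-31

  def int96_timestamp_in_range(raw12):
      # raw12: the 12-byte INT96 value from the data page.
      nanos_of_day, julian_day = struct.unpack("<qi", raw12)
      return MIN_JULIAN_DAY <= julian_day <= MAX_JULIAN_DAY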