impala/tests/custom_cluster/test_disable_features.py
Eyizoha 2f06a7b052 IMPALA-10798: Initial support for reading JSON files
Prototype of HdfsJsonScanner implemented on top of rapidjson, which
supports scanning data from split JSON files.

Scanning JSON data is carried out by two components working together.
The first is the JsonParser, which is responsible for parsing JSON
objects and is implemented on the SAX-style API of rapidjson. It reads
data from the character stream, parses it, and invokes the
corresponding callback function whenever it encounters a JSON element.
See the comments on the JsonParser class for more details.
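The SAX-style, callback-driven approach described above can be sketched as
follows. This is an illustrative Python toy, not Impala's C++ code: the handler
method names (start_object, key, string, raw_number) are hypothetical stand-ins
for the callbacks that rapidjson's Reader fires, and the parser handles only
flat objects with string and number values.

```python
import re

# Token pattern: punctuation, quoted strings, or numeric literals
# (whitespace between tokens is simply skipped by findall).
_TOKEN = re.compile(r'[{}:,]|"(?:[^"\\]|\\.)*"|-?\d+(?:\.\d+)?(?:[eE][+-]?\d+)?')


class EventCollector(object):
  """Records the callbacks it receives, in order."""
  def __init__(self):
    self.events = []

  def start_object(self): self.events.append(('start_object',))
  def end_object(self): self.events.append(('end_object',))
  def key(self, name): self.events.append(('key', name))
  def string(self, value): self.events.append(('string', value))
  def raw_number(self, text): self.events.append(('number', text))


def sax_parse(text, handler):
  """Walk one flat JSON object, firing callbacks instead of building a tree."""
  tokens = _TOKEN.findall(text)
  if not tokens or tokens[0] != '{':
    raise ValueError('expected an object')
  handler.start_object()
  i = 1
  while tokens[i] != '}':
    handler.key(tokens[i].strip('"'))
    if tokens[i + 1] != ':':
      raise ValueError('expected ":"')
    value = tokens[i + 2]
    if value.startswith('"'):
      handler.string(value.strip('"'))
    else:
      handler.raw_number(value)  # numbers are handed over as raw strings
    i += 3
    if tokens[i] == ',':
      i += 1
  handler.end_object()


h = EventCollector()
sax_parse('{"id": 7, "name": "impala"}', h)
print(h.events)
```

Note that `raw_number` deliberately passes the digits through untouched; this
mirrors the design choice, described below, of letting the scanner do the
numeric conversion itself.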

The other component is the HdfsJsonScanner, which inherits from
HdfsScanner and provides the callback functions for the JsonParser. The
callbacks supply data buffers to the parser and convert and materialize
the parser's results into RowBatches. Note that the parser returns
numeric values to the scanner as strings. The scanner uses the
TextConverter class to convert the strings to the desired types,
similar to how the HdfsTextScanner works. This is an advantage over
using the numeric values provided by rapidjson directly, as it avoids
inconsistencies when converting decimals (e.g. loss of precision).
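The precision concern can be seen in a few lines of Python (illustrative only;
Impala's actual conversion path is C++ via TextConverter): converting the
textual digits directly preserves them, while routing through a binary double
first does not.

```python
from decimal import Decimal

raw = '0.1234567890123456789'      # 19 significant digits in the JSON text

via_text = Decimal(raw)            # convert straight from the string
via_double = Decimal(float(raw))   # parse to a C double first, then convert

print(via_text == Decimal(raw))    # True: the textual digits survive intact
print(via_double == via_text)      # False: digits beyond double precision are lost
```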

Added a startup flag, enable_json_scanner, so that the feature can be
disabled if critical bugs are hit in production.

Limitations
 - Multiline JSON objects are not fully supported yet. They work when
   each file has only one scan range. However, when a file has multiple
   scan ranges, there is a small probability that a multiline JSON
   object spanning ScanRange boundaries is scanned incompletely (in
   such cases, parsing errors may be reported). For more details, refer
   to the comments in 'multiline_json.test'.
 - Compressed JSON files are not supported yet.
 - Complex types are not supported yet.
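For context on the first limitation: the single-line case works because scan
ranges can re-synchronize on newlines, the same trick HdfsTextScanner uses. A
minimal sketch of that technique (hypothetical Python, not Impala's code): each
range skips ahead to the first record boundary at or after its start, and the
record straddling a range's end is read to completion by the range that owns
its first byte, so every newline-delimited record is scanned exactly once.
Multiline (pretty-printed) objects break this because newlines inside a record
make the resync point ambiguous.

```python
def scan_range(data, start, end):
  """Return the newline-delimited records owned by the range [start, end).

  A record belongs to the range in which its first byte lies. A range that
  does not begin the file skips to just past the first newline found at or
  after start - 1 (so a range starting exactly on a record boundary keeps
  that record), and reads its last record past `end` if necessary."""
  pos = start
  if start != 0:
    nl = data.find(b'\n', start - 1)
    if nl == -1:
      return []                       # no record starts inside this range
    pos = nl + 1
  records = []
  while pos < end and pos < len(data):
    nl = data.find(b'\n', pos)
    stop = len(data) if nl == -1 else nl
    records.append(data[pos:stop])    # read to the record's end, even past `end`
    pos = stop + 1
  return records


data = b'{"a": 1}\n{"b": 2}\n{"c": 3}\n'
ranges = [(0, 12), (12, len(data))]   # split mid-record, on purpose
print([r for s, e in ranges for r in scan_range(data, s, e)])
```

However the file is split, each record is returned by exactly one range; the
scheme relies on newlines appearing only between records, which is exactly
what multiline JSON violates.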

Tests
 - Most of the existing end-to-end tests can run on JSON format.
 - Added TestQueriesJsonTables in test_queries.py to test multiline,
   malformed, and overflowing values in JSON.

Change-Id: I31309cb8f2d04722a0508b3f9b8f1532ad49a569
Reviewed-on: http://gerrit.cloudera.org:8080/19699
Reviewed-by: Quanlong Huang <huangquanlong@gmail.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
2023-09-05 16:55:41 +00:00


# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from __future__ import absolute_import, division, print_function

import pytest

from tests.common.custom_cluster_test_suite import CustomClusterTestSuite
from tests.common.parametrize import UniqueDatabase
from tests.common.skip import SkipIfFS


class TestDisableFeatures(CustomClusterTestSuite):
  """Tests that involve disabling features at startup."""

  @classmethod
  def get_workload(self):
    return 'functional-query'

  @SkipIfFS.hdfs_caching
  @pytest.mark.execute_serially
  @UniqueDatabase.parametrize(sync_ddl=True)
  @CustomClusterTestSuite.with_args(
      catalogd_args="--enable_incremental_metadata_updates=false")
  def test_disable_incremental_metadata_updates(self, vector, unique_database):
    """Canary tests for disabling incremental metadata updates. Copy some partition
    related tests in metadata/test_ddl.py here."""
    vector.get_value('exec_option')['sync_ddl'] = True
    self.run_test_case('QueryTest/alter-table-hdfs-caching', vector,
        use_db=unique_database, multiple_impalad=True)
    self.run_test_case('QueryTest/alter-table-set-column-stats', vector,
        use_db=unique_database, multiple_impalad=True)

  @pytest.mark.execute_serially
  @CustomClusterTestSuite.with_args("--allow_ordinals_in_having=true")
  def test_allow_ordinals_in_having(self, vector):
    """Mirror the FE tests in AnalyzeStmtsTest#TestHavingIntegers to make sure the flag
    can bring back the legacy feature"""
    self.client.execute("select not bool_col as nb from functional.alltypes having 1")
    self.execute_query_expect_failure(
        self.client, "select count(*) from functional.alltypes having 1")
    self.client.execute("select count(*) > 10 from functional.alltypes having 1")
    self.execute_query_expect_failure(
        self.client,
        "select sum(id) over(order by id) from functional.alltypes having 1")
    self.execute_query_expect_failure(
        self.client,
        "select sum(id) over(order by id) from functional.alltypes having -1")

  @pytest.mark.execute_serially
  @CustomClusterTestSuite.with_args("--enable_json_scanner=false")
  def test_disable_json_scanner(self, vector):
    self.run_test_case('QueryTest/disable-json-scanner', vector)