Files
impala/tests/query_test/test_multiple_filesystems.py
Sailesh Mukil 6f1fe4ebe7 IMPALA-3577, IMPALA-3486: Partitions on multiple filesystems breaks with S3_SKIP_INSERT_STAGING
The HdfsTableSink usually creates an HDFS connection to the filesystem
that the base table resides in. However, if we create a partition on
a filesystem different from that of the base table and set
S3_SKIP_INSERT_STAGING to "true", the table sink will try to write to
the other filesystem using the wrong filesystem connector.

This patch allows the table sink to work across different filesystems
by replacing the single per-table FS connector with one connector per
partition.
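The real change lives in the C++ HdfsTableSink, but the idea can be illustrated abstractly. The sketch below, in Python with hypothetical names (`TableSink`, `connection_for`), shows the gist: filesystem connections are keyed by the scheme and authority of each partition's location instead of a single connection cached for the whole table.

```python
class TableSink:
  """Illustrative sketch only; not the actual Impala implementation."""

  def __init__(self, connect_fn):
    self._connect = connect_fn  # opens a connection to a filesystem URI
    self._connections = {}      # filesystem URI -> open connection

  def _fs_of(self, path):
    # Crude scheme+authority extraction, e.g. "s3a://bucket" from
    # "s3a://bucket/table/part=1/file".
    scheme, _, rest = path.partition("://")
    return scheme + "://" + rest.split("/", 1)[0]

  def connection_for(self, partition_path):
    # One connection per filesystem the partitions live on, rather than
    # one connection assumed to serve every partition of the table.
    fs = self._fs_of(partition_path)
    if fs not in self._connections:
      self._connections[fs] = self._connect(fs)
    return self._connections[fs]
```

With this shape, two partitions on the same filesystem share a connection, while a partition on a second filesystem transparently gets its own.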

This also re-enables the multiple_filesystems test and modifies it to
use the unique_database fixture so that parallel runs on the same
bucket do not clash and fail.

This patch also introduces a SECONDARY_FILESYSTEM environment variable
which will be set by the test to allow S3, Isilon and the localFS to
be used as the secondary filesystems.

All jobs with HDFS as the default filesystem need to set the
appropriate environment for S3 and Isilon, i.e. the following:
 - export AWS_SECRET_ACCESS_KEY
 - export AWS_ACCESS_KEY_ID
 - export SECONDARY_FILESYSTEM (to whatever filesystem needs to be
   tested)
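The test reads SECONDARY_FILESYSTEM through get_secondary_fs_path (imported below). A minimal sketch of how such a helper could work, assuming the variable holds a filesystem URI prefix such as "s3a://bucket" (the actual implementation in tests/util/filesystem_utils.py may differ):

```python
import os

def get_secondary_fs_path(path):
  # Hypothetical sketch: prepend the secondary filesystem's URI prefix
  # (e.g. "s3a://my-bucket") to an absolute path. With the variable
  # unset, paths resolve against the default filesystem (HDFS).
  prefix = os.environ.get("SECONDARY_FILESYSTEM", "")
  return prefix + path

os.environ["SECONDARY_FILESYSTEM"] = "s3a://test-bucket"
print(get_secondary_fs_path("/multi_fs_tests/"))  # s3a://test-bucket/multi_fs_tests/
```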

TODO: SECONDARY_FILESYSTEM, FILESYSTEM_PREFIX and NAMENODE have a
lot of similarities. They need to be cleaned up in a follow-up patch.

Change-Id: Ib13b610eb9efb68c83894786cea862d7eae43aa7
Reviewed-on: http://gerrit.cloudera.org:8080/3146
Reviewed-by: Sailesh Mukil <sailesh@cloudera.com>
Tested-by: Internal Jenkins
2016-05-31 23:32:11 -07:00

52 lines
2.3 KiB
Python

# Copyright (c) 2015 Cloudera, Inc. All rights reserved.
# Validates tables and queries that span multiple filesystems.
#
import pytest
import os
from subprocess import check_call, call

from tests.common.impala_test_suite import ImpalaTestSuite
from tests.common.test_dimensions import create_single_exec_option_dimension
from tests.common.skip import SkipIf
from tests.util.filesystem_utils import get_secondary_fs_path, S3_BUCKET_NAME, ISILON_NAMENODE


@SkipIf.no_secondary_fs
class TestMultipleFilesystems(ImpalaTestSuite):
  """
  Tests that tables and queries can span multiple filesystems.
  """

  @classmethod
  def get_workload(self):
    return 'functional-query'

  @classmethod
  def add_test_dimensions(cls):
    super(TestMultipleFilesystems, cls).add_test_dimensions()
    cls.TestMatrix.add_dimension(create_single_exec_option_dimension())
    cls.TestMatrix.add_constraint(lambda v:\
        v.get_value('table_format').file_format == 'text' and \
        v.get_value('table_format').compression_codec == 'none')

  def _populate_secondary_fs_partitions(self, db_name):
    # This directory may already exist, so we needn't mind if this call fails.
    call(["hadoop", "fs", "-mkdir", get_secondary_fs_path("/multi_fs_tests/")], shell=False)
    check_call(["hadoop", "fs", "-mkdir",
        get_secondary_fs_path("/multi_fs_tests/%s.db/" % db_name)], shell=False)
    check_call(["hadoop", "fs", "-cp", "/test-warehouse/alltypes_parquet/",
        get_secondary_fs_path("/multi_fs_tests/%s.db/" % db_name)], shell=False)
    check_call(["hadoop", "fs", "-cp", "/test-warehouse/tinytable/",
        get_secondary_fs_path("/multi_fs_tests/%s.db/" % db_name)], shell=False)

  @pytest.mark.execute_serially
  def test_multiple_filesystems(self, vector, unique_database):
    try:
      self._populate_secondary_fs_partitions(unique_database)
      self.run_test_case('QueryTest/multiple-filesystems', vector, use_db=unique_database)
    finally:
      # We delete this from the secondary filesystem here because the database was created
      # in HDFS, but the queries will create this path in the secondary FS as well, so
      # dropping the database will not delete the directory in the secondary FS.
      check_call(["hadoop", "fs", "-rm", "-r",
          get_secondary_fs_path("/multi_fs_tests/%s.db/" % unique_database)], shell=False)