impala/tests/query_test/test_multiple_filesystems.py
Sahil Takiar ac87278b16 IMPALA-8950: Add -d, -f options to hdfs copyFromLocal, put, cp
Add the -d and -f options to the following commands:

`hdfs dfs -copyFromLocal <localsrc> URI`
`hdfs dfs -put [ - | <localsrc1> .. ]. <dst>`
`hdfs dfs -cp URI [URI ...] <dest>`

The -d option "Skip[s] creation of temporary file with the suffix
._COPYING_.", which improves the performance of these commands on S3,
since S3 does not support metadata-only renames.

The -f option "Overwrites the destination if it already exists". Combined
with HADOOP-13884, this mitigates S3 consistency issues by avoiding the
HEAD request that would otherwise check whether the destination file
already exists.

Added the method 'copy_from_local' to the BaseFilesystem class.
Refactored most usages of the aforementioned HDFS commands to use
the filesystem_client. Some usages were not appropriate or worth
refactoring, so in those cases this patch just adds the '-d' and '-f'
options explicitly. All calls to '-put' were replaced with
'copyFromLocal' because both copy files from the local filesystem to an
HDFS-compatible target filesystem.
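
As a rough sketch of the shape this API might take on a client that shells
out to `hadoop fs`; the class name and method bodies below are illustrative
assumptions, not the actual BaseFilesystem or HadoopFsCommandLineClient code:

```python
from subprocess import check_call


class CommandLineFsClientSketch(object):
  """Illustrative 'hadoop fs'-backed filesystem client."""

  def copy_from_local(self, src, dst):
    # copyFromLocal replaces -put at the call sites: both copy from the
    # local filesystem to an HDFS-compatible target filesystem. '-d' and
    # '-f' are always passed for the S3-friendly behaviour described above.
    check_call(["hadoop", "fs", "-copyFromLocal", "-d", "-f", src, dst])

  def copy(self, src, dst, overwrite=False):
    # Mirrors the copy(..., overwrite=True) calls used in the test below.
    args = ["hadoop", "fs", "-cp", "-d"]
    if overwrite:
      args.append("-f")
    check_call(args + [src, dst])
```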

Since WebHDFS does not have good support for copying files, this patch
removes the copy functionality from the PyWebHdfsClientWithChmod.
Refactored the hdfs_client so that it uses a DelegatingHdfsClient
that delegates to either the HadoopFsCommandLineClient or the
PyWebHdfsClientWithChmod.
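
A minimal sketch of the delegation pattern described here; the routing
choices, constructor, and method set are assumptions for illustration rather
than the exact DelegatingHdfsClient:

```python
class DelegatingHdfsClientSketch(object):
  """Routes copy operations to the command-line client (WebHDFS has no
  good copy support) and the remaining operations to the WebHDFS client."""

  def __init__(self, webhdfs_client, cli_client):
    self._webhdfs = webhdfs_client  # e.g. a PyWebHdfsClientWithChmod
    self._cli = cli_client          # e.g. a HadoopFsCommandLineClient

  def copy(self, src, dst, overwrite=False):
    return self._cli.copy(src, dst, overwrite=overwrite)

  def copy_from_local(self, src, dst):
    return self._cli.copy_from_local(src, dst)

  def chmod(self, path, permission):
    return self._webhdfs.chmod(path, permission)
```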

Testing:
* Ran core tests on HDFS and S3

Change-Id: I0d45db1c00554e6fb6bcc0b552596d86d4e30144
Reviewed-on: http://gerrit.cloudera.org:8080/14311
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
2019-10-05 00:04:08 +00:00


# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# Validates table stored on the LocalFileSystem.
#
import pytest
from subprocess import check_call, call

from tests.common.impala_test_suite import ImpalaTestSuite
from tests.common.skip import SkipIf
from tests.common.test_dimensions import create_single_exec_option_dimension
from tests.util.filesystem_utils import get_secondary_fs_path


@SkipIf.no_secondary_fs
class TestMultipleFilesystems(ImpalaTestSuite):
  """
  Tests that tables and queries can span multiple filesystems.
  """

  @classmethod
  def get_workload(self):
    return 'functional-query'

  @classmethod
  def add_test_dimensions(cls):
    super(TestMultipleFilesystems, cls).add_test_dimensions()
    cls.ImpalaTestMatrix.add_dimension(create_single_exec_option_dimension())
    cls.ImpalaTestMatrix.add_constraint(lambda v:\
        v.get_value('table_format').file_format == 'text' and \
        v.get_value('table_format').compression_codec == 'none')

  def _populate_secondary_fs_partitions(self, db_name):
    # This directory may already exist. So we needn't mind if this call fails.
    call(["hadoop", "fs", "-mkdir", get_secondary_fs_path("/multi_fs_tests/")], shell=False)
    check_call(["hadoop", "fs", "-mkdir",
                get_secondary_fs_path("/multi_fs_tests/%s.db/" % db_name)], shell=False)
    self.filesystem_client.copy("/test-warehouse/alltypes_parquet/",
        get_secondary_fs_path("/multi_fs_tests/%s.db/" % db_name), overwrite=True)
    self.filesystem_client.copy("/test-warehouse/tinytable/", get_secondary_fs_path(
        "/multi_fs_tests/%s.db/" % db_name), overwrite=True)

  @pytest.mark.execute_serially
  def test_multiple_filesystems(self, vector, unique_database):
    try:
      self._populate_secondary_fs_partitions(unique_database)
      self.run_test_case('QueryTest/multiple-filesystems', vector, use_db=unique_database)
    finally:
      # We delete this from the secondary filesystem here because the database was created
      # in HDFS but the queries will create this path in the secondary FS as well. So
      # dropping the database will not delete the directory in the secondary FS.
      check_call(["hadoop", "fs", "-rm", "-r",
                  get_secondary_fs_path("/multi_fs_tests/%s.db/" % unique_database)], shell=False)