impala/testdata/bin/create-table-many-blocks.sh
Martin Grund ce4c5f6743 IMPALA-4365: Enabling end-to-end tests on a remote cluster
This patch lays the groundwork for loading data and running end-to-end
tests on a remote CDH cluster. The requirements for the cluster to run
the tests are:

  - Managed by Cloudera Manager (CM)
  - GPL Extras need to be installed
  - KMS and KeyTrustee installed and available as a service
  - SERDEPROPERTIES in the Hive DB modified to accept wide tables
  - Hive warehouse dir points to /test-warehouse

The actual data loading is done via a new script, remote_data_load.py,
which takes the CM host as an argument. It can be run from a client
machine that is not a node of the cluster, but it needs to have the
Impala repo checked out and Impala built. This ensures that all of the
necessary data load scripts are available and that the environment is
set up properly (client binaries like beeline and the hbase shell are
available, python libraries like cm_api are installed, necessary
environment variables are defined, etc.).

It should be noted that running remote_data_load.py will overwrite
any local XML config files with the configurations downloaded from
the remote cluster.

Usage: remote_data_load.py [options] <cm_host address>

Options:
  -h, --help            show this help message and exit
  --snapshot-file=SNAPSHOT_FILE
                        Path to the test-warehouse archive
  --cm-user=CM_USER     Cloudera Manager admin user
  --cm-pass=CM_PASS     Cloudera Manager admin user password
  --gateway=GATEWAY     Gateway host to upload the data from. If not
                        set, uses the CM host as gateway.
  --ssh-user=SSH_USER   System user on the remote machine with
                        passwordless SSH configured.
  --no-load             Do not try to load the snapshot
  --exploration-strategy=EXPLORATION_STRATEGY
  --test                Run end-to-end tests against cluster
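
For example, a hypothetical invocation from a client machine might look
like the following (host names, credentials, and the snapshot path are
placeholders, not values from a real cluster):

  remote_data_load.py --cm-user=admin --cm-pass=<password> \
      --gateway=gateway.example.com \
      --snapshot-file=/tmp/test-warehouse-SNAPSHOT.tar.gz \
      --test cm-host.example.com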

Testing:

This patch is being submitted with the understanding that there are
still cleanup issues that need to be addressed in the remote data
load script, for which JIRAs have been filed.

However, since many of the existing build scripts also had to be
modified, it is more important to make sure that no regressions were
inadvertently introduced into the existing data load process. Loading
data to a local mini-cluster was verified repeatedly while this patch
was being developed, and the patch was also run against the Jenkins
job that provides the test-warehouse snapshot used by the many other
Impala CI builds that run daily.

Change-Id: I1f443a1728a1d28168090c6f54e82dec2cb073e9
Reviewed-on: http://gerrit.cloudera.org:8080/4769
Reviewed-by: Taras Bobrovytsky <tbobrovytsky@cloudera.com>
Tested-by: Internal Jenkins
2016-11-08 10:16:55 +00:00

#!/usr/bin/env bash
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Script that allows easily creating tables with a large number of partitions
# and/or blocks. To achieve generation of a large number of blocks, the script
# generates many tiny files. Each file will be assigned a unique block. These
# files are copied to HDFS and all partitions are mapped to this location. This
# way a table with 100K blocks can be created by using 100 partitions x 1000
# blocks/files.
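#
# A hypothetical example invocation (the counts below are only illustrative):
#   ./create-table-many-blocks.sh -p 100 -b 1000
# creates 100 partitions that each point at 1000 files, i.e. roughly 100K
# blocks in total, in table:
#   scale_db.num_partitions_100_blocks_per_partition_1000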
set -euo pipefail
trap 'echo Error in $0 at line $LINENO: $(cd "'$PWD'" && awk "NR == $LINENO" $0)' ERR
. ${IMPALA_HOME}/bin/impala-config.sh > /dev/null 2>&1
# Environment variables needed for remote cluster
: ${HS2_HOST_PORT:=localhost:11050}
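# For a remote cluster, HS2_HOST_PORT can be overridden in the environment
# before invoking this script, e.g. (host and port are placeholders):
#   HS2_HOST_PORT=<hs2-host>:<hs2-port> ./create-table-many-blocks.sh -p ... -b ...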
JDBC_URL="jdbc:hive2://${HS2_HOST_PORT}/default;"
HIVE_CMD="beeline -n $USER -u $JDBC_URL"
LOCAL_OUTPUT_DIR=$(mktemp -dt "impala_test_tmp.XXXXXX")
echo $LOCAL_OUTPUT_DIR
BLOCKS_PER_PARTITION=-1
NUM_PARTITIONS=-1
# parse command line options
while getopts "p:b:" OPTION
do
  case "$OPTION" in
    p)
      NUM_PARTITIONS=$OPTARG
      ;;
    b)
      BLOCKS_PER_PARTITION=$OPTARG
      ;;
    ?)
      echo "create-table-many-blocks.sh -p <num partitions> -b <num blocks / partition>"
      exit 1
      ;;
  esac
done
if [ $NUM_PARTITIONS -lt 1 ]; then
  echo "Must specify a value of 1 or more for the number of partitions"
  exit 1
fi
if [ $BLOCKS_PER_PARTITION -lt 0 ]; then
  echo "Must specify a value of 0 or greater for blocks per partition"
  exit 1
fi
HDFS_PATH=/test-warehouse/many_blocks_num_blocks_per_partition_${BLOCKS_PER_PARTITION}/
DB_NAME=scale_db
TBL_NAME=num_partitions_${NUM_PARTITIONS}_blocks_per_partition_${BLOCKS_PER_PARTITION}
$HIVE_CMD -e "create database if not exists scale_db"
$HIVE_CMD -e "drop table if exists ${DB_NAME}.${TBL_NAME}"
$HIVE_CMD -e "create external table ${DB_NAME}.${TBL_NAME} (i int) partitioned by (j int)"
# Generate many (small) files. Each file will be assigned a unique block.
echo "Generating ${BLOCKS_PER_PARTITION} files"
for b in $(seq ${BLOCKS_PER_PARTITION})
do
  echo $b > ${LOCAL_OUTPUT_DIR}/impala_$b.data
done
echo "Copying data files to HDFS"
hadoop fs -rm -r -f ${HDFS_PATH}
hadoop fs -mkdir -p ${HDFS_PATH}
hadoop fs -put ${LOCAL_OUTPUT_DIR}/* ${HDFS_PATH}
echo "Generating DDL statements"
# Use Hive to create the partitions because it supports bulk adding of partitions.
# Hive doesn't allow fully qualified table names in ALTER statements, so start with a
# USE <db>.
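# The generated file has roughly this shape (a sketch; the actual table name,
# partition count, and HDFS path come from the variables above):
#   use scale_db;
#   ALTER TABLE <tbl_name> ADD
#    PARTITION (j=1) LOCATION '<hdfs_path>'
#    PARTITION (j=2) LOCATION '<hdfs_path>'
#    ...
#   ;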
echo "use ${DB_NAME};" > ${LOCAL_OUTPUT_DIR}/hive_create_partitions.q
# Generate the H-SQL bulk partition DDL statement
echo "ALTER TABLE ${TBL_NAME} ADD " >> ${LOCAL_OUTPUT_DIR}/hive_create_partitions.q
for p in $(seq ${NUM_PARTITIONS})
do
  echo " PARTITION (j=$p) LOCATION '${HDFS_PATH}'" >>\
    ${LOCAL_OUTPUT_DIR}/hive_create_partitions.q
done
echo ";" >> ${LOCAL_OUTPUT_DIR}/hive_create_partitions.q
echo "Executing DDL via Hive"
$HIVE_CMD -f ${LOCAL_OUTPUT_DIR}/hive_create_partitions.q
echo "Done! Final result in table: ${DB_NAME}.${TBL_NAME}"