For some time, Impala in production environments has been able
to access data stored in Amazon S3 buckets using credentials specified
in a number of ways:
- storing Amazon access keys in environment variables or
  in core-site.xml,
- using proprietary management tools to store Amazon access keys
  securely, or
- using Amazon IAM roles bound to VMs running in EC2.
The development minicluster environment used the first approach,
which risked leaking these keys.
This change enables Impala builds to use IAM
roles to access S3 buckets when running on an Amazon EC2 virtual
machine. The changes mainly ensure that environment variables carrying
the traditional AWS credentials do not conflict with credentials supplied
by the IAM role attached to the VM instance.
IAM-role-based credentials are accessible through the EC2
instance metadata mechanism; for further details see Amazon's docs at
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#instance-metadata-security-credentials
The change also removes the remaining references to the s3n: provider.
In the FE tests all URIs referring to s3n: are replaced with their
s3a: equivalents, except for a single negative test in
AnalyzeStmtsTest.java, which is removed.
In addition to the code changes, the s3n: and s3a: credential properties
are also removed from core-site.xml.tmpl. The s3a: provider can pick up
AWS S3 credentials from environment variables or from the IAM role bound
to the VM instance, which is a more flexible approach.
As environment variables have precedence over IAM roles, care must be
taken when managing the canonical environment variables carrying
AWS credentials. There are two requirements to be reconciled:
1. The FE tests have code that examines s3a: URIs; this code needs
   the AWS credential variables to exist, but they need not be valid.
2. When the Impala test suite is executed on an EC2 VM, AWS credentials
   can be supplied via IAM roles. These credentials can be used only
   if the AWS_* environment variables are unset (do not exist).
The tradeoff is managed according to these rules:
1. When AWS_* environment variables are set before invoking the
   Impala configuration scripts, their values are preserved and
   the config scripts ensure that the variables are exported.
2. If the AWS_* variables are missing or empty, they are unset
   so that credentials supplied by Amazon's IAM roles can be
   accessed,
3. except when the scripts are running outside of EC2 (where there
   can be no IAM roles) and TARGET_FILESYSTEM is not set to "s3".
   This combination is most often the case on a developer's local
   workstation. In this case the AWS_* credential variables are
   forcibly set to dummy values so that the FE tests can succeed.
The removal of the S3 credential parameters from core-site.xml[.tmpl]
also allows users to set up their own credentials there; the config
scripts will not change those settings.
Environment variables carrying AWS security credentials will be set
up according to the following table:
Instance:            | Running outside EC2 ||   Running in EC2    |
---------------------+----------+----------++----------+----------+
TARGET_FILESYSTEM    |    S3    |  not S3  ||    S3    |  not S3  |
---------------------+----------+----------++----------+----------+
AWS_*      empty     |  unset   |  dummy   ||  unset   |  unset   |
env      ------------+----------+----------++----------+----------+
var      not empty   |  export  |  export  ||  export  |  export  |
---------------------+----------+----------++----------+----------+
Legend: unset:  the variable is unset
        export: the variable is exported with its current value
        dummy:  the variable is set to a preset dummy value and exported
Running on an EC2 VM is indicated by setting RUNNING_IN_EC2 to "true" and
exporting it before impala-config.sh is invoked.
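A minimal sketch of this variable handling, assuming a hypothetical
helper in the config scripts (the function name and dummy value are
illustrative, not the actual code):

  # Sketch only: approximates the rules and table above.
  setup_aws_var() {
    local var_name=$1
    if [ -n "${!var_name:-}" ]; then
      export "$var_name"               # rule 1: preserve and export
    elif [ "${RUNNING_IN_EC2:-false}" != "true" ] && \
         [ "${TARGET_FILESYSTEM:-}" != "s3" ]; then
      export "$var_name=DummyValue"    # rule 3: dummy value for the FE tests
    else
      unset "$var_name"                # rule 2: let IAM role credentials apply
    fi
  }
  for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN; do
    setup_aws_var "$v"
  done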
The change also moves the logic performing the S3 access checks into a
separate script file: bin/check-s3-access.sh. This file now contains all
the S3-specific logic and the network access needed to check whether the
requested S3 bucket can be accessed.
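A hedged sketch of the kind of check such a script can perform (the
bucket variable name is hypothetical; the real script's logic may
differ):

  # Sketch: verify the bucket is reachable with the current credentials.
  if hadoop fs -ls "s3a://${S3_BUCKET}/" >/dev/null 2>&1; then
    echo "S3 bucket ${S3_BUCKET} is accessible"
  else
    echo "Cannot access S3 bucket ${S3_BUCKET}; check credentials or IAM role" >&2
    exit 1
  fi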
Testing:
Performed local builds for HDFS as well as automated builds against
HDFS and S3, using both IAM roles and explicit AWS_* credentials for
authentication.
Verified that FE tests that parse s3a: URLs are still successful in
all these combinations (when they are run).
Change-Id: I14cd9d4453a91baad3c379aa7e4944993fca95ae
Reviewed-on: http://gerrit.cloudera.org:8080/8294
Reviewed-by: Philip Zeyliger <philip@cloudera.com>
Reviewed-by: Zach Amsden <zamsden@cloudera.com>
Tested-by: Impala Public Jenkins
This patch leverages the AdlFileSystem in Hadoop to allow
Impala to talk to the Azure Data Lake Store. This patch has
functional changes as well as adds test infrastructure for
testing Impala over ADLS.
We do not support ACLs on ADLS since the Hadoop ADLS
connector does not integrate ADLS ACLs with Hadoop users/groups.
For testing, we use the azure-data-lake-store-python client
from Microsoft. This client seems to have some consistency
issues. For example, a drop table through Impala will delete
the files in ADLS; however, listing that directory through
the Python client immediately after the drop will still show
the files. This behavior is unexpected since ADLS claims to be
strongly consistent. Some tests have been skipped due to this
limitation with the tag SkipIfADLS.slow_client. Tracked by
IMPALA-5335.
The azure-data-lake-store-python client also only works on CentOS 6.6
and above, so the Python dependencies for Azure will not be downloaded
when the TARGET_FILESYSTEM is not "adls". When running ADLS tests,
the expectation is that the machine runs at least CentOS 6.6.
Note: This is only a test limitation, not a functional one. Clusters
with older OSes like CentOS 6.4 will still work with ADLS.
Added another dependency to bootstrap_build.sh for the ADLS Python
client.
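A minimal sketch of this kind of conditional bootstrapping (the PyPI
package name azure-datalake-store is the client's distribution name;
the exact script logic may differ):

  # Sketch: only pull the ADLS Python client when it will be used.
  if [ "${TARGET_FILESYSTEM:-}" = "adls" ]; then
    pip install azure-datalake-store
  fi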
Testing: Ran core tests with and without TARGET_FILESYSTEM as
'adls' to make sure that all tests pass and that nothing breaks.
Change-Id: Ic56b9988b32a330443f24c44f9cb2c80842f7542
Reviewed-on: http://gerrit.cloudera.org:8080/6910
Tested-by: Impala Public Jenkins
Reviewed-by: Sailesh Mukil <sailesh@cloudera.com>
By default, Kudu assumes it has 80% of system memory, which
is far too high for the minicluster. This sets a memory limit
of 2 GB and lowers the limit of the block cache. These values
were tested on a gerrit-verify-dryrun job as well as an
exhaustive run.
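A hedged sketch of the kind of Kudu gflags involved (the flag names are
Kudu's standard memory gflags; the block cache value and directories
shown are illustrative, not the ones picked here):

  # Sketch: tighter memory settings for a minicluster Kudu daemon.
  kudu-tserver \
    --memory_limit_hard_bytes=$((2 * 1024 ** 3)) \
    --block_cache_capacity_mb=64 \
    --fs_wal_dir=/tmp/kudu/wal --fs_data_dirs=/tmp/kudu/data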
This patch also simplifies TestKuduMemLimits, which was
unnecessarily creating a large table during test execution.
Change-Id: I7fd7e1cd9dc781aaa672a2c68c845cb57ec885d5
Reviewed-on: http://gerrit.cloudera.org:8080/6844
Reviewed-by: Todd Lipcon <todd@apache.org>
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Impala Public Jenkins
The minicluster setup logic assigned fixed port numbers to several,
but not all, listening sockets of the data nodes. This change
assigns similar port ranges to all the listening ports that so far
had been allowed to pick their own port numbers, which could interfere
with other components, e.g. HBase.
Change-Id: Iecf312873b7026c52b0ac0e71adbecab181925a0
Reviewed-on: http://gerrit.cloudera.org:8080/6531
Reviewed-by: Michael Brown <mikeb@cloudera.com>
Tested-by: Impala Public Jenkins
We've seen repeated test failures because HBase tries to bind to ports
in the ephemeral port range, which sometimes would already be occupied
by outgoing connections of other processes.
This change switches the ports to the new default HBase ports
(HBASE-10123):
HBase Master Port: 60000 -> 16000
HBase Master Web UI Port: 60010 -> 16010
HBase RegionServer Port: 60020 -> 16020
HBase RegionServer Web UI Port: 60030 -> 16030
HBase Status Multicast Port: 60100 -> 16100
This made it necessary to change the default KMS port, too
(HADOOP-12811):
KMS HTTP port: 16000 -> 9600
Change-Id: I6f8af325e34b6e352afd75ce5ddd2446ce73d857
Reviewed-on: http://gerrit.cloudera.org:8080/6524
Reviewed-by: Lars Volker <lv@cloudera.com>
Tested-by: Impala Public Jenkins
IMPALA-4553 required ntp-wait to succeed before Kudu would start
(assuming ntp-wait was installed), in order to prevent a litany of
errors on EC2 about unsynchronized clocks. This patch disables that
waiting if no internet connection is detected, in order to make it
possible to start the minicluster when offline.
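A minimal sketch of this kind of gating (the connectivity probe shown
is an assumption, not necessarily the one the script uses):

  # Sketch: skip ntp-wait when there is no internet connectivity.
  if ping -c 1 -W 1 ntp.ubuntu.com >/dev/null 2>&1; then
    ntp-wait || { echo "ntpd not synchronized" >&2; exit 1; }
  else
    echo "No internet connection detected; skipping ntp-wait"
  fi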
Change-Id: Ifbb5babebb0ca6d2553be1b001e20e2270e052b6
Reviewed-on: http://gerrit.cloudera.org:8080/5412
Reviewed-by: Jim Apple <jbapple-impala@apache.org>
Tested-by: Impala Public Jenkins
When ntpd is not synchronized, kudu initialization fails on the master
node:
F1129 16:37:28.969956 15230 master_main.cc:68] Check failed:
_s.ok() Bad status: Service unavailable: Cannot initialize clock:
Error reading clock. Clock considered unsynchronized
Change-Id: I371e01e21246a8c0ece98ca7d4bf6761615127b4
Reviewed-on: http://gerrit.cloudera.org:8080/5258
Reviewed-by: Jim Apple <jbapple-impala@apache.org>
Tested-by: Impala Public Jenkins
In our IPMC vote to release 2.7.0 rc3, Justin Mclean pointed out a
number of issues of compliance with ASF policy. He asked:
1. "Please place build instruction and supported platforms in the
README. The wiki may change over time and that may make it difficult
to build older versions."
2. Remove binary file llvm-ir/test-loop.bc
3. Add be/src/gutil/valgrind.h,
shell/ext-py/sqlparse-0.1.14/sqlparse/pipeline.py and
cmake_modules/FindJNI.cmake, normalize.css (embedded in bootstrap.css)
to LICENSE.txt
4. Fix be/src/thirdparty/squeasel/squeasel* in LICENSE.txt
5. Remove outdated copyright lines from HBase (see
https://issues.apache.org/jira/browse/HBASE-3870)
6. Remove duplicate jquery notice from LICENSE.txt
Change-Id: I30ff77d7ac28ce67511c200764fba19ae69922e0
Reviewed-on: http://gerrit.cloudera.org:8080/4582
Reviewed-by: Jim Apple <jbapple@cloudera.com>
Tested-by: Internal Jenkins
This change removes some of the occurrences of the strings 'CDH'/'cdh'
from the Impala repository. References to Cloudera-internal Jiras have
been replaced with upstream Jira issues on issues.cloudera.org.
For several categories of occurrences (e.g. pom.xml files,
DOWNLOAD_CDH_COMPONENTS) I also created a list of follow-up Jiras to
remove the occurrences left after this change.
Change-Id: Icb37e2ef0cd9fa0e581d359c5dd3db7812b7b2c8
Reviewed-on: http://gerrit.cloudera.org:8080/4187
Reviewed-by: Jim Apple <jbapple@cloudera.com>
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
Alas, poor Llama! I knew him, Impala: a system
of infinite jest, of most excellent fancy: we hath
borne him on our back a thousand times; and now, how
abhorred in my imagination it is!
Done:
* Removed QueryResourceMgr, ResourceBroker, CGroupsMgr
* Removed untested 'offline' mode and NM failure detection from
ImpalaServer
* Removed all Llama-related Thrift files
* Removed RM-related arguments to MemTracker constructors
* Deprecated all RM-related flags, printing a warning if enable_rm is
set
* Removed expansion logic from MemTracker
* Removed VCore logic from QuerySchedule
* Removed all reservation-related logic from Scheduler
* Removed RM metric descriptions
* Various misc. small class changes
Not done:
* Remove RM flags (--enable_rm etc.)
* Remove RM query options
* Changes to RequestPoolService (see IMPALA-4159)
* Remove estimates of VCores / memory from plan
Change-Id: Icfb14209e31f6608bb7b8a33789e00411a6447ef
Reviewed-on: http://gerrit.cloudera.org:8080/4445
Tested-by: Internal Jenkins
Reviewed-by: Henry Robinson <henry@cloudera.com>
For files that have a Cloudera copyright (and no other copyright
notice), make changes to follow the ASF source file header policy here:
http://www.apache.org/legal/src-headers.html#headers
Specifically:
1) Remove the Cloudera copyright.
2) Modify NOTICE.txt according to
http://www.apache.org/legal/src-headers.html#notice
to follow that format and add a line for Cloudera.
3) Replace or add the existing ASF license text with the one given
on the website.
Much of this change was automatically generated via:
git grep -li 'Copyright.*Cloudera' > modified_files.txt
cat modified_files.txt | xargs perl -n -i -e 'print unless m#Copyright.*Cloudera#i;'
cat modified_files.txt | xargs fix_apache_license.py [1]
Some manual fixups were performed following those steps, especially when
license text was completely missing from the file.
[1] https://gist.github.com/anonymous/ff71292094362fc5c594 with minor
modification to ORIG_LICENSE to match Impala's license text.
Change-Id: I2e0bd8420945b953e1b806041bea4d72a3943d86
Reviewed-on: http://gerrit.cloudera.org:8080/3779
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Internal Jenkins
Currently we don't reset the file read offset if a zero-copy read
(ZCR) fails. Due to this, when we switch to the normal read path, we
hit the eosr (end of scan range) even before reading the expected data
length. If both the ReadFromCache() and ReadRange() calls fail without
reading any data, we end up creating a whole list of scan ranges, each
with size 1KB (DEFAULT_READ_PAST_SIZE), assuming we are reading past
the scan range. This causes a huge performance hit. This patch just
calls ScanRange::Close() after the failed cache reads to clean up the
file system state so that the re-reads start from the beginning of
the scan range.
This was hit as a part of debugging IMPALA-3679, where queries
on 1 GB of cached data were running ~20x slower compared to non-cached
runs.
Change-Id: I0a9ea19dd8571b01d2cd5b87da1c259219f6297a
Reviewed-on: http://gerrit.cloudera.org:8080/3313
Reviewed-by: Michael Brown <mikeb@cloudera.com>
Tested-by: Bharath Vissapragada <bharathv@cloudera.com>
Both `find -executable` and the Bash "&>>" operator are too new to be
supported on RHEL5. Both have reasonable workarounds, so prefer those
instead. Note that this may not be the exhaustive list of such "modern"
conventions, but RHEL5 isn't working end-to-end, so we can't identify
all of them in a single commit yet.
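A sketch of the portable replacements in question (the exact spellings
chosen here may differ):

  # Instead of "find . -type f -executable" (missing in RHEL5 findutils):
  find . -type f -perm +0111        # old-style match; newer GNU find spells it -perm /0111
  # Instead of "some_command &>> build.log" (Bash 4 only; RHEL5 has 3.2):
  some_command >> build.log 2>&1    # portable append of stdout and stderr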
Testing:
Before, the RHEL5 build would fail quite early here. Now, data load
succeeds and most of the backend tests successfully run.
Change-Id: I7438bed908d8026327923607238808122212d2d8
Reviewed-on: http://gerrit.cloudera.org:8080/3531
Reviewed-by: David Knupp <dknupp@cloudera.com>
Tested-by: Internal Jenkins
This change updates the toolchain bootstrapping script
to download the CDH components (hadoop, hbase, hive, llama,
llama-minikdc and sentry) from the toolchain S3 bucket to
the toolchain directory if the environment variable
$DOWNLOAD_CDH_COMPONENTS is true. By default it is false,
which means the CDH components in the thirdparty directory
will be used instead.
To build the ASF tree (https://git-wip-us.apache.org/repos/asf?p=incubator-impala.git),
set $DOWNLOAD_CDH_COMPONENTS to true. Currently, the CDH
components in S3 are snapshots from the thirdparty directory
at 688d0efcd38731e8e27a8236dbdca21c8fd571a1. Once the integration
jenkins job (impala-cdh5-trunk-core-integration) is modified
to upload the latest stable builds to the S3 buckets, we can
remove the thirdparty directory and always use the CDH components
in the toolchain directory.
Note that bootstrap_toolchain.py will not overwrite existing
directories in the toolchain directory. To force a refresh of
components in the toolchain directory, a user should delete the
cached copy in the toolchain directory and execute
bootstrap_toolchain.py again. This behavior allows users to
develop locally without a network connection once the toolchain
has been bootstrapped.
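For example, refreshing a single cached component can look like this
(the toolchain path variable and the component directory name are
illustrative):

  # Opt in to downloading CDH components from the toolchain S3 bucket.
  export DOWNLOAD_CDH_COMPONENTS=true
  # Force a re-download by removing the cached copy first.
  rm -rf "$IMPALA_TOOLCHAIN/hadoop-2.6.0"
  ./bin/bootstrap_toolchain.py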
Change-Id: I16fa79db0005554cc0a116e74775647ba99f8dda
Reviewed-on: http://gerrit.cloudera.org:8080/3333
Reviewed-by: Michael Ho <kwho@cloudera.com>
Tested-by: Internal Jenkins
If the PID files of the processes in the mini cluster get deleted for
some reason, it should still be possible to kill them, because each
process is marked with "-DIBelongToTheMiniCluster". It turns out that
the KMS process was not being marked. This patch fixes this.
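A sketch of how the marker enables cleanup without PID files:

  # Kill every minicluster JVM by matching the marker in its command line.
  pkill -f -- '-DIBelongToTheMiniCluster'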
Change-Id: I0398dec94be3ae91548d11a79c1d5eec0ad3dadb
Reviewed-on: http://gerrit.cloudera.org:8080/3354
Reviewed-by: Taras Bobrovytsky <tbobrovytsky@cloudera.com>
Tested-by: Taras Bobrovytsky <tbobrovytsky@cloudera.com>
Through empirical analysis, it was determined that setting the maximum
number of connections to S3 to 1500 was optimal for functionality and
performance. The Hadoop default of 15 connections could lead us to
have deadlocks, as our Parquet scanner requires multiple concurrent
open connections proportional to the number of columns that we are
scanning.
Setting it to this high a value does not seem to have any negative
implications.
This has also been found to fix the "Error(255): Unknown" errors.
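The property behind this is presumably fs.s3a.connection.maximum, the
standard s3a connection pool key; a hedged example of overriding it for
a single command (bucket name illustrative):

  # Sketch: raise the s3a connection pool for one command via -D.
  hadoop fs -Dfs.s3a.connection.maximum=1500 -ls "s3a://my-bucket/test-warehouse"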
Change-Id: Ide6f1326d5155b2e5f4da3a3f23df3f3d40c5a8d
Reviewed-on: http://gerrit.cloudera.org:8080/3114
Reviewed-by: Sailesh Mukil <sailesh@cloudera.com>
Tested-by: Internal Jenkins
This patch implements a new feature to read the auth_to_local
configs from HDFS configuration files, using the parameter
hadoop.security.auth_to_local. This is done by modifying the
User#getShortName() method to use its HDFS equivalent.
This patch includes an end-to-end authorization test using
Sentry, where we add a specific auth_to_local setting for a certain
user and test whether Sentry authorization passes for this user
after applying these rules. Given that we don't have tests that run
on a kerberized mini-cluster, this patch adds a hack to load this
configuration even on non-kerberized test runs.
However, this feature is disabled by default to preserve the
existing behavior. To enable it (see the sketch after this list):
1. Use Kerberos as the authentication mechanism (by setting --principal), and
2. Add "--load_auth_to_local_rules=true" to the cluster startup args.
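A hedged sketch of what enabling this can look like with the
minicluster startup script (the principal value is illustrative):

  # Sketch: start impalads with Kerberos auth and rule loading enabled.
  ./bin/start-impala-cluster.py \
    --impalad_args="--principal=impala/_HOST@EXAMPLE.COM --load_auth_to_local_rules=true"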
Change-Id: I76485b83c14ba26f6fce66e5f83e8014667829e0
Reviewed-on: http://gerrit.cloudera.org:8080/2800
Reviewed-by: Bharath Vissapragada <bharathv@cloudera.com>
Tested-by: Internal Jenkins
/tmp isn't necessarily on the same filesystem as the Kudu data
directory. Fix the check so that it checks the actual Kudu directory.
Change-Id: Ic6aa27569a0650db7dcf5759952cd50c8e47f8c9
Reviewed-on: http://gerrit.cloudera.org:8080/2967
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
This change whitelists the supported filesystems that can be set
as the default FS for Impala to run on.
This patch configures Impala to use S3 as the default filesystem, rather
than as a secondary filesystem as before.
Change-Id: I2f45bef6c94ece634045acb906d12591587ccfed
Reviewed-on: http://gerrit.cloudera.org:8080/1121
Reviewed-by: anujphadke <aphadke@cloudera.com>
Tested-by: Internal Jenkins
Changes:
1) Previously when a service would fail, the user would have to find
the log file and open it. Now the end of the log is dumped to stdout.
2) Add start, stop, and restart commands to the "admin" script. For
example, now you can run
testdata/cluster/admin restart kudu
3) Wait up to 120 seconds for services to shut down. The timeout is the
same as for the Impala processes. If the services fail to stop, an
error will be raised.
Change-Id: I537ea5656df2081d4f1f27a9f3fcef4547fdc2fe
Reviewed-on: http://gerrit.cloudera.org:8080/2751
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
By default Kudu requires the underlying file system to support hole
punching. If support isn't there, Kudu will fail to start. People using
such a file system can instead start Kudu with -block_manager=file.
Before starting Kudu in the local mini-cluster, the "fallocate"
command will be used to automatically determine if the special flag is
needed.
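A hedged sketch of such a probe (the script's actual detection may
differ; the data directory variable is illustrative):

  # Sketch: check whether the Kudu data dir's filesystem can punch holes.
  probe="$KUDU_DATA_DIR/.hole_punch_probe"
  dd if=/dev/zero of="$probe" bs=4096 count=1 2>/dev/null
  if fallocate -p -n -o 0 -l 4096 "$probe" 2>/dev/null; then
    EXTRA_KUDU_FLAGS=""
  else
    EXTRA_KUDU_FLAGS="-block_manager=file"   # fall back to the file block manager
  fi
  rm -f "$probe"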
Note, users who need this must run bin/create-test-configuration.sh
after pulling in this commit.
This also fixes a bug in delete_kudu_data() in the cluster admin
script. A directory name was incorrect.
Change-Id: I1ca7fedb367444c41e462b72b0b76091ee94e27c
Reviewed-on: http://gerrit.cloudera.org:8080/2750
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
The directory structure of the newer Kudu toolchain artifacts has
changed. Now the root directory is split into /release and /debug. A few
little updates are needed to the build and service scripts.
Since the toolchain no longer provides stubs for platforms that Kudu
doesn't support, the stubs need to be generated. This will be done as
part of the toolchain bootstrapping.
Also this upgrades Kudu to 0.8 RC1.
Developers will need to run bin/create-test-configuration.sh after
pulling in this change. Otherwise the Kudu service will fail to start.
Change-Id: I625903bd92afece0ad819a96fc275d5812b5eb2a
Reviewed-on: http://gerrit.cloudera.org:8080/2720
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
The Hive server does not start for S3 builds because HDFS is marked
as an unsupported service in testdata/cluster/admin, so HDFS is
not started at all, and so the Hive server is unable to start either.
Due to this, all our S3 builds fail.
Currently our S3 builds need HDFS to run correctly.
(This has to be reverted once IMPALA-1850 goes in, because then S3 can
run as a default FS without HDFS)
Change-Id: Ibda9dc3ef895c2aa4d39eb5694ac5f2dbd83bee4
Reviewed-on: http://gerrit.cloudera.org:8080/2741
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
The Kudu team recommended disabling fsync for testing purposes. This
should help with timeouts in cloud machines (ec2/gce). Disabling
fsyncs could lead to data loss if the system crashed before the OS had a
chance to write the data to disk. Our test setups don't need that level
of reliability.
Change-Id: I72fd85ce5c4bc71f071b854ea6a9ebe60fc1305f
Reviewed-on: http://gerrit.cloudera.org:8080/2734
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
Previously Kudu would only be started when the test configuration was
the standard mini-cluster. That led to failures during data loading when
testing without the mini-cluster (e.g. local file system). Kudu doesn't
require any other services, so now it'll be started for all test
environments.
Change-Id: I92643ca6ef1acdbf4d4cd2fa5faf9ac97a3f0865
Reviewed-on: http://gerrit.cloudera.org:8080/2690
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
The stubs in Impala broke during the merge commit. This commit removes
the stubs in hopes of improving the robustness of the build. The original
problem (Kudu clients are only available for some OSs) is now addressed
by moving the stubbing into a dummy Kudu client. The dummy client only
allows linking to succeed; if any client method is called, Impala will
crash. Before calling any such method, Kudu availability must be
checked.
Change-Id: I4bf1c964faf21722137adc4f7ba7f78654f0f712
Reviewed-on: http://gerrit.cloudera.org:8080/2585
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
All logs, test results and SQL files generated during data
loading and testing are now consolidated under a single new
directory $IMPALA_HOME/logs. The goal is to simplify archiving
in Jenkins runs and debugging.
The new structure is as follows:
$IMPALA_HOME/logs/cluster
- logs of Hadoop components and Impala
$IMPALA_HOME/logs/data_loading
- logs and SQL files produced in data loading
$IMPALA_HOME/logs/fe_tests
- logs and test output of Frontend unit tests
$IMPALA_HOME/logs/be_tests
- logs and test output of Backend unit tests
$IMPALA_HOME/logs/ee_tests
- logs and test output of end-to-end tests
$IMPALA_HOME/logs/custom_cluster_tests
- logs and test output of custom cluster tests
I tested this change with a full data load, which
was successful.
Change-Id: Ief1f58f3320ec39d31b3c6bc6ef87f58ff7dfdfa
Reviewed-on: http://gerrit.cloudera.org:8080/2456
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
This is for review purposes only. This patch will be merged with David's
big merge patch.
Changes:
1) Make Kudu compilation dependent on the OS since not all OSs support
Kudu.
2) Only run Kudu related tests when Kudu is supported (see #1).
3) Look for Kudu locally, but in a different location. To use a local
build of Kudu, set KUDU_BUILD_DIR to the path Kudu was built in and
set KUDU_CLIENT_DIR to the path Kudu was installed in.
Example:
git clone https://github.com/cloudera/kudu.git
...build 3rd party etc...
mkdir -p $KUDU_BUILD_DIR
cd $KUDU_BUILD_DIR
cmake <path to Kudu source dir>
make
DESTDIR=$KUDU_CLIENT_DIR make install
4) Look for Kudu in the toolchain if not using a local Kudu build.
5) Add Kudu service startup scripts. The Kudu in the toolchain is
actually a parcel that has been renamed (the contents were not
modified in any way), which means the Kudu service binaries are there.
Those binaries are now used to run the Kudu service.
Change-Id: I3db88cbd27f2ea2394f011bc8d1face37411ed58
This merges the 'feature/kudu' branch with cdh5-trunk as of commit:
055500cc753f87f6d1c70627321fcc825044e183
This patch is not a pure merge patch, in the sense that it goes beyond
conflict resolution to also address reviews of the 'feature/kudu' branch
as a whole.
The review items and their resolution can be inspected at:
http://gerrit.cloudera.org:8080/#/c/1403/
Change-Id: I6dd4270cd17a4f5c02811c343726db3504275a92
The major changes are:
1) Collect backtrace and fatal log on crash.
2) Poll memory usage. The data is only displayed at this time.
3) Support kerberos.
4) Add random queries.
5) Generate random and TPC-H nested data on a remote cluster. The
random data generator was converted to use MR for scaling.
6) Add a cluster abstraction to run data loading for #5 on a
remote or local cluster. This also moves and consolidates some
Cloudera Manager utilities that were in the stress test.
7) Clean up the wrappers around impyla. That stuff was getting
messy.
Change-Id: I4e4b72dbee1c867626a0b22291dd6462819e35d7
Reviewed-on: http://gerrit.cloudera.org:8080/1298
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
Some of the tests rely on the HDFS trash mechanism being enabled and
poll the paths in the trash directory during test runs. These tests are
failing intermittently due to a race with the HDFS trash checkpointing
mechanism, which moves all the trash contents to another directory.
This checkpointing runs every fs.trash.checkpoint.interval minutes
and defaults to fs.trash.interval (when set to 0). Currently there
seems to be no way to disable this checkpointing. This patch increases
fs.trash.interval from the current value of 30 minutes to 24 hours
so that the test runs never hit this race condition.
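Since fs.trash.interval is expressed in minutes, 24 hours corresponds
to a value of 1440. A quick way to confirm the effective setting:

  # Print the effective trash interval in minutes (1440 == 24 hours).
  hdfs getconf -confKey fs.trash.interval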
Change-Id: I42fcaee70a461712f1df6bac23c71f915718b015
Reviewed-on: http://gerrit.cloudera.org:8080/1703
Reviewed-by: Bharath Vissapragada <bharathv@cloudera.com>
Tested-by: Internal Jenkins
The change to the start script for OSX used "find" with the "-perm
+0111" option as an "executables only" filter, but that doesn't work
with newer versions of "find". "-perm +" has been deprecated or removed
(depending on the version) in Linux. I couldn't find an OSX+Linux
compatible filter.
The variable IS_OSX was added and used to choose the appropriate filter.
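A sketch of the resulting branch (the Linux-side filter shown is an
assumption):

  # Sketch: pick an "executables only" find filter per platform.
  if [ "${IS_OSX:-false}" = "true" ]; then
    FIND_EXEC_FILTER="-perm +0111"   # BSD find on OSX
  else
    FIND_EXEC_FILTER="-executable"   # GNU find on Linux
  fi
  # Intentionally unquoted so it expands to separate find arguments.
  find . -type f $FIND_EXEC_FILTER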
Change-Id: I0c49f78e816147c820ec539cfc398fb77b83307a
Reviewed-on: http://gerrit.cloudera.org:8080/1630
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
Until now, our YARN configuration was broken so that we weren't able to
run local MapReduce jobs. The jobs would fail with a class-not-found
exception for the LZO codec. This patch fixes this issue and corrects
the classpath.
Change-Id: I689cca7a079dbd269d4bd96f1b4e3d91147d527c
Reviewed-on: http://gerrit.cloudera.org:8080/1667
Reviewed-by: Martin Grund <mgrund@cloudera.com>
Tested-by: Internal Jenkins
Changes:
1) Consistently use "set -euo pipefail".
2) When an error happens, print the file and line (see the sketch
after this list).
3) Consolidated some of the kill scripts.
4) Added better error messages to the load data script.
5) Changed uses of #!/bin/sh to #!/bin/bash.
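A minimal sketch of conventions 1 and 2 (the exact trap message is an
assumption):

  #!/bin/bash
  # Fail fast on errors, unset variables, and pipeline failures.
  set -euo pipefail
  # On any error, report the failing file and line before exiting.
  trap 'echo "Error in ${BASH_SOURCE[0]} at line ${LINENO}" >&2' ERR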
Change-Id: I14fef66c46c1b4461859382ba3fd0dee0fbcdce1
Reviewed-on: http://gerrit.cloudera.org:8080/1620
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
This is for compatibility with docker containers. Before this patch,
when the scripts were run on the docker host, they would try
to kill the mini-cluster processes in the docker containers and fail
because they didn't have permission (the user is different). Now the
scripts will only try to kill mini-cluster processes that were started
by the current user.
Also, some psutil availability checks were removed because psutil is now
provided by the Python virtualenv.
Change-Id: Ida371797bbaffd0a3bd84ab353cb9f466ca510fd
Reviewed-on: http://gerrit.cloudera.org:8080/1541
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
admin was using the -executable flag of find, which is not available on
Mac. This patch replaces it with "-perm +0111 -type f", which has
similar semantics. In addition, there seem to be differences in which
shell builtins are available, so some changes have been made to fix
that issue.
Change-Id: I9b2ecbd5bf6a9b1610e7ca9f15b1a4d1407b94c1
Reviewed-on: http://gerrit.cloudera.org:8080/1612
Reviewed-by: Casey Ching <casey@cloudera.com>
Readability: Martin Grund <mgrund@cloudera.com>
Tested-by: Internal Jenkins
This commit adds a PURGE option to the DROP TABLE and ALTER TABLE DROP
PARTITION statements. The usage is as follows:
1. DROP TABLE <tablename> takes an optional argument PURGE. Adding
PURGE purges the table data by skipping the trash, if configured.
DROP TABLE [IF EXISTS] [<database>.]<tablename> [PURGE]
2. PURGE is also supported with the ALTER TABLE DROP PARTITION
statement, with the following syntax. If specified, Impala purges the
partition data by skipping the trash.
ALTER TABLE [<database>.]<tablename> DROP [IF EXISTS] PARTITION (<partition_spec>) [PURGE]
This patch also helps the use case where the trash and the data
directories are in different encryption zones, in which case we cannot
move the data during ALTER/DROP. The PURGE option can then be used to
skip the trash and make sure the data is actually deleted.
Change-Id: I64bf71d660b719896c32e0f3a7ab768f30ec7b3b
(cherry picked from commit 585d4f8d9e809f3bf194018dd161a22d3f144270)
Reviewed-on: http://gerrit.cloudera.org:8080/1244
Reviewed-by: Juan Yu <jyu@cloudera.com>
Tested-by: Internal Jenkins
We were using the default data directory to store the data
used for Impala tests, without ever formatting it. This is contrary
to how the other Impala data sources behave, i.e. when "--format" is
passed to build-all.sh, only Kudu wouldn't be formatted.
This also moves Kudu's data directory inside the Impala directory
structure, where it's easier to account for it.
Change-Id: Iae2870df0e625de07a761687e75999ef30f2be06
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/7055
Tested-by: jenkins
Reviewed-by: Martin Grund <mgrund@cloudera.com>
This patch enables running Impala tests against Isilon as the default
file system. The intention is to run tests against a realistic
deployment, i.e., Isilon replacing HDFS as the underlying filesystem.
Specifically, it does the following:
- Adds a new environment variable DEFAULT_FS, which points to HDFS by default.
- Makes the fs.defaultFS property in core-site.xml use the DEFAULT_FS environment
variable, such that all clients talk to Isilon implicitly.
- Unsets FILESYSTEM_PREFIX when the TARGET_FILESYSTEM is Isilon, since path prefixes
are no longer needed.
- Only starts the Hive Metastore and the Impala service stack when running
tests against Isilon.
We don't start KMS/HBase because they're not relevant to Isilon. We also don't
start YARN, Hive and Llama because Hive queries are disabled with Isilon.
The scripts that start/stop Hive, YARN and Llama should be modified to point to a
filesystem other than HDFS in the future.
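A hedged example of pointing the minicluster at Isilon via the new
variable (hostname and port are illustrative):

  # DEFAULT_FS feeds fs.defaultFS in core-site.xml.
  export TARGET_FILESYSTEM=isilon
  export DEFAULT_FS=hdfs://isilon-host.example.com:8020
  unset FILESYSTEM_PREFIX   # path prefixes are not needed against Isilon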
Change-Id: Id66bfb160fe57f66a64a089b465b536c6c514b63
Reviewed-on: http://gerrit.cloudera.org:8080/449
Reviewed-by: Ishaan Joshi <ishaan@cloudera.com>
Tested-by: Internal Jenkins
Update the yarn-site.xml to reduce the latency of
resource acquisition.
Also changes the log4j properties to reduce the very
verbose logging of the Hadoop daemons, which was consuming
huge amounts of space very quickly.
Change-Id: I8532fb5125b604974e26ddad76aee93b9c4e64fb
Reviewed-on: http://gerrit.cloudera.org:8080/381
Reviewed-by: Matthew Jacobs <mj@cloudera.com>
Tested-by: Internal Jenkins
This patch enables the Impala test suite to run the end-to-end tests
against an Isilon namenode. There are a few caveats:
- The FE tests will currently not work.
- Only loading data from both the test-warehouse snapshot and the
metadata snapshot is supported.
- The test suite cannot be run by multiple people (unless we have access
to multiple Isilon namenodes).
Change-Id: I786b4e4f51b99e79ad42abc676f537ebfc189237
Reviewed-on: http://gerrit.cloudera.org:8080/356
Reviewed-by: Ishaan Joshi <ishaan@cloudera.com>
Tested-by: Internal Jenkins
The templates for starting the services of the cluster had a bad
shebang declaration that made it impossible to start KMS when
using a non-bash default shell.
Change-Id: I6b105b328dc61e71095c2d5e5d6859f65ca56a18
Reviewed-on: http://gerrit.cloudera.org:8080/293
Reviewed-by: Martin Grund <mgrund@cloudera.com>
Tested-by: Internal Jenkins
First change for IMPALA-1209 to address Impala limitations when
using HDFS encryption. This adds a KMS process to the testdata
cluster. This was tested manually by creating a key and an
encryption zone.
Change-Id: I499154506386f04e71c5371b128c10868b1e1318
Reviewed-on: http://gerrit.cloudera.org:8080/41
Reviewed-by: Matthew Jacobs <mj@cloudera.com>
Tested-by: Internal Jenkins
This patch enables loading data to S3 instead of HDFS. It is
preliminary in nature; as such, there are a few caveats:
- The FE tests do not work.
- Only loading from a test-warehouse snapshot and metastore snapshot is enabled.
- Until Hive works with S3, only a subset of all the tests will work.
Change-Id: Ia66a5f836b4245e3b022a49de805eec337a51324
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/5851
Reviewed-by: Ishaan Joshi <ishaan@cloudera.com>
Tested-by: jenkins