IMPALA-4553 made the cluster scripts require ntp-wait to succeed
before starting Kudu, assuming ntp-wait was installed, in order to
prevent a litany of errors on EC2 about unsynchronized clocks. This
patch disables that waiting if no
internet connection is detected in order to make it possible to start
the minicluster when offline.
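For illustration only, the conditional wait amounts to something like
the following sketch (the probe host, timeout, and messages are
hypothetical, not the patch's exact logic):

  # Skip waiting for NTP sync when no internet connection is available.
  if ping -c 1 -W 1 ntp.ubuntu.com > /dev/null 2>&1; then
    ntp-wait || echo "ntp-wait failed; Kudu may report clock errors"
  else
    echo "No internet connection detected; skipping ntp-wait"
  fi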
Change-Id: Ifbb5babebb0ca6d2553be1b001e20e2270e052b6
Reviewed-on: http://gerrit.cloudera.org:8080/5412
Reviewed-by: Jim Apple <jbapple-impala@apache.org>
Tested-by: Impala Public Jenkins
When ntpd is not synchronized, Kudu initialization fails on the master
node:
F1129 16:37:28.969956 15230 master_main.cc:68] Check failed:
_s.ok() Bad status: Service unavailable: Cannot initialize clock:
Error reading clock. Clock considered unsynchronized
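To check synchronization status by hand before starting the cluster,
something like the following works on most Linux systems (ntpstat
exits 0 only when the clock is synchronized):

  ntpstat && echo "clock synchronized" || echo "clock unsynchronized"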
Change-Id: I371e01e21246a8c0ece98ca7d4bf6761615127b4
Reviewed-on: http://gerrit.cloudera.org:8080/5258
Reviewed-by: Jim Apple <jbapple-impala@apache.org>
Tested-by: Impala Public Jenkins
In our IPMC vote to release 2.7.0 rc3, Justin Mclean pointed out a
number of ASF policy compliance issues. He asked:
1. "Please place build instruction and supported platforms in the
README. The wiki may change over time and that may make it difficult
to build older versions."
2. Remove binary file llvm-ir/test-loop.bc
3. Add be/src/gutil/valgrind.h,
shell/ext-py/sqlparse-0.1.14/sqlparse/pipeline.py and
cmake_modules/FindJNI.cmake, normalize.css (embedded in bootstrap.css)
to LICENSE.txt
4. Fix be/src/thirdparty/squeasel/squeasel* in LICENSE.txt
5. Remove outdated copyright lines from HBase (see
https://issues.apache.org/jira/browse/HBASE-3870)
6. Remove duplicate jquery notice from LICENSE.txt
Change-Id: I30ff77d7ac28ce67511c200764fba19ae69922e0
Reviewed-on: http://gerrit.cloudera.org:8080/4582
Reviewed-by: Jim Apple <jbapple@cloudera.com>
Tested-by: Internal Jenkins
This change removes some of the occurrences of the strings 'CDH'/'cdh'
from the Impala repository. References to Cloudera-internal Jiras have
been replaced with upstream Jira issues on issues.cloudera.org.
For several categories of occurrences (e.g. pom.xml files,
DOWNLOAD_CDH_COMPONENTS) I also created a list of follow-up Jiras to
remove the occurrences left after this change.
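For illustration, the remaining occurrences can be listed with a
case-insensitive grep:

  git grep -il cdh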
Change-Id: Icb37e2ef0cd9fa0e581d359c5dd3db7812b7b2c8
Reviewed-on: http://gerrit.cloudera.org:8080/4187
Reviewed-by: Jim Apple <jbapple@cloudera.com>
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
Alas, poor Llama! I knew him, Impala: a system
of infinite jest, of most excellent fancy: we hath
borne him on our back a thousand times; and now, how
abhorred in my imagination it is!
Done:
* Removed QueryResourceMgr, ResourceBroker, CGroupsMgr
* Removed untested 'offline' mode and NM failure detection from
ImpalaServer
* Removed all Llama-related Thrift files
* Removed RM-related arguments to MemTracker constructors
* Deprecated all RM-related flags, printing a warning if enable_rm is
set
* Removed expansion logic from MemTracker
* Removed VCore logic from QuerySchedule
* Removed all reservation-related logic from Scheduler
* Removed RM metric descriptions
* Various misc. small class changes
Not done:
* Remove RM flags (--enable_rm etc.)
* Remove RM query options
* Changes to RequestPoolService (see IMPALA-4159)
* Remove estimates of VCores / memory from plan
Change-Id: Icfb14209e31f6608bb7b8a33789e00411a6447ef
Reviewed-on: http://gerrit.cloudera.org:8080/4445
Tested-by: Internal Jenkins
Reviewed-by: Henry Robinson <henry@cloudera.com>
For files that have a Cloudera copyright (and no other copyright
notice), make changes to follow the ASF source file header policy here:
http://www.apache.org/legal/src-headers.html#headers
Specifically:
1) Remove the Cloudera copyright.
2) Modify NOTICE.txt according to
http://www.apache.org/legal/src-headers.html#notice
to follow that format and add a line for Cloudera.
3) Replace the existing ASF license text with, or add, the one given
on the website.
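For reference, the header required by that page reads:

  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements.  See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership.  The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing,
  software distributed under the License is distributed on an
  "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
  KIND, either express or implied.  See the License for the
  specific language governing permissions and limitations
  under the License.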
Much of this change was automatically generated via:
git grep -li 'Copyright.*Cloudera' > modified_files.txt
cat modified_files.txt | xargs perl -n -i -e 'print unless m#Copyright.*Cloudera#i;'
cat modified_files.txt | xargs fix_apache_license.py [1]
Some manual fixups were performed following those steps, especially when
license text was completely missing from the file.
[1] https://gist.github.com/anonymous/ff71292094362fc5c594 with minor
modification to ORIG_LICENSE to match Impala's license text.
Change-Id: I2e0bd8420945b953e1b806041bea4d72a3943d86
Reviewed-on: http://gerrit.cloudera.org:8080/3779
Reviewed-by: Dan Hecht <dhecht@cloudera.com>
Tested-by: Internal Jenkins
Currently we don't reset the file read offset if ZCR (zero-copy
read) fails. Due to
this, when we switch to the normal read path, we hit the eosr of
the scan-range even before reading the expected data length. If both
the ReadFromCache() and ReadRange() calls fail without reading any
data, we end up creating a whole list of scan-ranges, each with size
1KB (DEFAULT_READ_PAST_SIZE) assuming we are reading past the scan
range. This causes a huge performance hit. This patch just calls
ScanRange::Close() after the failed cache reads to clean up the
file system state so that the re-reads start from the beginning of
the scan range.
This was hit as a part of debugging IMPALA-3679, where the queries
on 1gb cached data were running ~20x slower compared to non-cached
runs.
Change-Id: I0a9ea19dd8571b01d2cd5b87da1c259219f6297a
Reviewed-on: http://gerrit.cloudera.org:8080/3313
Reviewed-by: Michael Brown <mikeb@cloudera.com>
Tested-by: Bharath Vissapragada <bharathv@cloudera.com>
Both `find -executable` and the Bash "&>>" operator are too new to be
supported on RHEL5. Both have reasonable workarounds, so the
workarounds are used instead.
Note that this may not be the exhaustive list of such "modern"
conventions, but RHEL5 isn't working end-to-end, so we can't identify
all of them in a single commit yet.
Testing:
Before, the RHEL5 build would fail quite early here. Now, data load
succeeds and most of the backend tests successfully run.
Change-Id: I7438bed908d8026327923607238808122212d2d8
Reviewed-on: http://gerrit.cloudera.org:8080/3531
Reviewed-by: David Knupp <dknupp@cloudera.com>
Tested-by: Internal Jenkins
This change updates the toolchain bootstrapping script
to download the CDH components (hadoop, hbase, hive, llama,
llama-minikdc and sentry) from the toolchain S3 bucket to
the toolchain directory if the environment variable
$DOWNLOAD_CDH_COMPONENTS is true. By default, it is false
which means the CDH components in the thirdparty directory
will be used instead.
To build the ASF tree (https://git-wip-us.apache.org/repos/asf?p=incubator-impala.git),
set $DOWNLOAD_CDH_COMPONENTS to true. Currently, the CDH
components in S3 are snapshots from the thirdparty directory
at 688d0efcd38731e8e27a8236dbdca21c8fd571a1. Once the integration
jenkins job (impala-cdh5-trunk-core-integration) is modified
to upload the latest stable builds to the S3 buckets, we can
remove the thirdparty directory and always use the CDH components
in the toolchain directory.
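For example, building against the ASF tree then looks something like
this (the script path is an assumption based on the repo layout):

  export DOWNLOAD_CDH_COMPONENTS=true
  ./bin/bootstrap_toolchain.py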
Note that bootstrap_toolchain.py will not overwrite existing
directories in the toolchain directory. To force a refresh of
components in the toolchain directory, a user should delete the
cached copy in the toolchain directory and execute
bootstrap_toolchain.py again. This behavior allows users to
develop locally without a network connection once the toolchain
has been bootstrapped.
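For example, assuming $IMPALA_TOOLCHAIN points at the toolchain
directory (the component directory name below is hypothetical):

  rm -rf "$IMPALA_TOOLCHAIN/hadoop-<version>"   # drop the cached copy
  ./bin/bootstrap_toolchain.py                  # re-download it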
Change-Id: I16fa79db0005554cc0a116e74775647ba99f8dda
Reviewed-on: http://gerrit.cloudera.org:8080/3333
Reviewed-by: Michael Ho <kwho@cloudera.com>
Tested-by: Internal Jenkins
If PID files of each process in the mini cluster get deleted for some
reason, it should still be possible to kill them because each process is
marked with "-DIBelongToTheMiniCluster". It turns out that the KMS
process was not being marked. This patch fixes this.
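With the marker in place, stray minicluster processes can be cleaned
up even without PID files, along the lines of:

  pkill -u "$USER" -f DIBelongToTheMiniCluster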
Change-Id: I0398dec94be3ae91548d11a79c1d5eec0ad3dadb
Reviewed-on: http://gerrit.cloudera.org:8080/3354
Reviewed-by: Taras Bobrovytsky <tbobrovytsky@cloudera.com>
Tested-by: Taras Bobrovytsky <tbobrovytsky@cloudera.com>
Through empirical analysis, it was determined that setting the maximum
number of connections to S3 to 1500 was optimal for functionality and
performance. The Hadoop default of 15 connections could lead us to
deadlocks, as our Parquet scanner requires multiple concurrent open
connections, proportional to the number of columns that we are
scanning.
Setting it to this high a value does not seem to have any negative
implications.
This has also been found to fix the Error(255): Unknown errors.
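Assuming the s3a connector is in use, the setting corresponds to a
core-site.xml property like the following (the exact file and property
name are an assumption, not quoted from this patch):

  <property>
    <name>fs.s3a.connection.maximum</name>
    <value>1500</value>
  </property>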
Change-Id: Ide6f1326d5155b2e5f4da3a3f23df3f3d40c5a8d
Reviewed-on: http://gerrit.cloudera.org:8080/3114
Reviewed-by: Sailesh Mukil <sailesh@cloudera.com>
Tested-by: Internal Jenkins
This patch implements a new feature to read the auth_to_local
configs from hdfs configuration files, using the parameter
hadoop.security.auth_to_local. This is done by modifying the
User#getShortName() method to use its hdfs equivalent.
This patch includes an end to end authorization test using
sentry where we add specific auth_to_local setting for a certain
user and test if the sentry authorization passes for this user
after applying these rules. Given we don't have tests that run
on a kerberized mini-cluster, this patch adds a hack to load this
configuration even on non-kerberized test runs.
However this feature is disabled by default to preserve the
existing behavior. To enable it,
1. Use kerberos as authentication mechanism (by setting --principal) and
2. Add "--load_auth_to_local_rules=true" to the cluster startup args
Change-Id: I76485b83c14ba26f6fce66e5f83e8014667829e0
Reviewed-on: http://gerrit.cloudera.org:8080/2800
Reviewed-by: Bharath Vissapragada <bharathv@cloudera.com>
Tested-by: Internal Jenkins
/tmp isn't necessarily on the same filesystem as the Kudu data
directory. Fix the check so that it checks the actual Kudu directory.
Change-Id: Ic6aa27569a0650db7dcf5759952cd50c8e47f8c9
Reviewed-on: http://gerrit.cloudera.org:8080/2967
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
This change whitelists the supported filesystems that can be set
as the default FS for Impala to run on.
This patch configures Impala to use S3 as the default filesystem, rather
than a secondary filesystem as before.
Change-Id: I2f45bef6c94ece634045acb906d12591587ccfed
Reviewed-on: http://gerrit.cloudera.org:8080/1121
Reviewed-by: anujphadke <aphadke@cloudera.com>
Tested-by: Internal Jenkins
Changes:
1) Previously when a service would fail, the user would have to find
the log file and open it. Now the end of the log is dumped to stdout.
2) Add start, stop, and restart commands to the "admin" script. For
example now you can run
testdata/cluster/admin restart kudu
3) Wait up to 120 seconds for services to shutdown. The timeout is the
same as for the Impala processes. If the services fail to stop an
error will be raised.
Change-Id: I537ea5656df2081d4f1f27a9f3fcef4547fdc2fe
Reviewed-on: http://gerrit.cloudera.org:8080/2751
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
By default Kudu requires the underlying file system to support hole
punching. If support isn't there, Kudu will fail to start. People using
such a file system can instead start Kudu with -block_manager=file.
Before starting Kudu in the local mini-cluster, the "fallocate"
command will be used to automatically determine if the special flag is
needed.
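A sketch of such a probe (the file name, sizes, and variable are
illustrative, not the script's exact logic):

  dd if=/dev/zero of=/tmp/punch.test bs=4096 count=16 2> /dev/null
  if fallocate --punch-hole --keep-size --offset 0 --length 4096 \
      /tmp/punch.test; then
    EXTRA_KUDU_FLAGS=""                      # hole punching works
  else
    EXTRA_KUDU_FLAGS="-block_manager=file"   # fall back to file manager
  fi
  rm -f /tmp/punch.test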
Note, users who need this must run bin/create-test-configuration.sh
after pulling in this commit.
This also fixes a bug in delete_kudu_data() in the cluster admin
script. A directory name was incorrect.
Change-Id: I1ca7fedb367444c41e462b72b0b76091ee94e27c
Reviewed-on: http://gerrit.cloudera.org:8080/2750
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
The directory structure of the newer Kudu toolchain artifacts has
changed. Now the root directory is split into /release and /debug. A few
little updates are needed to the build and service scripts.
Since the toolchain no longer provides stubs for platforms that Kudu
doesn't support, the stubs need to be generated. This will be done as
part of the toolchain bootstrapping.
Also this upgrades Kudu to 0.8 RC1.
Developers will need to run bin/create-test-configuration.sh after
pulling in this change. Otherwise the Kudu service will fail to start.
Change-Id: I625903bd92afece0ad819a96fc275d5812b5eb2a
Reviewed-on: http://gerrit.cloudera.org:8080/2720
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
The Hive server does not start for S3 builds because HDFS is marked
as an unsupported service in testdata/cluster/admin; HDFS is therefore
not started at all, so the Hive server is unable to start either.
Due to this, all our S3 builds fail.
Currently our S3 builds need HDFS to run correctly.
(This has to be reverted once IMPALA-1850 goes in, because then S3 can
run as a default FS without HDFS)
Change-Id: Ibda9dc3ef895c2aa4d39eb5694ac5f2dbd83bee4
Reviewed-on: http://gerrit.cloudera.org:8080/2741
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
The Kudu team recommended disabling this for testing purposes. This
should help with timeouts in cloud machines (ec2/gce). Disabling
fsyncs could lead to data loss if the system crashed before the OS had a
chance to write the data to disk. Our test setups don't need that level
of reliability.
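This presumably means starting the Kudu daemons with Kudu's
never_fsync flag; as a rough sketch (the flag spelling and any
companion flags are assumptions, not quoted from this patch):

  kudu-master --unlock_unsafe_flags --never_fsync <other flags>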
Change-Id: I72fd85ce5c4bc71f071b854ea6a9ebe60fc1305f
Reviewed-on: http://gerrit.cloudera.org:8080/2734
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
Previously Kudu would only be started when the test configuration was
the standard mini-cluster. That led to failures during data loading when
testing without the mini-cluster (ex: local file system). Kudu doesn't
require any other services so now it'll be started for all test
environments.
Change-Id: I92643ca6ef1acdbf4d4cd2fa5faf9ac97a3f0865
Reviewed-on: http://gerrit.cloudera.org:8080/2690
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
The stubs in Impala broke during the merge commit. This commit removes
the stubs in hopes of improving robustness of the build. The original
problem (Kudu clients are only available for some OSs) is now addressed
by moving the stubbing into a dummy Kudu client. The dummy client only
allows linking to succeed, if any client method is called, Impala will
crash. Before calling any such method, Kudu availability must be
checked.
Change-Id: I4bf1c964faf21722137adc4f7ba7f78654f0f712
Reviewed-on: http://gerrit.cloudera.org:8080/2585
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
All logs, test results and SQL files generated during data
loading and testing are now consolidated under a single new
directory $IMPALA_HOME/logs. The goal is to simplify archiving
in Jenkins runs and debugging.
The new structure is as follows:
$IMPALA_HOME/logs/cluster
- logs of Hadoop components and Impala
$IMPALA_HOME/logs/data_loading
- logs and SQL files produced in data loading
$IMPALA_HOME/logs/fe_tests
- logs and test output of Frontend unit tests
$IMPALA_HOME/logs/be_tests
- logs and test output of Backend unit tests
$IMPALA_HOME/logs/ee_tests
- logs and test output of end-to-end tests
$IMPALA_HOME/logs/custom_cluster_tests
- logs and test output of custom cluster tests
I tested this change with a full data load which
was successful.
Change-Id: Ief1f58f3320ec39d31b3c6bc6ef87f58ff7dfdfa
Reviewed-on: http://gerrit.cloudera.org:8080/2456
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: Internal Jenkins
This is for review purposes only. This patch will be merged with David's
big merge patch.
Changes:
1) Make Kudu compilation dependent on the OS since not all OSs support
Kudu.
2) Only run Kudu related tests when Kudu is supported (see #1).
3) Look for Kudu locally, but in a different location. To use a local
build of Kudu, set KUDU_BUILD_DIR to the path Kudu was built in and
set KUDU_CLIENT_DIR to the path Kudu was installed in.
Example:
git clone https://github.com/cloudera/kudu.git
...build 3rd party etc...
mkdir -p $KUDU_BUILD_DIR
cd $KUDU_BUILD_DIR
cmake <path to Kudu source dir>
make
DESTDIR=$KUDU_CLIENT_DIR make install
4) Look for Kudu in the toolchain if not using a local Kudu build.
5) Add Kudu service startup scripts. The Kudu in the toolchain is
actually a parcel that has been renamed (the contents were not
modified in any way), which means the Kudu service binaries are there.
Those binaries are now used to run the Kudu service.
Change-Id: I3db88cbd27f2ea2394f011bc8d1face37411ed58
This merges the 'feature/kudu' branch with cdh5-trunk as of commit:
055500cc753f87f6d1c70627321fcc825044e183
This patch is not a pure merge patch, in the sense that it goes beyond conflict
resolution to also address reviews of the 'feature/kudu' branch as a whole.
The review items and their resolution can be inspected at:
http://gerrit.cloudera.org:8080/#/c/1403/
Change-Id: I6dd4270cd17a4f5c02811c343726db3504275a92
The major changes are:
1) Collect backtrace and fatal log on crash.
2) Poll memory usage. The data is only displayed at this time.
3) Support Kerberos.
4) Add random queries.
5) Generate random and TPC-H nested data on a remote cluster. The
random data generator was converted to use MR for scaling.
6) Add a cluster abstraction to run data loading for #5 on a
remote or local cluster. This also moves and consolidates some
Cloudera Manager utilities that were in the stress test.
7) Cleanup the wrappers around impyla. That stuff was getting
messy.
Change-Id: I4e4b72dbee1c867626a0b22291dd6462819e35d7
Reviewed-on: http://gerrit.cloudera.org:8080/1298
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
Some of the tests rely on the HDFS trash mechanism being enabled and
poll the paths in the trash directory during test runs. These tests are
failing intermittently due to a race with the HDFS trash checkpointing
mechanism, which moves all the trash contents to another directory.
This checkpointing runs every fs.trash.checkpoint.interval minutes
and defaults to fs.trash.interval (when set to 0). Currently there
seems to be no way to disable this checkpointing. This patch increases
the fs.trash.interval from the current value of 30 minutes to 24 hours
so that the test runs never hit this race condition.
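In core-site.xml terms (the value is in minutes):

  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>  <!-- 24 hours -->
  </property>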
Change-Id: I42fcaee70a461712f1df6bac23c71f915718b015
Reviewed-on: http://gerrit.cloudera.org:8080/1703
Reviewed-by: Bharath Vissapragada <bharathv@cloudera.com>
Tested-by: Internal Jenkins
The change to the start script for OSX used "find" with the "-perm
+0111" option as an "executables only" filter but that doesn't work
with newer versions of "find". "-perm +" has been deprecated or removed
(depending on the version) in Linux. I couldn't find an OSX+Linux
compatible filter.
The variable IS_OSX was added and used to choose the appropriate filter.
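A sketch of the resulting pattern (the Linux-side filter and variable
usage are assumptions):

  if [[ "$IS_OSX" = true ]]; then
    EXEC_FILTER=(-perm +0111 -type f)   # BSD find on OSX
  else
    EXEC_FILTER=(-executable -type f)   # GNU find on Linux
  fi
  find "$DIR" "${EXEC_FILTER[@]}"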
Change-Id: I0c49f78e816147c820ec539cfc398fb77b83307a
Reviewed-on: http://gerrit.cloudera.org:8080/1630
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
Until now, our YARN configuration was broken so that we weren't able to
run local MapReduce jobs. The jobs would fail with a class-not-found
exception for the LZO codec. This patch fixes this issue and corrects
the classpath.
Change-Id: I689cca7a079dbd269d4bd96f1b4e3d91147d527c
Reviewed-on: http://gerrit.cloudera.org:8080/1667
Reviewed-by: Martin Grund <mgrund@cloudera.com>
Tested-by: Internal Jenkins
Changes:
1) Consistently use "set -euo pipefail".
2) When an error happens, print the file and line.
3) Consolidated some of the kill scripts.
4) Added better error messages to the load data script.
5) Changed #!/bin/sh shebangs to #!/bin/bash.
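As a sketch, the shared preamble from points 1, 2 and 5 looks roughly
like this (the trap wording is illustrative):

  #!/bin/bash
  set -euo pipefail
  trap 'echo "Error in $0 at line $LINENO"' ERR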
Change-Id: I14fef66c46c1b4461859382ba3fd0dee0fbcdce1
Reviewed-on: http://gerrit.cloudera.org:8080/1620
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
This is for compatibility with docker containers. Before this patch,
when the scripts were run on the docker host, the scripts would try
to kill the mini-cluster in the docker containers and fail because they
didn't have permissions (the user is different). Now the scripts will
only try to kill mini-cluster processes that were started by the current
user.
Also some psutil availability checks were removed because psutil is now
provided by the python virtualenv.
Change-Id: Ida371797bbaffd0a3bd84ab353cb9f466ca510fd
Reviewed-on: http://gerrit.cloudera.org:8080/1541
Reviewed-by: Casey Ching <casey@cloudera.com>
Tested-by: Internal Jenkins
admin was using the -executable flag of find, which is not available on
Mac. This patch replaces it with "-perm +0111 -type f", which has similar
semantics. In addition, there seem to be differences in which shell
builtins are available, so some changes have been made to fix that issue.
Change-Id: I9b2ecbd5bf6a9b1610e7ca9f15b1a4d1407b94c1
Reviewed-on: http://gerrit.cloudera.org:8080/1612
Reviewed-by: Casey Ching <casey@cloudera.com>
Readability: Martin Grund <mgrund@cloudera.com>
Tested-by: Internal Jenkins
This commit adds a PURGE option to the DROP TABLE and ALTER TABLE DROP
PARTITION statements. Following is the usage:
1. DROP TABLE <tablename> takes an optional argument PURGE. Adding
PURGE purges the table data by skipping the trash, if configured.
DROP TABLE [IF EXISTS] [<database>.]<tablename> [PURGE]
2. PURGE is also supported with the ALTER TABLE DROP PARTITION
statement, with the following syntax. If specified, Impala purges the
partition data by skipping the trash.
ALTER TABLE [<database>.]<tablename> DROP [IF EXISTS] PARTITION (<partition_spec>) [PURGE]
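For example (the table and partition names are hypothetical):

  DROP TABLE IF EXISTS tmp_tbl PURGE;
  ALTER TABLE part_tbl DROP IF EXISTS PARTITION (p=1) PURGE;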
This patch also helps the use case where the trash and the data
directories are in different encryption zones, in which case we cannot
move the data during ALTER/DROP. The PURGE option can then be used to
skip the trash and make sure the data is actually deleted.
Change-Id: I64bf71d660b719896c32e0f3a7ab768f30ec7b3b
(cherry picked from commit 585d4f8d9e809f3bf194018dd161a22d3f144270)
Reviewed-on: http://gerrit.cloudera.org:8080/1244
Reviewed-by: Juan Yu <jyu@cloudera.com>
Tested-by: Internal Jenkins
Previously we were using the default data directory to store the data
used for Impala tests, without ever formatting it. This is contrary
to how the other Impala data sources behave, i.e. when "--format" is
passed to build-all.sh, only Kudu wouldn't be formatted.
This also moves Kudu's data directory inside of the Impala directory
structure, where it's easier to account for it.
Change-Id: Iae2870df0e625de07a761687e75999ef30f2be06
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/7055
Tested-by: jenkins
Reviewed-by: Martin Grund <mgrund@cloudera.com>
This patch enables running Impala tests against Isilon as the default file system. The
intention is to run tests against a realistic deployment, i.e., Isilon replacing HDFS as
the underlying filesystem.
Specifically, it does the following:
- Adds a new environment variable DEFAULT_FS, which points to HDFS by default.
- Makes the fs.defaultFs property in core-site.xml use the DEFAULT_FS environment
variable, such that all clients talk to Isilon implicitly.
- Unsets FILESYSTEM_PREFIX when the TARGET_FILESYSTEM is Isilon, since path prefixes
are no longer needed.
- Only starts the Hive Metastore and the Impala service stack when running
tests against Isilon.
We don't start KMS/HBase because they're not relevant to Isilon. We also don't
start YARN, Hive and Llama because Hive queries are disabled with Isilon.
The scripts that start/stop Hive, YARN and Llama should be modified to point to a
filesystem other than HDFS in the future.
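For example, to run against an Isilon cluster (the host, port, and exact variable
values are hypothetical):

  export TARGET_FILESYSTEM=isilon
  export DEFAULT_FS=hdfs://isilon-host:8020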
Change-Id: Id66bfb160fe57f66a64a089b465b536c6c514b63
Reviewed-on: http://gerrit.cloudera.org:8080/449
Reviewed-by: Ishaan Joshi <ishaan@cloudera.com>
Tested-by: Internal Jenkins
Update the yarn-site.xml to reduce the latency of
resource acquisition.
Also changes the log4j properties to reduce the very
verbose logging from the Hadoop daemons, which was consuming
huge amounts of space very quickly.
Change-Id: I8532fb5125b604974e26ddad76aee93b9c4e64fb
Reviewed-on: http://gerrit.cloudera.org:8080/381
Reviewed-by: Matthew Jacobs <mj@cloudera.com>
Tested-by: Internal Jenkins
This patch enables the Impala test suite to run the end to end tests
against an isilon namenode. There are a few caveats:
- The fe tests currently do not work.
- Only loading data from both the test-warehouse snapshot and the metadata snapshot is
supported.
- The test suite cannot be run by multiple people (unless we have access to multiple
Isilon namenodes).
Change-Id: I786b4e4f51b99e79ad42abc676f537ebfc189237
Reviewed-on: http://gerrit.cloudera.org:8080/356
Reviewed-by: Ishaan Joshi <ishaan@cloudera.com>
Tested-by: Internal Jenkins
The templates for starting the services of the cluster had a bad
shebang declaration that made it impossible to start the KMS when
using a non-bash default shell.
Change-Id: I6b105b328dc61e71095c2d5e5d6859f65ca56a18
Reviewed-on: http://gerrit.cloudera.org:8080/293
Reviewed-by: Martin Grund <mgrund@cloudera.com>
Tested-by: Internal Jenkins
First change for IMPALA-1209 to address Impala limitations when
using HDFS encryption. This adds a KMS process to the testdata
cluster. This was tested manually by creating a key and an
encryption zone.
Change-Id: I499154506386f04e71c5371b128c10868b1e1318
Reviewed-on: http://gerrit.cloudera.org:8080/41
Reviewed-by: Matthew Jacobs <mj@cloudera.com>
Tested-by: Internal Jenkins
This patch enables loading data to S3 instead of HDFS. It is preliminary in nature;
as such, there are a few caveats:
- The fe tests do not work.
- Only loading from a test-warehouse snapshot and metastore snapshot is enabled.
- Until Hive works with S3, only a subset of all the tests will work.
Change-Id: Ia66a5f836b4245e3b022a49de805eec337a51324
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/5851
Reviewed-by: Ishaan Joshi <ishaan@cloudera.com>
Tested-by: jenkins
This will allow you to create tables around data that already exists on
S3 (though INSERT and LOAD DATA don't support S3 yet). Also, this will
make it easier to create some test tables that are not on HDFS.
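For example (the bucket, path, and s3a scheme are illustrative; the scheme
depends on the Hadoop S3 connector in use):

  CREATE EXTERNAL TABLE s3_events (id INT, msg STRING)
  LOCATION 's3a://some-bucket/events/';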
Also, work around HDFS-7031 (which is a "won't fix") where non-defaultFS
paths can be qualified with the wrong authority. This is needed for
Impala now that it can take non-HDFS paths as input.
Change-Id: Ie513d50b26dfe5a71be284ad31a8c8151d0e30d3
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/5417
Reviewed-by: Daniel Hecht <dhecht@cloudera.com>
Tested-by: jenkins
Prior to this work, the impalad could authenticate either with
Kerberos or with LDAP. This fixes that so that both can
co-exist in the same daemon. Prior code had both a
KerberosAuthProvider and an LdapAuthProvider; this is refactored into
a single SaslAuthProvider that potentially contains both LDAP and
Kerberos.
The terminology of "client facing" and "server facing" has been
replaced with "external" and "internal". External is for clients like
the impala shell, odbc, jdbc, etc. Internal is for daemon <-> daemon
communication.
The notion of the "auxprop" plugin is removed, as that was dead code.
The Thrift code is enhanced to pass the Realm information from the
SaslAuthProvider down to the underlying SASL library.
Change-Id: I0a0b968a107c0b25610ca37295c3fee345ecdd6d
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/4051
Reviewed-by: Michael Yoder <myoder@cloudera.com>
Tested-by: jenkins
This is the first iteration of a kerberized development environment.
All the daemons start and use kerberos, with the sole exception of the
Hive metastore. This is sufficient to test Impala authentication.
When buildall.sh is run using '-kerberize', it will stop before
loading data or attempting to run tests.
Loading data into the cluster is known to not work at this time, the
root causes being that Beeline -> HiveServer2 -> MapReduce throws
errors, and Beeline -> HiveServer2 -> HBase has problems. These are
left for later work.
However, the impala daemons will happily authenticate using kerberos
both from clients (like the impala shell) and amongst each other.
This means that if you can get data into the mini-cluster, you can
query it.
Usage:
* Supply a '-kerberize' option to buildall.sh, or
* Supply a '-kerberize' option to create-test-configuration.sh, then
'run-all.sh -format', re-source impala-config.sh, and then start
impala daemons as usual. You must reformat the cluster because
kerberizing it will change all the ownership of all files in HDFS.
Notable changes:
* Added clean start/stop script for the llama-minikdc
* Creation of Kerberized HDFS - namenode and datanodes
* Kerberized HBase (and Zookeeper)
* Kerberized Hive (minus the MetaStore)
* Kerberized Impala
* Loading of data very nearly working
Still to go:
* Kerberize the MetaStore
* Get data loading working
* Run all tests
* The unknown unknowns
* Extensive testing
Change-Id: Iee3f56f6cc28303821fc6a3bf3ca7f5933632160
Reviewed-on: http://gerrit.sjc.cloudera.com:8080/4019
Reviewed-by: Michael Yoder <myoder@cloudera.com>
Tested-by: jenkins
Our testdata/run-all.sh can be brittle depending on the state of your HDFS.
In particular, YARN depends on the NN not being in safe mode, but it may take
some time for the NN to exit safe mode right after starting HDFS.
This patch makes the NN startup script complete only after the NN has exited
safe mode.
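One way to do this with stock tooling is HDFS's built-in blocking
wait (whether the script uses exactly this is an assumption):

  hdfs dfsadmin -safemode wait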
Change-Id: I8b30cd07128dc48d79d91726eafed4174fb91a6d
Reviewed-on: http://gerrit.ent.cloudera.com:8080/3005
Reviewed-by: Alex Behm <alex.behm@cloudera.com>
Tested-by: jenkins
Reviewed-on: http://gerrit.ent.cloudera.com:8080/3021