This is because the default HBase/Zookeeper client timeouts are meant to be very long:
* If both Zookeeper and HBase are down, it will wait:
hbase.client.retries.number (default 10) x hbase.rpc.timeout (default 60s)
* If ZK is up but HBase is down, and HBase has run at least once, then it waits:
hbase.client.retries.number * some exponential backoff time (the default sleep time is 1 second,
and the backoff table looks like this: 1, 1, 1, 2, 2, 4, 4, 8, 16, 32, 64).
In my experiments, it takes ~20-25 minutes before the table loading fails if HBase is
down. If there are many HBase tables, this can block all loading threads.
The fix is to change the default timeout values to fail faster. These values were suggested
by someone from the HBase team. With these values we will fail in ~1 minute. I am working with the CM team to get the defaults changed there as well.
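As a rough Java sketch of the kind of override involved (the values below are
illustrative only, not necessarily the exact defaults chosen by this patch), the
relevant HBase client settings can be tightened like this:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class FastFailHBaseConf {
      // Illustrative values; the worst-case wait is roughly retries x rpc timeout.
      public static Configuration create() {
        Configuration conf = HBaseConfiguration.create();
        conf.setInt("hbase.client.retries.number", 3);  // default 10
        conf.setInt("hbase.rpc.timeout", 10000);        // per-RPC timeout, in ms
        conf.setInt("hbase.client.pause", 500);         // base sleep (ms) for the backoff table
        conf.setInt("zookeeper.recovery.retry", 1);     // give up on a dead ZK quorum sooner
        return conf;
      }
    }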
Change-Id: I625e35af57374c72d50d03372d177624ce67694a
Reviewed-on: http://gerrit.ent.cloudera.com:8080/1903
Reviewed-by: Nong Li <nong@cloudera.com>
Reviewed-by: Lenni Kuff <lskuff@cloudera.com>
Tested-by: Lenni Kuff <lskuff@cloudera.com>
(cherry picked from commit dcbd4db64a0d764f5caf06ba87c9b90ab643f0d7)
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2884
The SHOW DATA SOURCE tests were run as part of the other SHOW * tests
in test_show(), but the setup/cleanup for data sources can't be run
in parallel. This change moves the SHOW DATA SOURCE tests into a separate
test method whose setup/cleanup code runs only for that test (i.e. it does
not use setup_method() and teardown_method()). The test is then executed
only serially.
Change-Id: I221145f49cfe7290e132c6a87a5295b747c1fcc7
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2864
Reviewed-by: Matthew Jacobs <mj@cloudera.com>
Tested-by: jenkins
(cherry picked from commit 5bcd769eae3a694d7f6f42d093f9197e8a4e8b77)
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2870
This commit contains the final set of changes for improving the
performance of partition pruning. For each HdfsTable, we materialize a
set of partition value metadata that allows the efficient evaluation of
simple predicates on partition attributes without invoking the BE. These
changes result in a performance improvement of three orders of magnitude
during partition pruning.
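The sketch below uses hypothetical names (not Impala's actual classes) to illustrate
the idea: each partition column keeps an index from partition-key value to partition
IDs, so a simple equality predicate prunes partitions with a hash lookup in the FE
instead of a round trip to the BE.

    import java.util.*;

    public class PartitionValueIndex {
      // Partition-key value -> IDs of the partitions holding that value.
      private final Map<String, Set<Long>> valueToPartitionIds = new HashMap<>();

      public void add(String value, long partitionId) {
        valueToPartitionIds.computeIfAbsent(value, k -> new HashSet<>()).add(partitionId);
      }

      // Evaluate a simple predicate like "p = <literal>" entirely in the FE.
      public Set<Long> matchEquals(String literal) {
        return valueToPartitionIds.getOrDefault(literal, Collections.emptySet());
      }
    }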
Change-Id: I5b405f0f45a470f2ba7b2191e0d46632c354d5ae
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2700
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Tested-by: jenkins
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2823
TSSLSocket should not be "opened" if it's used by the server. See TSSLSocket::open().
Therefore, in TSaslTransport::open(), we should not open the underlying transport if it's a server.
I've tested this manually with LDAP and LDAP+SSL, but we don't have functional tests for LDAP yet.
Change-Id: Ifee4957c6a874df47760d33ab50aa90eb7eda617
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2718
Reviewed-by: Alan Choi <alan@cloudera.com>
Tested-by: jenkins
(cherry picked from commit 47831dbe40da8db7503f42cbde1426a498ac68fd)
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2813
Reviewed-by: Henry Robinson <henry@cloudera.com>
This change adds support for authorizing based on policy metadata read from the Sentry
Service. Authorization is role based and roles are granted to user groups. Each role
can have zero or more privileges associated with it, granting fine-grained access to
specific catalog objects at server, URI, database, or table scope. This patch only
adds support for authorizing against metadata read from the Sentry Policy Service; it
does not add support for GRANT/REVOKE statements in Impala.
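Conceptually, an access check walks from the user's groups to their roles and then to
the privileges those roles carry. A hypothetical sketch (illustrative names only, not
Impala's or Sentry's actual classes):

    import java.util.*;

    public class RoleBasedAuthzSketch {
      static class Privilege {
        final String scope;   // "SERVER", "URI", "DATABASE", or "TABLE"
        final String object;  // e.g. "functional.alltypes"
        Privilege(String scope, String object) { this.scope = scope; this.object = object; }
      }

      private final Map<String, Set<String>> groupToRoles = new HashMap<>();
      private final Map<String, List<Privilege>> roleToPrivileges = new HashMap<>();

      boolean hasAccess(Set<String> userGroups, String scope, String object) {
        for (String group : userGroups) {
          for (String role : groupToRoles.getOrDefault(group, Collections.emptySet())) {
            for (Privilege p : roleToPrivileges.getOrDefault(role, Collections.emptyList())) {
              if (p.scope.equals(scope) && p.object.equalsIgnoreCase(object)) return true;
            }
          }
        }
        return false;
      }
    }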
The authorization metadata is read by the catalog server from the Sentry Service and
propagated to all nodes in the cluster in the "catalog-update" statestore topic. To
enable the Catalog Server to read policy metadata, the --sentry_config flag must be
set to a valid sentry-site.xml config file.
On the impalad side, we continue to support authorization based on a file-based provider.
To enable file-based authorization, set the --authorization_policy_file flag to a
non-empty value. If --authorization_policy_file is not set, authorization will be done
based on cached policy metadata received from the Catalog Server (via the statestore).
TODO: There are still some issues with the Sentry Service that require disabling some of
the authorization tests and adding some workarounds. I have added comments in the code
where these workarounds are needed.
Change-Id: I3765748d2cdbe00f59eefa3c971558efede38eb1
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2552
Reviewed-by: Lenni Kuff <lskuff@cloudera.com>
Tested-by: Lenni Kuff <lskuff@cloudera.com>
The HLL update and merge functions are cross-compiled and called in
the codegen'd UpdateSlot() function. (The UdfContext functions are
also cross-compiled so they can be inlined.) This speeds up NDV
calculation 2-3x.
Change-Id: Ia0de5e231e4520097ee1a4df8a3dfda5b1843738
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2732
Reviewed-by: Skye Wanderman-Milne <skye@cloudera.com>
Tested-by: jenkins
(cherry picked from commit 9f8113403d70a053b088a014976e513765f374a7)
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2776
Floats/doubles are lossy, so using them as the default literal type
is problematic.
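A quick Java illustration of the lossiness (standard floating-point behavior, not code
from this patch):

    public class FloatLiteralLoss {
      public static void main(String[] args) {
        System.out.println(0.1 + 0.2);           // prints 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3);    // prints false
        // Large integers also lose precision when routed through double:
        long big = 1234567890123456789L;
        System.out.println((long) (double) big); // prints 1234567890123456768
      }
    }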
Change-Id: I5a619dd931d576e2e6cd7774139e9bafb9452db9
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2758
Reviewed-by: Nong Li <nong@cloudera.com>
Tested-by: jenkins
Not quite sure what the underlying issue is, but these fixes seem to work.
Change-Id: I759804eb8338ba86969c0214a1e6e35588c94297
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2726
Tested-by: jenkins
Reviewed-by: Nong Li <nong@cloudera.com>
This optimization is generally not safe since the probe side is still streaming. The
join node could acquire all of the data from the child into its own pool, but then
there's no real point in doing this (it doesn't lead to a lower memory footprint and
just makes the memory accounting harder to reason about).
This issue is exposed in bushy plans.
Change-Id: I37b0f6507dc67c79e5ebe8b9242ec86f28ddad41
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2747
Reviewed-by: Nong Li <nong@cloudera.com>
Tested-by: jenkins
Instead of doing the initialization in Catalog.java, there is now a special
BuiltinsDb that handles this initialization. This makes it clearer which file
should be modified when adding a new builtin and also cuts down the code in the Catalog.
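A minimal sketch of the pattern (illustrative names and signatures, not the actual
BuiltinsDb code):

    import java.util.*;

    public class BuiltinsDbSketch {
      // All builtin registration lives in this one class; adding a new builtin
      // means touching only this file.
      private final Map<String, List<String>> functionsByName = new HashMap<>();

      public BuiltinsDbSketch() {
        register("abs", "abs(BIGINT) RETURNS BIGINT");
        register("concat", "concat(STRING, STRING) RETURNS STRING");
      }

      private void register(String name, String signature) {
        functionsByName.computeIfAbsent(name, k -> new ArrayList<>()).add(signature);
      }
    }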
Change-Id: I4512abff6e8c7f4924701aeffe10e656104a0b86
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2567
Reviewed-by: Lenni Kuff <lskuff@cloudera.com>
Tested-by: jenkins
(cherry picked from commit 761a8728de309a20c077913aa154c6259d29d1e8)
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2644
Currently, we coalesce the results and do not properly catch a failure if one of the
threads has a failed query and exit_on_error is set to True. This patch ensures that we
exit before the next query is run.
Change-Id: Ie650e0f547874386c79c78982ea9916f33e18cda
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2654
Reviewed-by: Ishaan Joshi <ishaan@cloudera.com>
Tested-by: jenkins
This change adds DDL support for HDFS caching. The DDL allows the user to indicate that
a table or partition should be cached and which pool to cache the data in:
* Create a cached table: CREATE TABLE ... CACHED IN 'poolName'
* Cache a table/partition: ALTER TABLE ... [partitionSpec] SET CACHED IN 'poolName'
* Uncache a table/partition: ALTER TABLE ... [partitionSpec] SET UNCACHED
When a table/partition is marked as cached, a new HDFS caching request is submitted
to cache the location (HDFS path) of the table/partition and the ID of that request
is stored in the table metadata (in the table properties). This is stored as:
'cache_directive_id'='<requestId>'. The cache requests and IDs are managed by HDFS
and persisted across HDFS restarts.
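For reference, the underlying HDFS API this builds on looks roughly like the
following. The wrapper class is hypothetical; the DistributedFileSystem and
CacheDirectiveInfo calls are the public Hadoop caching API.

    import java.io.IOException;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.DistributedFileSystem;
    import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;

    public class HdfsCachingSketch {
      // Submit a directive caching 'path' into 'poolName'; the returned ID is what
      // would be stored in the table properties as 'cache_directive_id'.
      public static long submitCacheRequest(DistributedFileSystem dfs, Path path,
          String poolName) throws IOException {
        CacheDirectiveInfo info = new CacheDirectiveInfo.Builder()
            .setPath(path)
            .setPool(poolName)
            .build();
        return dfs.addCacheDirective(info);
      }

      // Drop the directive when the table or partition is uncached or dropped.
      public static void removeCacheRequest(DistributedFileSystem dfs, long directiveId)
          throws IOException {
        dfs.removeCacheDirective(directiveId);
      }
    }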
When a cached table or partition is dropped, it is important to uncache the data
(drop the associated cache request). For partitioned tables, this means dropping all
cache requests from all cached partitions in the table.
Likewise, if a partitioned table is created as cached, new partitions should be marked
as cached by default.
It is desirable to know which cache pools exist early on (in analysis) so the query
will fail without hitting HDFS/CatalogServer if a non-existent pool is specified. To
support this, a new cache pool catalog object type was introduced. The catalog server
caches the known pools (periodically refreshing the cache) and sends them out
in catalog updates. This allows impalads to perform analysis checks on cache pool
existence without going to HDFS. It would be easy to use this to add basic cache pool
management in the future (ADD/DROP/SHOW CACHE POOL).
Waiting for the table/partition to become cached may take a long time. Instead of
blocking the user from accessing the table during this period, we will wait for the
cache requests to complete in the background, and once they have finished the table
metadata will be automatically refreshed.
Change-Id: I1de9c6e25b2a3bdc09edebda5510206eda3dd89b
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2310
Reviewed-by: Lenni Kuff <lskuff@cloudera.com>
Tested-by: jenkins
All other CDH components use slf4j version 1.7.5; Impala's use of an earlier version
causes a lot of benign warnings. This patch changes Impala's version to be the same
as the rest of the stack.
Change-Id: I297903d146c6b7642de5b6fa4eefa28a6a08fafe
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2541
Reviewed-by: Ishaan Joshi <ishaan@cloudera.com>
Tested-by: jenkins
This is needed by upcoming commits so we can #include anyval-util.h in
cross-compiled functions without pulling in all of LLVM in our IR
module (pulling in LLVM causes build problems, and it would make our module much
larger than necessary).
Change-Id: I756d5a95e5c254403d9ad5684fe27cf96f3aec1e
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2677
Reviewed-by: Nong Li <nong@cloudera.com>
Tested-by: jenkins
(cherry picked from commit ebc328e0225d7e6204d5bc8d0c0eaa2f3c6456cf)
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2688
Reviewed-by: Skye Wanderman-Milne <skye@cloudera.com>
This commit is the first step in improving the performance of partition
pruning. Currently, Impala can prune approximately 10K partitions per
sec, thereby introducing significant overhead for huge tables with a
large number of partitions. With this commit we reduce that overhead by
3X by batching the partition pruning calls to the backend.
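A hypothetical sketch of the batching pattern; the NativeEvaluator interface stands in
for Impala's actual FE/BE bridge and is not a real class.

    import java.util.*;

    interface NativeEvaluator {
      // Evaluates each bound predicate and returns one result per entry.
      List<Boolean> evalBatch(List<String> boundPredicates);
    }

    public class BatchedPruner {
      // One call to the backend for the whole batch instead of one call per partition.
      public static List<Long> prune(Map<Long, String> partitionIdToBoundPredicate,
          NativeEvaluator evaluator) {
        List<Long> ids = new ArrayList<>(partitionIdToBoundPredicate.keySet());
        List<String> predicates = new ArrayList<>();
        for (Long id : ids) predicates.add(partitionIdToBoundPredicate.get(id));
        List<Boolean> results = evaluator.evalBatch(predicates);
        List<Long> surviving = new ArrayList<>();
        for (int i = 0; i < ids.size(); ++i) {
          if (results.get(i)) surviving.add(ids.get(i));
        }
        return surviving;
      }
    }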
Change-Id: I3303bfc7fb6fe014790f58a5263adeea94d0fe7d
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2608
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Tested-by: jenkins
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2687
This commit fixes issue CDH-19292, where querying an HBase table takes
a significant amount of time if HBase has a large (>1K) number of
tables. The performance bottleneck is the call to HBase to retrieve the
table's metadata (column families) during the computation of row stats.
To resolve this issue we cache the column families in the catalog object
associated with an HBase table, so the expensive call to HBase only
happens the first time the table is queried. Subsequent calls use the
cached data.
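The caching pattern looks roughly like this. The wrapper class is hypothetical; the
HTable and HColumnDescriptor calls are the standard (pre-1.0) HBase client API.

    import java.io.IOException;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.client.HTable;

    public class CachedHBaseTableSketch {
      private final HTable hTable;
      private HColumnDescriptor[] columnFamilies;  // populated on first use

      public CachedHBaseTableSketch(HTable hTable) { this.hTable = hTable; }

      public synchronized HColumnDescriptor[] getColumnFamilies() throws IOException {
        if (columnFamilies == null) {
          // Expensive round trip to HBase, paid only on the first query of the table.
          columnFamilies = hTable.getTableDescriptor().getColumnFamilies();
        }
        return columnFamilies;
      }
    }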
Change-Id: I0203fee3e73d2a4304530fe0a1ba2cf163f39350
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2672
Reviewed-by: Dimitris Tsirogiannis <dtsirogiannis@cloudera.com>
Reviewed-by: Lenni Kuff <lskuff@cloudera.com>
Tested-by: jenkins
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2679
The creation of hive metastore clients is now synchronised to avoid the possibility of
race conditions accessing local Kerberos state. In experiments, this does not fully
resolve the issue but significantly reduces the chances it will occur.
Also adds a new debug config option to optionally sleep for a specified number of
milliseconds between creating connections. The default is zero; the delay can be
configured by setting impala.catalog.metastore.cnxn.creation.delay.ms in hive-site.xml.
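A minimal sketch of the serialization, assuming a hypothetical factory wrapper around
HiveMetaStoreClient:

    import org.apache.hadoop.hive.conf.HiveConf;
    import org.apache.hadoop.hive.metastore.HiveMetaStoreClient;
    import org.apache.hadoop.hive.metastore.api.MetaException;

    public class MetaStoreClientFactory {
      private static final Object CREATION_LOCK = new Object();

      public static HiveMetaStoreClient create(HiveConf conf)
          throws MetaException, InterruptedException {
        // Only one client is constructed at a time, avoiding races on the
        // process-local Kerberos state.
        synchronized (CREATION_LOCK) {
          int delayMs = conf.getInt("impala.catalog.metastore.cnxn.creation.delay.ms", 0);
          if (delayMs > 0) Thread.sleep(delayMs);  // optional debug pacing
          return new HiveMetaStoreClient(conf);
        }
      }
    }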
Change-Id: I83e863760470bdc2d9b27c6669f35604111d69d7
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2661
Reviewed-by: Marcel Kornacker <marcel@cloudera.com>
Tested-by: jenkins
(cherry picked from commit b0b486028ce46be26967aa202a4b1acea22d9311)
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2665
Reviewed-by: Lenni Kuff <lskuff@cloudera.com>
These are used as our internal representation of the authorization policy metadata
(as opposed to directly using the Sentry Thrift structs). Versioned/managed in the
same way as other TCatalogObjects.
Change-Id: Ia1ed9bd4e25e9072849edebcae7c2d3a7aed660d
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2545
Reviewed-by: Lenni Kuff <lskuff@cloudera.com>
Tested-by: jenkins
(cherry picked from commit c89431775fcca19cdbeddba635b83fd121d39b04)
Reviewed-on: http://gerrit.ent.cloudera.com:8080/2646