Vihang Karajgaonkar a89762bc01 IMPALA-8369: Impala should be able to interoperate with Hive 3.1.0
This change adds a compatibility shim in fe so that Impala can
interoperate with Hive 3.1.0. It moves the existing Metastoreshim class
to a compat-hive-2 directory and adds a new Metastoreshim class under a
compat-hive-3 directory. These shim classes implement the methods that
differ between hive-2 and hive-3 and are used by the frontend code. At
build time, based on the environment variable
IMPALA_HIVE_MAJOR_VERSION, one of the two shims is added as a source
directory by the fe/pom.xml build plugin.

Additionally, in order to reduce the footprint of Hive dependencies in
the frontend code, this patch also introduces a new module called
shaded-deps. This module uses the shade plugin to include only the
classes from hive-exec that are needed by the fe code. For the hive-2
build path, no changes are made with respect to Hive dependencies, to
minimize the risk of destabilizing the master branch on the default
build option of using Hive-2.

The different sets of dependencies are activated using Maven profiles.
Each profile is activated automatically based on
IMPALA_HIVE_MAJOR_VERSION, as sketched below.
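
As a rough sketch of what this selection might look like from a developer's
shell (assuming IMPALA_HIVE_MAJOR_VERSION is exported before sourcing
bin/impala-config.sh and that buildall.sh is the usual build entry point;
the exact workflow is defined by the scripts themselves):

```bash
# Hypothetical sketch: build against Hive 3 instead of the default Hive 2.
# The variable drives the Maven profile (compat-hive-2 vs. compat-hive-3
# shim sources) and the shaded-deps module described above.
export IMPALA_HIVE_MAJOR_VERSION=3
source bin/impala-config.sh   # picks up the matching Hive component versions
./buildall.sh                 # fe/pom.xml activates the corresponding profile
```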

Testing:
1. Code compiles and runs against both HMS-3 and HMS-2.
2. Ran the full suite of tests using the private Jenkins job against HMS-2.
3. Running the full tests against HMS-3 will need more work, such as
supporting Tez in the mini-cluster (for data loading) and HMS transaction
support, since HMS-3 creates transactional tables by default. This will be
an ongoing effort, and test failures on Hive-3 will be fixed in additional
sub-tasks.

Notes:
1. The patch uses a custom build of Hive to be deployed in the mini-cluster.
This build has the fixes for HIVE-21596. This hack will be removed once the
patches are available in official CDP Hive builds.
2. Some of the existing tests rely on the fact that certain built-in Hive
functions (UDFLength, UDFHour, UDFYear) implement the UDF interface. These
functions have been moved to the GenericUDF interface in Hive 3. Impala
currently only supports UDFExecutor. In order to have full compatibility
with all the functions in Hive 2.x, we should support GenericUDFs too. That
will be taken up as a separate patch.
3. Sentry dependencies bring in a lot of transitive Hive dependencies. The
patch excludes such dependencies since they create problems when building
against Hive-3. Since these hive-2 dependencies are already included when
building against hive-2, this should not be a problem.

Change-Id: I45a4dadbdfe30a02f722dbd917a49bc182fc6436
Reviewed-on: http://gerrit.cloudera.org:8080/13005
Reviewed-by: Joe McDonnell <joemcdonnell@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
2019-05-01 03:27:43 +00:00

Welcome to Impala

Lightning-fast, distributed SQL queries for petabytes of data stored in Apache Hadoop clusters.

Impala is a modern, massively distributed, massively parallel C++ query engine that lets you analyze, transform, and combine data from a variety of data sources:

  • Best of breed performance and scalability.
  • Support for data stored in HDFS, Apache HBase and Amazon S3.
  • Wide analytic SQL support, including window functions and subqueries.
  • On-the-fly code generation using LLVM to generate CPU-efficient code tailored specifically to each individual query.
  • Support for the most commonly-used Hadoop file formats, including the Apache Parquet project.
  • Apache-licensed, 100% open source.

More about Impala

To learn more about Impala as a business user, or to try Impala live or in a VM, please visit the Impala homepage.

If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.

Supported Platforms

Impala only supports Linux at the moment.

Export Control Notice

This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.

Build Instructions

See bin/bootstrap_build.sh.
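
For instance, a from-scratch build on a supported Linux host might look like
the following; this is a minimal sketch assuming the Apache git mirror and a
machine that bootstrap_build.sh supports, not a substitute for reading the
script itself:

```bash
# Sketch: clone Impala and let the helper script install build requirements
# and run a build (see bin/bootstrap_build.sh for what it actually does).
git clone https://gitbox.apache.org/repos/asf/impala.git
cd impala
./bin/bootstrap_build.sh
```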

Detailed Build Notes

Impala can be built with pre-built components or components downloaded from S3. The components needed to build Impala are Apache Hadoop, Hive, HBase, and Sentry. If you need to manually override the locations or versions of these components, you can do so through the environment variables and scripts listed below.

Scripts and directories
| Location | Purpose |
|----------|---------|
| bin/impala-config.sh | This script must be sourced to set up all environment variables properly to allow other scripts to work. |
| bin/impala-config-local.sh | A script can be created in this location to set local overrides for any environment variables. |
| bin/impala-config-branch.sh | A version of the above that can be checked into a branch for convenience. |
| bin/bootstrap_build.sh | A helper script to bootstrap some of the build requirements. |
| bin/bootstrap_development.sh | A helper script to bootstrap a developer environment. Please read it before using. |
| be/build/ | Impala build output goes here. |
| be/generated-sources/ | Thrift and other generated source will be found here. |
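
For example, a typical way these scripts fit together might look like this
(a sketch, assuming local overrides are read from bin/impala-config-local.sh
when the main config is sourced, as described above):

```bash
# Sketch: record local overrides, then source the main config so that the
# other scripts (bootstrap_*, buildall.sh, etc.) see a consistent environment.
cat > bin/impala-config-local.sh <<'EOF'
export IMPALA_BUILD_THREADS=4          # smaller parallelism for a laptop
export SKIP_TOOLCHAIN_BOOTSTRAP=true   # reuse an already-downloaded toolchain
EOF
source bin/impala-config.sh            # must be sourced, not executed
```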
| Environment variable | Default value | Description |
|-----------------------|---------------|-------------|
| IMPALA_HOME | | Top level Impala directory |
| IMPALA_TOOLCHAIN | "${IMPALA_HOME}/toolchain" | Native toolchain directory (for compilers, libraries, etc.) |
| SKIP_TOOLCHAIN_BOOTSTRAP | "false" | Skips downloading the toolchain and any Python dependencies if "true" |
| CDH_BUILD_NUMBER | | Identifier to indicate the CDH build number |
| CDH_COMPONENTS_HOME | "${IMPALA_HOME}/toolchain/cdh_components-${CDH_BUILD_NUMBER}" | Location of the CDH components within the toolchain. |
| CDH_MAJOR_VERSION | "5" | Identifier used to uniqueify paths for potentially incompatible component builds. |
| IMPALA_CONFIG_SOURCED | "1" | Set by ${IMPALA_HOME}/bin/impala-config.sh (internal use) |
| JAVA_HOME | "/usr/lib/jvm/${JAVA_VERSION}" | Used to locate Java |
| JAVA_VERSION | "java-7-oracle-amd64" | Can override to set a local Java version. |
| JAVA | "${JAVA_HOME}/bin/java" | Java binary location. |
| CLASSPATH | | See bin/set-classpath.sh for details. |
| PYTHONPATH | | Will be changed to include: "${IMPALA_HOME}/shell/gen-py" "${IMPALA_HOME}/testdata" "${THRIFT_HOME}/python/lib/python2.7/site-packages" "${HIVE_HOME}/lib/py" "${IMPALA_HOME}/shell/ext-py/prettytable-0.7.1/dist/prettytable-0.7.1" "${IMPALA_HOME}/shell/ext-py/sasl-0.1.1/dist/sasl-0.1.1-py2.7-linux-x "${IMPALA_HOME}/shell/ext-py/sqlparse-0.1.19/dist/sqlparse-0.1.19-py2 |
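
As a quick sanity check (a sketch, not part of the official scripts), you can
source the config and echo a few of the derived variables listed above:

```bash
# Spot-check the environment after sourcing the config script.
source bin/impala-config.sh
echo "IMPALA_HOME:         ${IMPALA_HOME}"
echo "IMPALA_TOOLCHAIN:    ${IMPALA_TOOLCHAIN}"
echo "CDH_COMPONENTS_HOME: ${CDH_COMPONENTS_HOME}"
echo "JAVA:                ${JAVA}"
```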
Source Directories for Impala
| Environment variable | Default value | Description |
|-----------------------|---------------|-------------|
| IMPALA_BE_DIR | "${IMPALA_HOME}/be" | Backend directory. Build output is also stored here. |
| IMPALA_FE_DIR | "${IMPALA_HOME}/fe" | Frontend directory |
| IMPALA_COMMON_DIR | "${IMPALA_HOME}/common" | Common code (thrift, function registry) |
Various Compilation Settings
| Environment variable | Default value | Description |
|-----------------------|---------------|-------------|
| IMPALA_BUILD_THREADS | "8", or the number of processors by default | Used for make -j and distcc -j settings. |
| IMPALA_MAKE_FLAGS | "" | Any extra settings to pass to make. Also used when copying udfs / udas into HDFS. |
| USE_SYSTEM_GCC | "0" | If set to any other value, directs cmake to not set GCC_ROOT, CMAKE_C_COMPILER, CMAKE_CXX_COMPILER, as well as setting TOOLCHAIN_LINK_FLAGS |
| IMPALA_CXX_COMPILER | "default" | Used by cmake (cmake_modules/toolchain and clang_toolchain.cmake) to select gcc / clang |
| USE_GOLD_LINKER | "true" | Directs backend cmake to use gold. |
| IS_OSX | "false" | (Experimental) currently only used to disable Kudu. |
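
For example, to experiment with the compiler selection and build parallelism
(a hedged sketch; the exact effect of each variable is defined by the cmake
modules referenced above):

```bash
# Sketch: tweak compilation settings in the current shell before building.
export IMPALA_BUILD_THREADS=4      # limit make -j / distcc -j parallelism
export IMPALA_CXX_COMPILER=clang   # ask the toolchain cmake modules for clang
export USE_GOLD_LINKER=true        # keep using the gold linker for the backend
source bin/impala-config.sh
./buildall.sh
```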
Dependencies
| Environment variable | Default value | Description |
|-----------------------|---------------|-------------|
| HADOOP_HOME | "${CDH_COMPONENTS_HOME}/hadoop-${IMPALA_HADOOP_VERSION}/" | Used to locate Hadoop |
| HADOOP_INCLUDE_DIR | "${HADOOP_HOME}/include" | For 'hdfs.h' |
| HADOOP_LIB_DIR | "${HADOOP_HOME}/lib" | For 'libhdfs.a' or 'libhdfs.so' |
| HIVE_HOME | "${CDH_COMPONENTS_HOME}/hive-${IMPALA_HIVE_VERSION}/" | |
| HBASE_HOME | "${CDH_COMPONENTS_HOME}/hbase-${IMPALA_HBASE_VERSION}/" | |
| SENTRY_HOME | "${CDH_COMPONENTS_HOME}/sentry-${IMPALA_SENTRY_VERSION}/" | Used to set up test data |
| THRIFT_HOME | "${IMPALA_TOOLCHAIN}/thrift-${IMPALA_THRIFT_VERSION}" | |
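
If you need to build against components outside the toolchain (for example a
locally built Hive, such as the custom Hive build mentioned in the commit
message above), a hypothetical override in bin/impala-config-local.sh might
look like this; all paths below are examples only:

```bash
# Hypothetical overrides: point the build at locally installed components
# instead of the toolchain-downloaded ones.
export HADOOP_HOME=/opt/dev/hadoop-3.0.0
export HIVE_HOME=/opt/dev/apache-hive-3.1.0-bin
export HBASE_HOME=/opt/dev/hbase-2.1.0
```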