Tim Armstrong fb5dc9eb48 IMPALA-4835: switch I/O buffers to buffer pool
This is a squash of the following patches, which were previously reverted.

I will fix the known issues with some follow-on patches.

======================================================================
IMPALA-4835: Part 1: simplify I/O mgr mem mgmt and cancellation

In preparation for switching the I/O mgr to the buffer pool, this
removes and cleans up a lot of code so that the switchover patch starts
from a cleaner slate.

* Remove the free buffer cache (which will be replaced by buffer pool's
  own caching).
* Make memory limit exceeded error checking synchronous (in anticipation
  of having to propagate buffer pool errors synchronously).
* Simplify error propagation - remove the (ineffectual) code that
  enqueued BufferDescriptors containing error statuses.
* Document locking scheme better in a few places, make it part of the
  function signature when it seemed reasonable.
* Move ReturnBuffer() to ScanRange, because it is intrinsically
  connected with the lifecycle of a scan range.
* Separate external ReturnBuffer() and internal CleanUpBuffer()
  interfaces - previously callers of ReturnBuffer() were fudging
  the num_buffers_in_reader accounting to make the external interface work.
* Eliminate redundant state in ScanRange: 'eosr_returned_' and
  'is_cancelled_'.
* Clarify the logic around calling Close() for the last
  BufferDescriptor (see the sketch after this list).
  -> There appeared to be an implicit assumption that buffers would be
     freed in the order they were returned from the scan range, so that
     the "eos" buffer was returned last. Instead, just count the number
     of outstanding buffers to detect the last one.
  -> Touching the is_cancelled_ field without holding a lock was hard
     to reason about - it violated the locking rules and it was unclear
     whether it was race-free.
* Remove DiskIoMgr::Read() to simplify the interface. It is trivial to
  inline at the callsites.
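
The "count outstanding buffers" idea is a small state machine. As a
rough, hypothetical sketch (names mirror the description above, not the
actual code):

```c++
#include <mutex>

class ScanRange {
 public:
  // A disk thread enqueued another buffer for the client.
  void BufferEnqueued() {
    std::lock_guard<std::mutex> l(lock_);
    ++num_buffers_outstanding_;
  }

  // No more buffers will be produced (eos reached or range cancelled).
  void SetDone() {
    std::lock_guard<std::mutex> l(lock_);
    done_ = true;
    if (num_buffers_outstanding_ == 0) CloseLocked();
  }

  // Internal cleanup path, called once per buffer in any return order.
  void CleanUpBuffer() {
    std::lock_guard<std::mutex> l(lock_);
    if (--num_buffers_outstanding_ == 0 && done_) CloseLocked();
  }

 private:
  void CloseLocked() { /* release the file handle etc.; 'lock_' is held */ }

  std::mutex lock_;
  int num_buffers_outstanding_ = 0;
  bool done_ = false;
};
```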

This will probably regress performance somewhat because of the cache
removal, so my plan is to merge it around the same time as switching
the I/O mgr to allocate from the buffer pool. I'm keeping the patches
separate to make reviewing easier.

Testing:
* Ran exhaustive tests
* Ran the disk-io-mgr-stress-test overnight

======================================================================
IMPALA-4835: Part 2: Allocate scan range buffers upfront

This change is a step towards reserving memory for buffers from the
buffer pool and constraining per-scanner memory requirements. This
change restructures the DiskIoMgr code so that each ScanRange operates
with a fixed set of buffers that are allocated upfront and recycled as
the I/O mgr works through the ScanRange.

One major change is that a ScanRange now blocks when no buffer is
available and is unblocked when a client returns a buffer via
ReturnBuffer(), as sketched below. I was able to remove the logic that
maintained the blocked_ranges_ list by instead adding a separate set
containing all active ranges.
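
A minimal sketch of that blocking scheme, with hypothetical names and a
placeholder scheduling hook (an illustration of the idea, not the actual
DiskIoMgr code):

```c++
#include <mutex>
#include <vector>

struct BufferDescriptor {};  // stand-in for the real descriptor type

class ScanRange {
 public:
  // Called by a disk thread before reading. Returns nullptr if the range
  // must block until the client returns a buffer.
  BufferDescriptor* GetFreeBuffer() {
    std::lock_guard<std::mutex> l(lock_);
    if (free_buffers_.empty()) {
      blocked_ = true;  // disk threads stop scheduling this range
      return nullptr;
    }
    BufferDescriptor* buffer = free_buffers_.back();
    free_buffers_.pop_back();
    return buffer;
  }

  // Called by the client when done with a buffer: recycle it and, if the
  // range was blocked, hand the range back to the disk queues.
  void ReturnBuffer(BufferDescriptor* buffer) {
    bool unblock = false;
    {
      std::lock_guard<std::mutex> l(lock_);
      free_buffers_.push_back(buffer);
      unblock = blocked_;
      blocked_ = false;
    }
    if (unblock) ScheduleOnDiskQueue();  // hypothetical re-scheduling hook
  }

 private:
  void ScheduleOnDiskQueue() {}

  std::mutex lock_;
  std::vector<BufferDescriptor*> free_buffers_;  // fixed set, allocated upfront
  bool blocked_ = false;
};
```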

There is also some miscellaneous cleanup included - e.g. reducing the
amount of code devoted to maintaining counters and metrics.

One tricky part of the existing code was that it called
IssueInitialRanges() with empty lists of files and depended on
DiskIoMgr::AddScanRanges() not checking for cancellation in that case.
See IMPALA-6564/IMPALA-6588. I changed the logic to not attempt to
issue ranges for empty lists of files.
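
The guard is simple; a self-contained sketch with stub types standing in
for the real Status/DiskIoMgr/ScanRange:

```c++
#include <vector>

struct Status { static Status OK() { return Status(); } };
struct ScanRange {};
struct DiskIoMgr {
  Status AddScanRanges(const std::vector<ScanRange*>& ranges) {
    // The real method also checks for reader cancellation, which the old
    // code implicitly skipped only because 'ranges' happened to be empty.
    return Status::OK();
  }
};

Status IssueInitialRanges(DiskIoMgr* io_mgr, const std::vector<ScanRange*>& ranges) {
  // New behaviour: skip the call entirely for empty lists of files.
  if (ranges.empty()) return Status::OK();
  return io_mgr->AddScanRanges(ranges);
}
```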

I plan to merge this along with the actual buffer pool switch, but
separated it out to allow review of the DiskIoMgr changes separate from
other aspects of the buffer pool switchover.

Testing:
* Ran core and exhaustive tests.

======================================================================
IMPALA-4835: Part 3: switch I/O buffers to buffer pool

This is the final patch to switch the Disk I/O manager to allocate all
buffers from the buffer pool and to reserve the buffers required for a
query upfront.

* The planner reserves enough memory to run a single scanner per
  scan node.
* The multi-threaded scan node must increase reservation before
  spinning up more threads.
* The scanner implementations must be careful to stay within their
  assigned reservation.

The row-oriented scanners were the most straightforward, since they
only have a single scan range active at a time. A single I/O buffer is
sufficient to scan the whole file, but more I/O buffers can improve I/O
throughput.

Parquet is more complex because it issues a scan range per column and
the on-disk sizes of the columns are not known during planning. To
deal with this, the reservation computed in the frontend is based on a
heuristic involving the file size and the number of columns. The
Parquet scanner can then divide its reservation among columns based on
the size of the column data on disk.
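
For illustration, dividing reservation proportionally to on-disk column
size might look like the following (hypothetical names and constants; a
real implementation would also cap the sum at the total reservation):

```c++
#include <algorithm>
#include <cstdint>
#include <vector>

std::vector<int64_t> DivideReservation(
    const std::vector<int64_t>& col_bytes_on_disk, int64_t total_reservation) {
  constexpr int64_t kMinBufferSize = 8 * 1024;         // illustrative floor
  constexpr int64_t kMaxBufferSize = 8 * 1024 * 1024;  // illustrative cap
  int64_t total_bytes = 0;
  for (int64_t b : col_bytes_on_disk) total_bytes += b;
  std::vector<int64_t> per_column;
  for (int64_t b : col_bytes_on_disk) {
    // Larger columns get proportionally more reservation, so their scan
    // ranges can use bigger (or more) I/O buffers.
    int64_t share = total_bytes == 0
        ? kMinBufferSize : total_reservation * b / total_bytes;
    per_column.push_back(std::min(kMaxBufferSize, std::max(kMinBufferSize, share)));
  }
  return per_column;
}
```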

I adjusted how the 'mem_limit' is divided between buffer pool and
non-buffer-pool memory for low mem_limits, to account for the increase
in buffer pool memory.
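
The shape of that split, as a purely illustrative sketch (the share and
floor constants here are hypothetical, not Impala's actual defaults):

```c++
#include <algorithm>
#include <cstdint>

int64_t BufferPoolLimit(int64_t mem_limit) {
  // Give the buffer pool a fixed share of mem_limit, but never less than
  // a floor that keeps scans runnable when mem_limit is small.
  constexpr double kBufferPoolShare = 0.8;             // hypothetical share
  constexpr int64_t kMinBufferPoolBytes = 32LL << 20;  // hypothetical floor
  int64_t limit = static_cast<int64_t>(mem_limit * kBufferPoolShare);
  return std::min(mem_limit, std::max(limit, kMinBufferPoolBytes));
}
```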

Testing:
* Added more planner tests to cover reservation calculations for scan
  nodes.
* Test scanners for all file formats with the reservation denial debug
  action, to test behaviour when the scanners hit reservation limits.
* Updated memory and buffer pool limits for tests.
* Added unit tests for dividing reservation between columns in parquet,
  since the algorithm is non-trivial.

Perf:
I ran TPC-H and targeted perf benchmarks locally, comparing against
master. Both showed small improvements of a few percent and no
regressions of note. Cluster perf tests showed no significant change.

Change-Id: I3ef471dc0746f0ab93b572c34024fc7343161f00
Reviewed-on: http://gerrit.cloudera.org:8080/9679
Reviewed-by: Tim Armstrong <tarmstrong@cloudera.com>
Tested-by: Tim Armstrong <tarmstrong@cloudera.com>

Welcome to Impala

Lightning-fast, distributed SQL queries for petabytes of data stored in Apache Hadoop clusters.

Impala is a modern, massively-distributed, massively-parallel, C++ query engine that lets you analyze, transform and combine data from a variety of data sources:

  • Best of breed performance and scalability.
  • Support for data stored in HDFS, Apache HBase and Amazon S3.
  • Wide analytic SQL support, including window functions and subqueries.
  • On-the-fly code generation using LLVM to generate CPU-efficient code tailored specifically to each individual query.
  • Support for the most commonly-used Hadoop file formats, including the Apache Parquet project.
  • Apache-licensed, 100% open source.

More about Impala

To learn more about Impala as a business user, or to try Impala live or in a VM, please visit the Impala homepage.

If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.

Supported Platforms

Impala only supports Linux at the moment.

Export Control Notice

This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.

Build Instructions

See bin/bootstrap_build.sh.

Detailed Build Notes

Impala can be built with pre-built components downloaded from S3, or with an in-place toolchain located in the thirdparty directory (not recommended). The components needed to build Impala are Apache Hadoop, Hive, HBase, and Sentry. If you need to manually override the locations or versions of these components, you can do so through the environment variables and scripts listed below.

Scripts and directories

| Location | Purpose |
|---|---|
| bin/impala-config.sh | This script must be sourced to set up all environment variables properly so that other scripts work |
| bin/impala-config-local.sh | A script can be created in this location to set local overrides for any environment variables |
| bin/impala-config-branch.sh | A version of the above that can be checked into a branch for convenience. |
| bin/bootstrap_build.sh | A helper script to bootstrap some of the build requirements. |
| bin/bootstrap_development.sh | A helper script to bootstrap a developer environment. Please read it before using. |
| be/build/ | Impala build output goes here. |
| be/generated-sources/ | Thrift and other generated source will be found here. |
| Environment variable | Default value | Description |
|---|---|---|
| IMPALA_HOME | | Top level Impala directory |
| IMPALA_TOOLCHAIN | "${IMPALA_HOME}/toolchain" | Native toolchain directory (for compilers, libraries, etc.) |
| SKIP_TOOLCHAIN_BOOTSTRAP | "false" | Skips downloading the toolchain and any Python dependencies if "true" |
| CDH_COMPONENTS_HOME | "${IMPALA_HOME}/toolchain/cdh_components" OR "${IMPALA_HOME}/thirdparty" (if detected) | If a thirdparty directory is present, components found here will override anything in IMPALA_TOOLCHAIN. |
| CDH_MAJOR_VERSION | "5" | Identifier used to uniquify paths for potentially incompatible component builds. |
| IMPALA_CONFIG_SOURCED | "1" | Set by ${IMPALA_HOME}/bin/impala-config.sh (internal use) |
| JAVA_HOME | "/usr/lib/jvm/${JAVA_VERSION}" | Used to locate Java |
| JAVA_VERSION | "java-7-oracle-amd64" | Can override to set a local Java version. |
| JAVA | "${JAVA_HOME}/bin/java" | Java binary location. |
| CLASSPATH | | See bin/set-classpath.sh for details. |
| PYTHONPATH | | Will be changed to include: "${IMPALA_HOME}/shell/gen-py" "${IMPALA_HOME}/testdata" "${THRIFT_HOME}/python/lib/python2.7/site-packages" "${HIVE_HOME}/lib/py" "${IMPALA_HOME}/shell/ext-py/prettytable-0.7.1/dist/prettytable-0.7.1" "${IMPALA_HOME}/shell/ext-py/sasl-0.1.1/dist/sasl-0.1.1-py2.7-linux-x "${IMPALA_HOME}/shell/ext-py/sqlparse-0.1.14/dist/sqlparse-0.1.14-py2 |
Source Directories for Impala

| Environment variable | Default value | Description |
|---|---|---|
| IMPALA_BE_DIR | "${IMPALA_HOME}/be" | Backend directory. Build output is also stored here. |
| IMPALA_FE_DIR | "${IMPALA_HOME}/fe" | Frontend directory |
| IMPALA_COMMON_DIR | "${IMPALA_HOME}/common" | Common code (thrift, function registry) |
Various Compilation Settings

| Environment variable | Default value | Description |
|---|---|---|
| IMPALA_BUILD_THREADS | "8", or the number of processors by default | Used for make -j and distcc -j settings. |
| IMPALA_MAKE_FLAGS | "" | Any extra settings to pass to make. Also used when copying udfs / udas into HDFS. |
| USE_SYSTEM_GCC | "0" | If set to any other value, directs cmake to not set GCC_ROOT, CMAKE_C_COMPILER, CMAKE_CXX_COMPILER, as well as setting TOOLCHAIN_LINK_FLAGS |
| IMPALA_CXX_COMPILER | "default" | Used by cmake (cmake_modules/toolchain and clang_toolchain.cmake) to select gcc / clang |
| USE_GOLD_LINKER | "true" | Directs backend cmake to use gold. |
| IS_OSX | "false" | (Experimental) currently only used to disable Kudu. |
Dependencies

| Environment variable | Default value | Description |
|---|---|---|
| HADOOP_HOME | "${CDH_COMPONENTS_HOME}/hadoop-${IMPALA_HADOOP_VERSION}/" | Used to locate Hadoop |
| HADOOP_INCLUDE_DIR | "${HADOOP_HOME}/include" | For 'hdfs.h' |
| HADOOP_LIB_DIR | "${HADOOP_HOME}/lib" | For 'libhdfs.a' or 'libhdfs.so' |
| HIVE_HOME | "${CDH_COMPONENTS_HOME}/hive-${IMPALA_HIVE_VERSION}/" | |
| HIVE_SRC_DIR | "${HIVE_HOME}/src" | Used to find Hive thrift files. |
| HBASE_HOME | "${CDH_COMPONENTS_HOME}/hbase-${IMPALA_HBASE_VERSION}/" | |
| SENTRY_HOME | "${CDH_COMPONENTS_HOME}/sentry-${IMPALA_SENTRY_VERSION}/" | Used to setup test data |
| THRIFT_HOME | "${IMPALA_TOOLCHAIN}/thrift-${IMPALA_THRIFT_VERSION}" | |