Surya Hebbar 36a63d0a33 IMPALA-12182: Add CPU utilization chart for RuntimeProfile's sampled metrics
This change adds a stacked area chart of CPU utilization to the
query timeline display. It also adds the ability to scale timetick
values and their precision, and to horizontally scale the fragment
timing diagram along with the utilization chart.

Rendering of the different components within the diagram has been
decoupled to isolate scaling of timeticks and to improve overall
efficiency; the rendering functions are now asynchronous for better
performance during resize events. Additionally, re-rendering of the
fragment diagram is only triggered on new fragment events.

The following key bindings scale the timeline via mouse wheel
events:
- shift + wheel events on #fragment_diagram
- shift + wheel events on #timeticks_footer
- alt + shift + wheel events on #timeticks_footer for precision control

Note:
Ctrl + mouse wheel events and ctrl + '+'/'-' events can be used to
resize the timeline through the browser.

Mouse wheel event handlers are attached to their respective
components for better efficiency and maintainability.
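
As a rough illustration (the element id comes from the bindings above;
the handler logic, variable names and scaling limits are hypothetical,
not the actual webUI code), a wheel handler scoped to the fragment
diagram might look like:

  // Hypothetical sketch: scale the fragment diagram width on shift + wheel.
  const fragmentDiagram = document.getElementById("fragment_diagram");
  let diagramScale = 1;

  fragmentDiagram.addEventListener("wheel", (event) => {
    if (!event.shiftKey || event.altKey) return;   // only shift + wheel
    event.preventDefault();
    // Zoom in on wheel up, out on wheel down, within fixed bounds.
    diagramScale += event.deltaY < 0 ? 0.1 : -0.1;
    diagramScale = Math.min(5, Math.max(1, diagramScale));
    fragmentDiagram.style.width = `${diagramScale * 100}%`;
  }, { passive: false });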

Constraints have been added to the above attributes to limit
scaling/zooming so that all DOM elements display and render
appropriately.

The RESOURCE_TRACE_RATIO query option enables tracing of utilization
values within the RuntimeProfile. The profile then contains samples
of CPU utilization metrics for user, sys and iowait time. These time
series counters are available within the profile under the following
names.

Per Node Profiles -
  - HostCpuIoWaitPercentage
  - HostCpuSysPercentage
  - HostCpuUserPercentage

The samples are updated based on 'periodic_counter_update_period_ms',
which provides the 'period' within the profile's 'Per Node Profiles'.

These samples are retrieved from the ChunkedTimeSeriesCounter in the
RuntimeProfile. Currently, JSON profiles and webUI summary pages
contain the downsampled values.
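
As a hedged sketch (the JSON field names below are assumptions based
on the counter names above, not the exact profile schema), extracting
the per-node CPU samples could look roughly like:

  // Sketch: pull CPU time series samples out of one per-node profile.
  // Field names ("time_series_counters", "counter_name", "period",
  // "data") are assumptions about the JSON layout.
  const CPU_COUNTERS = [
    "HostCpuUserPercentage",
    "HostCpuSysPercentage",
    "HostCpuIoWaitPercentage",
  ];

  function getCpuSamples(perNodeProfile) {
    const result = {};
    for (const counter of perNodeProfile.time_series_counters || []) {
      if (CPU_COUNTERS.includes(counter.counter_name)) {
        // The JSON profile already contains the downsampled values; the
        // period (in ms) derives from periodic_counter_update_period_ms.
        result[counter.counter_name] = {
          period: counter.period,
          values: counter.data,
        };
      }
    }
    return result;
  }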

Utilization samples are aligned with the fragment diagram by
combining the number of samples with the sampling period.

Aggregate CPU usage for each node is calculated by accumulating the
basis point values for user, sys and iowait. The counters associated
with each node are grouped and displayed as a stacked area chart.
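
For illustration (assuming the samples are basis points, i.e.
hundredths of a percent; the function and field names are
hypothetical), the per-node aggregation and time alignment could be
sketched as:

  // Hypothetical sketch: convert basis-point samples to percentages and
  // align each sample with a timestamp derived from the sampling period.
  function buildUtilizationSeries(cpuSamples) {
    const user = cpuSamples.HostCpuUserPercentage;
    const sys = cpuSamples.HostCpuSysPercentage;
    const iowait = cpuSamples.HostCpuIoWaitPercentage;

    const timestamps = [];
    for (let i = 0; i < user.values.length; ++i) {
      // The i-th sample corresponds to i * period milliseconds, which lets
      // the chart's x-axis line up with the fragment timing diagram.
      timestamps.push(i * user.period);
    }

    // Basis points (1/100th of a percent) -> percent.
    const toPercent = (v) => v / 100;
    return {
      timestamps,
      user: user.values.map(toPercent),
      sys: sys.values.map(toPercent),
      iowait: iowait.values.map(toPercent),
    };
  }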

The c3.js charting library, which is based on d3 v5, is used to plot
the utilization.
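
A minimal c3.js sketch of such a stacked area chart (the bind target
and the series object built in the sketch above are illustrative, not
the actual webUI code):

  // Minimal c3.js sketch of a stacked area chart for one node's CPU usage.
  const chart = c3.generate({
    bindto: "#utilization_chart",   // assumed chart container id
    data: {
      x: "time",
      columns: [
        ["time", ...series.timestamps],
        ["user", ...series.user],
        ["sys", ...series.sys],
        ["iowait", ...series.iowait],
      ],
      type: "area",
      // Stack the three CPU components on top of each other.
      groups: [["user", "sys", "iowait"]],
    },
    axis: {
      x: { label: "Time (ms)" },
      y: { label: "CPU utilization (%)", max: 100 },
    },
  });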

The license associated with d3 v5 during the relevant time frame has
been included along with the charting library's license.

Support for the experimental profile V2 is not currently included.

Scaling to the larger number of values in profile V2 would be
possible with appropriate down-sampling in the back-end.

Testing: Manual testing with TPC-DS and TPC-H queries

Change-Id: Idea2a6db217dbfaa7a0695aeabb6d9c1ecf62158
Reviewed-on: http://gerrit.cloudera.org:8080/20008
Reviewed-by: Riza Suminto <riza.suminto@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
2023-07-11 00:06:24 +00:00

Welcome to Impala

Lightning-fast, distributed SQL queries for petabytes of data stored in Apache Hadoop clusters.

Impala is a modern, massively-distributed, massively-parallel, C++ query engine that lets you analyze, transform and combine data from a variety of data sources:

  • Best of breed performance and scalability.
  • Support for data stored in HDFS, Apache HBase, Apache Kudu, Amazon S3, Azure Data Lake Storage, Apache Hadoop Ozone and more!
  • Wide analytic SQL support, including window functions and subqueries.
  • On-the-fly code generation using LLVM to generate lightning-fast code tailored specifically to each individual query.
  • Support for the most commonly-used Hadoop file formats, including Apache Parquet and Apache ORC.
  • Support for industry-standard security protocols, including Kerberos, LDAP and TLS.
  • Apache-licensed, 100% open source.

More about Impala

The fastest way to try out Impala is a quickstart Docker container. You can try out running queries and processing data sets in Impala on a single machine without installing dependencies. It can automatically load test data sets into Apache Kudu and Apache Parquet formats and you can start playing around with Apache Impala SQL within minutes.

To learn more about Impala as a user or administrator, or to try Impala, please visit the Impala homepage. Detailed documentation for administrators and users is available at Apache Impala documentation.

If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.

Supported Platforms

Impala only supports Linux at the moment. Impala supports x86_64 and has experimental support for arm64 (as of Impala 4.0). Impala Requirements contains more detailed information on the minimum CPU requirements.

Supported OS Distributions

Impala runs on Linux systems only. The supported distros are

  • Ubuntu 16.04/18.04
  • CentOS/RHEL 7/8

Other systems, e.g. SLES12, may also be supported but are not tested by the community.

Export Control Notice

This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.

Build Instructions

See Impala's developer documentation to get started.

Detailed build notes has some detailed information on the project layout and build.
