resolvePathWithMasking() is a wrapper around resolvePath() that further
resolves nested columns inside the table masking view. When it was
added, complex types were not yet supported in the select list, so the
table masking view can't expose complex-type columns directly in its
select list. Any path into a nested type is therefore further resolved
inside the table masking view by resolvePathWithMasking().
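For illustration, here is a simplified conceptual sketch of that flow
(not the actual Impala source; the helpers
reachesIntoNestedColumnOfMaskingView() and resolveInsideMaskingView()
are hypothetical names used only for this sketch):

  // Conceptual sketch only. resolvePathWithMasking() behaves like
  // resolvePath() unless the resolved path reaches into a nested column
  // that the table masking view cannot expose.
  Path resolvePathWithMasking(List<String> rawPath, PathType pathType)
      throws AnalysisException {
    Path resolvedPath = resolvePath(rawPath, pathType);
    // Hypothetical check: paths that don't go into a nested column of a
    // masking view are returned as-is, same as plain resolvePath().
    if (!reachesIntoNestedColumnOfMaskingView(resolvedPath)) return resolvedPath;
    // Hypothetical helper: re-resolve the nested part against the masked
    // base table (a BaseTableRef) inside the table masking view.
    return resolveInsideMaskingView(resolvedPath);
  }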
Take the following query as an example:
select id, nested_struct.* from complextypestbl;
If Ranger column-masking/row-filter policies are applied to the table,
the query is rewritten as
select id, nested_struct.* from (
  select mask(id) from complextypestbl
  where row-filtering-condition
) t;
The table masking view "t" can't expose the nested column
"nested_struct", so we further resolve "nested_struct" inside the
inline view to use the masked table "complextypestbl". The underlying
TableRef is expected to be a BaseTableRef.
Paths that don't reference nested columns should be resolved and
returned directly (just like the original resolvePath() does). E.g.
select v.* from masked_view v
is rewritten to
select v.* from (
  select mask(c1), mask(c2), ..., mask(cn)
  from masked_view
  where row-filtering-condition
) v;
The STAR path "v.*" should be resolved directly. However, it is
unexpectedly treated as a nested column. The code then tries to resolve
it inside the table "masked_view", finds that "masked_view" is not a
table, and throws an IllegalStateException.
These are the current conditions for identifying a nested STAR path:
- The destType is STRUCT
- The resolved path is rooted at a valid tuple descriptor
These conditions don't actually distinguish nested struct columns,
because STAR paths on a table/view also match them. When the STAR path
is an expansion on a catalog table/view, the root tuple descriptor is
exactly the output tuple of the table/view, and the destType is the
type of that tuple descriptor, which is always a StructType.
Note that STAR paths on other nested types, i.e. array/map, are
invalid, so the first condition matches all valid cases. The second
condition also matches all valid cases, since both the table/view and
struct STAR expansions have their paths rooted at a valid tuple
descriptor.
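For illustration, the old identification logic was roughly the
following (a simplified sketch, assuming the resolved Path exposes
destType() and getRootDesc() accessors; not the exact source):

  // Old check (sketch): treat the path as a nested struct STAR path when
  // its destination type is a STRUCT and it is rooted at a valid tuple
  // descriptor. Both also hold for "v.*" on a plain table/view, so that
  // case is misclassified as well.
  boolean isNestedStarPath = resolvedPath.destType().isStructType()
      && resolvedPath.getRootDesc() != null;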
This patch fixes the check for nested struct STAR paths by checking
the matched types instead. Note that if "v.*" is a table/view expansion,
the matched type list is empty. If "v.*" is a struct column expansion,
the matched type list contains the STRUCT column type.
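As a hedged sketch of the fixed condition, assuming a getMatchedTypes()
accessor on the resolved Path:

  // Fixed check (sketch): a table/view STAR expansion matches no nested
  // types, so its matched-type list is empty; a struct column expansion
  // has the STRUCT column type in the list. Only the latter is a nested
  // STAR path that needs further resolution inside the masking view.
  boolean isNestedStarPath = resolvedPath.destType().isStructType()
      && !resolvedPath.getMatchedTypes().isEmpty();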
Tests:
- Add missing coverage for STAR paths (v.*) on masked views.
Change-Id: I8f1e78e325baafbe23101909d47e82bf140a2d77
Reviewed-on: http://gerrit.cloudera.org:8080/19429
Reviewed-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Tested-by: Impala Public Jenkins <impala-public-jenkins@cloudera.com>
Welcome to Impala
Lightning-fast, distributed SQL queries for petabytes of data stored in Apache Hadoop clusters.
Impala is a modern, massively-distributed, massively-parallel, C++ query engine that lets you analyze, transform and combine data from a variety of data sources:
- Best of breed performance and scalability.
- Support for data stored in HDFS, Apache HBase, Apache Kudu, Amazon S3, Azure Data Lake Storage, Apache Hadoop Ozone and more!
- Wide analytic SQL support, including window functions and subqueries.
- On-the-fly code generation using LLVM to generate lightning-fast code tailored specifically to each individual query.
- Support for the most commonly-used Hadoop file formats, including Apache Parquet and Apache ORC.
- Support for industry-standard security protocols, including Kerberos, LDAP and TLS.
- Apache-licensed, 100% open source.
More about Impala
The fastest way to try out Impala is a quickstart Docker container. You can try out running queries and processing data sets in Impala on a single machine without installing dependencies. It can automatically load test data sets into Apache Kudu and Apache Parquet formats and you can start playing around with Apache Impala SQL within minutes.
To learn more about Impala as a user or administrator, or to try Impala, please visit the Impala homepage. Detailed documentation for administrators and users is available at Apache Impala documentation.
If you are interested in contributing to Impala as a developer, or learning more about Impala's internals and architecture, visit the Impala wiki.
Supported Platforms
Impala only supports Linux at the moment. Impala supports x86_64 and has experimental support for arm64 (as of Impala 4.0). Impala Requirements contains more detailed information on the minimum CPU requirements.
Supported OS Distributions
Impala runs on Linux systems only. The supported distros are:
- Ubuntu 16.04/18.04
- CentOS/RHEL 7/8
Other systems, e.g. SLES12, may also be supported but are not tested by the community.
Export Control Notice
This distribution uses cryptographic software and may be subject to export controls. Please refer to EXPORT_CONTROL.md for more information.
Build Instructions
See Impala's developer documentation to get started.
The detailed build notes have further information on the project layout and build.