<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
<concept rev="1.3.0" id="admission_control">

  <title>Configuring Admission Control</title>

  <prolog>
    <metadata>
      <data name="Category" value="Impala"/>
      <data name="Category" value="Querying"/>
      <data name="Category" value="Admission Control"/>
      <data name="Category" value="Resource Management"/>
    </metadata>
  </prolog>

  <conbody>

    <p>
      Impala includes features that balance and maximize resources in your
      <keyword keyref="hadoop_distro"/> cluster. This topic describes how you can improve the
      efficiency of your <keyword keyref="hadoop_distro"/> cluster using those features.
    </p>

    <p>
      The configuration options for admission control range from the simple (a single resource
      pool with a single set of options) to the complex (multiple resource pools with different
      options, each pool handling queries for a different set of users and groups).
    </p>

  </conbody>

  <concept id="concept_bz4_vxz_jgb">

    <title>Configuring Admission Control in Command Line Interface</title>

    <conbody>

      <p>
        To configure admission control, use a combination of startup options for the Impala
        daemon and edit or create the configuration files
        <filepath>fair-scheduler.xml</filepath> and <filepath>llama-site.xml</filepath>.
      </p>

      <p>
        For a straightforward configuration using a single resource pool named
        <codeph>default</codeph>, you can specify configuration options on the command line and
        skip the <filepath>fair-scheduler.xml</filepath> and <filepath>llama-site.xml</filepath>
        configuration files.
      </p>

      <p>
        For an advanced configuration with multiple resource pools using different settings:
        <ol>
          <li>
            Set up the <filepath>fair-scheduler.xml</filepath> and
            <filepath>llama-site.xml</filepath> configuration files manually.
          </li>
          <li>
            Provide the paths to each one using the <cmdname>impalad</cmdname> command-line
            options, <codeph>‑‑fair_scheduler_allocation_path</codeph> and
            <codeph>‑‑llama_site_path</codeph> respectively, as shown in the example
            following this list.
          </li>
        </ol>
      </p>
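
      <p>
        For instance, the startup command might look like the following (the file locations
        are hypothetical; substitute the paths used in your deployment):
      </p>

      <codeblock>impalad --fair_scheduler_allocation_path=/etc/impala/conf/fair-scheduler.xml \
        --llama_site_path=/etc/impala/conf/llama-site.xml ...</codeblock>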

      <p>
        The Impala admission control feature uses the Fair Scheduler configuration settings to
        determine how to map users and groups to different resource pools. For example, you
        might set up different resource pools with separate memory limits, and maximum number of
        concurrent and queued queries, for different categories of users within your
        organization. For details about all the Fair Scheduler configuration settings, see the
        <xref href="https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html#Configuration"
          scope="external" format="html">Apache wiki</xref>.
      </p>

      <p>
        The Impala admission control feature uses a small subset of possible settings from the
        <filepath>llama-site.xml</filepath> configuration file:
      </p>

      <codeblock>llama.am.throttling.maximum.placed.reservations.<varname>queue_name</varname>
llama.am.throttling.maximum.queued.reservations.<varname>queue_name</varname>
<ph rev="2.5.0 IMPALA-2538">impala.admission-control.pool-default-query-options.<varname>queue_name</varname>
impala.admission-control.pool-queue-timeout-ms.<varname>queue_name</varname></ph></codeblock>

      <p>
        The <codeph>llama.am.throttling.maximum.placed.reservations.<varname>queue_name</varname></codeph>
        setting specifies the maximum number of queries that are allowed to run concurrently in this pool.
      </p>

      <p>
        The <codeph>llama.am.throttling.maximum.queued.reservations.<varname>queue_name</varname></codeph>
        setting specifies the maximum number of queries that are allowed to be queued in this pool.
      </p>
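
      <p>
        For example, the following <filepath>llama-site.xml</filepath> snippet (with
        illustrative values) caps the pool <codeph>root.default</codeph> at 10 concurrent
        queries and 50 queued queries:
      </p>

      <codeblock><property>
  <name>llama.am.throttling.maximum.placed.reservations.root.default</name>
  <value>10</value>
</property>
<property>
  <name>llama.am.throttling.maximum.queued.reservations.root.default</name>
  <value>50</value>
</property></codeblock>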

      <p rev="2.5.0 IMPALA-2538">
        The <codeph>impala.admission-control.pool-queue-timeout-ms</codeph> setting specifies
        the timeout value for this pool in milliseconds.
      </p>

      <p rev="2.5.0 IMPALA-2538">
        The <codeph>impala.admission-control.pool-default-query-options</codeph> setting
        designates the default query options for all queries that run in this pool. Its argument
        value is a comma-delimited string of 'key=value' pairs, <codeph>'key1=val1,key2=val2,
        ...'</codeph>. For example, this is where you might set a default memory limit for all
        queries in the pool, using an argument such as <codeph>MEM_LIMIT=5G</codeph>.
      </p>
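
      <p rev="2.5.0 IMPALA-2538">
        A snippet combining both settings for the pool <codeph>root.default</codeph> might
        look like this (the option values are examples only):
      </p>

      <codeblock><property>
  <name>impala.admission-control.pool-queue-timeout-ms.root.default</name>
  <value>30000</value>
</property>
<property>
  <name>impala.admission-control.pool-default-query-options.root.default</name>
  <value>mem_limit=5g,query_timeout_s=60</value>
</property></codeblock>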

      <p rev="2.5.0 IMPALA-2538">
        The <codeph>impala.admission-control.*</codeph> configuration settings are available in
        <keyword keyref="impala25_full"/> and higher.
      </p>

    </conbody>

    <concept id="concept_cz4_vxz_jgb">

      <title>Example of Admission Control Configuration</title>

      <conbody>

        <p>
          Here are sample <filepath>fair-scheduler.xml</filepath> and
          <filepath>llama-site.xml</filepath> files that define resource pools
          <codeph>root.default</codeph>, <codeph>root.development</codeph>,
          <codeph>root.production</codeph>, and <codeph>root.coords</codeph>.
          These files define resource pools for Impala admission control and are separate
          from the similar <codeph>fair-scheduler.xml</codeph> that defines resource pools
          for YARN.
        </p>

        <p>
          <b>fair-scheduler.xml:</b>
        </p>

        <p>
          Although Impala does not use the <codeph>vcores</codeph> value, you must still specify
          it to satisfy YARN requirements for the file contents.
        </p>

        <p>
          Each <codeph><aclSubmitApps></codeph> tag (other than the one for
          <codeph>root</codeph>) contains a comma-separated list of users, then a space, then a
          comma-separated list of groups; these are the users and groups allowed to submit
          Impala statements to the corresponding resource pool.
        </p>

        <p>
          If you leave the <codeph><aclSubmitApps></codeph> element empty for a pool,
          nobody can submit directly to that pool; child pools can specify their own
          <codeph><aclSubmitApps></codeph> values to authorize users and groups to submit
          to those pools.
        </p>

        <p>
          The <codeph><onlyCoordinators></codeph> element is a boolean that defaults
          to <codeph>false</codeph>. If this value is set to <codeph>true</codeph>, the
          named request pool will contain only the coordinators and none of the executors.
          The main purpose of this setting is to enable running queries against the
          <codeph>sys.impala_query_live</codeph> table from workload management. Since the
          data for this table is stored in the memory of the coordinators, the executors
          do not need to be involved if the query only selects from this table.
        </p>

        <p>
          To use an <codeph><onlyCoordinators></codeph> request pool, set the
          <codeph>REQUEST_POOL</codeph> query option to the name of the
          <codeph><onlyCoordinators></codeph> request pool. <b>Caution:</b> even though
          these request pools do not contain executors, they can still run any query.
          Thus, while the <codeph>REQUEST_POOL</codeph> query option is set to an
          only-coordinators request pool, queries have the potential to run the coordinators
          out of resources.
        </p>
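
        <p>
          For example, assuming the <codeph>root.coords</codeph> pool defined in the sample
          file below, a session could target it before querying the in-memory table:
        </p>

        <codeblock>SET REQUEST_POOL=root.coords;
SELECT * FROM sys.impala_query_live;</codeblock>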

        <p>
          <b>Caution:</b> care must be taken when naming the <codeph><onlyCoordinators></codeph>
          request pool. If the name has the same prefix as a named executor group set, then
          queries may be automatically routed to the request pool. For example, if the
          coordinator is configured with
          <codeph>--expected_executor_group_sets=prefix1:10</codeph>, then an only-coordinators
          request pool named <codeph>prefix1-onlycoords</codeph> will potentially have
          queries routed to it.
        </p>

        <codeblock><allocations>
    <queue name="root">
        <aclSubmitApps> </aclSubmitApps>
        <queue name="default">
            <maxResources>50000 mb, 0 vcores</maxResources>
            <aclSubmitApps>*</aclSubmitApps>
        </queue>
        <queue name="development">
            <maxResources>200000 mb, 0 vcores</maxResources>
            <aclSubmitApps>user1,user2 dev,ops,admin</aclSubmitApps>
        </queue>
        <queue name="production">
            <maxResources>1000000 mb, 0 vcores</maxResources>
            <aclSubmitApps> ops,admin</aclSubmitApps>
        </queue>
        <queue name="coords">
            <maxResources>1000000 mb, 0 vcores</maxResources>
            <aclSubmitApps>ops,admin</aclSubmitApps>
            <onlyCoordinators>true</onlyCoordinators>
        </queue>
    </queue>
    <queuePlacementPolicy>
        <rule name="specified" create="false"/>
        <rule name="default" />
    </queuePlacementPolicy>
</allocations>
</codeblock>

        <p>
          <b>llama-site.xml:</b>
        </p>

        <codeblock rev="2.5.0 IMPALA-2538"><?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>llama.am.throttling.maximum.placed.reservations.root.default</name>
        <value>10</value>
    </property>
    <property>
        <name>llama.am.throttling.maximum.queued.reservations.root.default</name>
        <value>50</value>
    </property>
    <property>
        <name>impala.admission-control.pool-default-query-options.root.default</name>
        <value>mem_limit=128m,query_timeout_s=20,max_io_buffers=10</value>
    </property>
    <property>
        <name>impala.admission-control.<b>pool-queue-timeout-ms</b>.root.default</name>
        <value>30000</value>
    </property>
    <property>
        <name>impala.admission-control.<b>max-query-mem-limit</b>.root.default.regularPool</name>
        <value>1610612736</value><!--1.5GB-->
    </property>
    <property>
        <name>impala.admission-control.<b>min-query-mem-limit</b>.root.default.regularPool</name>
        <value>52428800</value><!--50MB-->
    </property>
    <property>
        <name>impala.admission-control.<b>clamp-mem-limit-query-option</b>.root.default.regularPool</name>
        <value>true</value>
    </property>
    <property>
        <name>impala.admission-control.<b>max-query-cpu-core-per-node-limit</b>.root.default.regularPool</name>
        <value>8</value>
    </property>
    <property>
        <name>impala.admission-control.<b>max-query-cpu-core-coordinator-limit</b>.root.default.regularPool</name>
        <value>8</value>
    </property>
</configuration>
</codeblock>

      </conbody>

    </concept>

    <concept rev="4.5.0" id="user_quotas">

      <title>Configuring User Quotas in the Admission Control Configuration</title>

      <conbody>

        <p>
          Since <keyword keyref="impala45"/>, User Quotas can be used to define rules
          that limit the total number of queries that a user can run concurrently.
          The rules are based on either usernames or group membership.
          The username is the short username, and groups are
          <xref href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/GroupsMapping.html"
            scope="external" format="html">Hadoop Groups</xref>.
        </p>

        <p>
          Queries are counted from when they are queued to when they are released.
          In a cluster without an Admission Daemon, the count of queries is synchronized
          across all coordinators through the Statestore.
          In this configuration, over-admission is possible.
        </p>

        <p>
          If a query fails to pass the User Quota rules, it is rejected when it is
          submitted.
        </p>

        <p>
          There are three types of rules:
          <ul>
            <li>
              User Rules (based on a username)
            </li>
            <li>
              Wildcard Rules (which match any username)
            </li>
            <li>
              Group Rules (based on a user's membership in a group)
            </li>
          </ul>
          The rules can be placed at the Pool level or the Root level:
          <ul>
            <li>
              A rule at the Pool level is evaluated against the number of queries a user has running in the pool.
            </li>
            <li>
              A rule at the Root level is evaluated against the total number of queries a user has running across all pools.
            </li>
          </ul>
        </p>

        <p>
          The XML tags used for User Quotas are:
          <ul>
            <li>
              <b>userQueryLimit</b> - used to define a User Rule
            </li>
            <li>
              <b>groupQueryLimit</b> - used to define a Group Rule
            </li>
            <li>
              <b>totalCount</b> - used to define the number of queries that can run
              concurrently.
            </li>
            <li>
              <b>user</b> - used to specify a username to define a User Rule,
              or, by using the wildcard '*', to define a Wildcard Rule.
            </li>
            <li>
              <b>group</b> - in a Group Rule, used to specify a group name that the
              rule applies to.
            </li>
          </ul>
          Examples are provided below.
        </p>

        <p>
          More specific rules override more general rules.
          A user rule overrides a group rule or wildcard rule.
          A group rule overrides a wildcard rule.
          Pool level rules are evaluated first.
          If Pool level rules pass, then Root level rules are evaluated.
        </p>

        <p>
          Note that the examples of User Quota rules below are incomplete XML snippets
          that would form part of an Admission Control configuration.
          In particular, they omit the essential <b>aclSubmitApps</b> tag.
        </p>
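
        <p>
          For illustration, a complete pool definition combines an
          <codeph><aclSubmitApps></codeph> entry with any quota rules, along the lines
          of the following sketch (the pool name and values are examples only):
        </p>

        <codeblock><queue name="queueE">
  <maxResources>50000 mb, 0 vcores</maxResources>
  <aclSubmitApps>*</aclSubmitApps>
  <userQueryLimit>
    <user>alice</user>
    <totalCount>3</totalCount>
  </userQueryLimit>
</queue>
</codeblock>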

        <section>
          <title>A User Rule</title>
          <codeblock><userQueryLimit>
  <user>alice</user>
  <user>bob</user>
  <totalCount>3</totalCount>
</userQueryLimit>
</codeblock>
          <p>
            This rule limits the users 'alice' and 'bob' to each running 3 queries.
            This rule could be placed at the Pool level or the Root level.
          </p>
        </section>

        <section>
          <title>A Pool Level Wildcard Rule</title>
          <codeblock><queue name="root">
  <queue name="queueE">
    <userQueryLimit>
      <user>*</user>
      <totalCount>2</totalCount>
    </userQueryLimit>
  </queue>
  ...
</codeblock>
          <p>
            This is a wildcard rule on pool root.queueE that limits
            all users to running 2 queries in that pool.
          </p>
        </section>

        <section>
          <title>A Root Level User Rule</title>
          <codeblock><queue name="root">
  <userQueryLimit>
    <user>userD</user>
    <totalCount>2</totalCount>
  </userQueryLimit>
  ...
</codeblock>
          <p>
            This is a Root level user rule that limits userD to 2 queries across all
            pools.
          </p>
        </section>

        <section>
          <title>A Group Rule</title>
          <codeblock><queue name="root">
  <queue name="queueE">
    <groupQueryLimit>
      <group>group1</group>
      <group>group2</group>
      <totalCount>2</totalCount>
    </groupQueryLimit>
  ...
</codeblock>
          <p>
            This rule limits any user in groups group1 or group2 to running 2 queries
            in pool root.queueE. Note this rule could also be placed at the Root level.
          </p>
        </section>

        <section>
          <title>More Specific Rules Override Less Specific Rules</title>
          <codeblock><queue name="group-set-small">
  <!-- A user not matched by any other rule can run 1 query in the small pool -->
  <userQueryLimit>
    <user>*</user>
    <totalCount>1</totalCount>
  </userQueryLimit>
  <!-- The user 'alice' can run 4 queries in the small pool -->
  <userQueryLimit>
    <user>alice</user>
    <totalCount>4</totalCount>
  </userQueryLimit>
  ...
</codeblock>
          <p>
            With this rule, the user 'alice' can run 4 queries in "root.group-set-small".
            The more specific User rule overrides the wildcard rule.
            A user that is not 'alice' would be able to run 1 query, per the wildcard
            rule.
          </p>
        </section>

        <section>
          <title>Another Example of More Specific Rules Overriding Less Specific Rules</title>
          <codeblock><queue name="group-set-small">
  <!-- A user not matched by any other rule can run 1 query in the small pool -->
  <userQueryLimit>
    <user>*</user>
    <totalCount>1</totalCount>
  </userQueryLimit>
  <!-- Members of the group 'it' can run 2 queries in the small pool -->
  <groupQueryLimit>
    <group>it</group>
    <totalCount>2</totalCount>
  </groupQueryLimit>
  ...
</codeblock>
          <p>
            Assuming the user 'bob' is in the 'it' group, they can run 2 queries
            in "root.group-set-small".
            The more specific Group rule overrides the wildcard rule.
          </p>
        </section>

        <section>
          <title>User Rules Are More Specific Than Group Rules</title>
          <codeblock><queue name="group-set-small">
  <!-- Members of the group 'it' can run 2 queries in the small pool -->
  <groupQueryLimit>
    <group>it</group>
    <totalCount>2</totalCount>
  </groupQueryLimit>
  <!-- The user 'fiona' can run 3 queries in the small pool -->
  <userQueryLimit>
    <user>fiona</user>
    <totalCount>3</totalCount>
  </userQueryLimit>
  ...
</codeblock>
          <p>
            The user 'fiona' can run 3 queries in "root.group-set-small", despite also
            matching the 'it' group.
            The more specific user rule overrides the group rule.
          </p>
        </section>

        <section>
          <title>Both Pool and Root Rules Must Pass</title>
          <codeblock><queue name="root">
  <groupQueryLimit>
    <group>support</group>
    <totalCount>6</totalCount>
  </groupQueryLimit>
  <queue name="group-set-small">
    <groupQueryLimit>
      <group>dev</group>
      <group>support</group>
      <totalCount>5</totalCount>
    </groupQueryLimit>
  </queue>
  ...
</codeblock>
          <p>
            Members of group 'support' are limited to running 6 queries across all the pools.
            Members of groups 'dev' and 'support' can run 5 queries
            in "root.group-set-small".
            For a query to run, both rules must pass.
            The Pool level rule is evaluated first, so that an error mentioning that pool
            will be seen in the case where both rules would fail.
          </p>
          <p>
            A user may be a member of more than one group, which means that more than one
            Group Rule may apply to a user at the Pool or Root level.
            In this case, at each level, the least restrictive rule is applied.
            For example, if the user is in 2 groups: 'it' and 'support', where
            group 'it' restricts her to running 2 queries, and group 'support' restricts
            her to running 5 queries, then the 'support' rule is the least restrictive,
            so she can run 5 queries in the pool.
          </p>
        </section>

      </conbody>

    </concept>

  </concept>

  <concept id="concept_zy4_vxz_jgb">

    <title>Configuring Cluster-wide Admission Control</title>

    <prolog>
      <metadata>
        <data name="Category" value="Configuring"/>
      </metadata>
    </prolog>

    <conbody>

      <note type="important">
        These settings only apply if you enable admission control but leave dynamic resource
        pools disabled. In <keyword keyref="impala25_full"/> and higher, we recommend
        that you set up dynamic resource pools and customize the settings for each pool as
        described in <xref href="#concept_bz4_vxz_jgb" format="dita"/>.
      </note>

      <p>
        The following Impala configuration options let you adjust the settings of the admission
        control feature. When supplying the options on the <cmdname>impalad</cmdname> command
        line, prepend the option name with <codeph>--</codeph>.
      </p>
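
      <p>
        For example, a cluster-wide configuration might start <cmdname>impalad</cmdname>
        with flags such as the following (the values are illustrative, not recommendations):
      </p>

      <codeblock>impalad --queue_wait_timeout_ms=60000 \
        --default_pool_max_requests=50 \
        --default_pool_max_queued=100 \
        --default_pool_mem_limit=100g ...</codeblock>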

      <dl>
        <dlentry>
          <dt>
            <codeph>queue_wait_timeout_ms</codeph>
          </dt>
          <dd>
            <b>Purpose:</b> Maximum amount of time (in milliseconds) that a request waits to be
            admitted before timing out.
            <p>
              <b>Type:</b> <codeph>int64</codeph>
            </p>
            <p>
              <b>Default:</b> <codeph>60000</codeph>
            </p>
          </dd>
        </dlentry>

        <dlentry>
          <dt>
            <codeph>default_pool_max_requests</codeph>
          </dt>
          <dd>
            <b>Purpose:</b> Maximum number of concurrent outstanding requests allowed to run
            before incoming requests are queued. Because this limit applies cluster-wide, but
            each Impala node makes independent decisions to run queries immediately or queue
            them, it is a soft limit; the overall number of concurrent queries might be slightly
            higher during times of heavy load. A negative value indicates no limit. Ignored if
            <codeph>fair_scheduler_config_path</codeph> and <codeph>llama_site_path</codeph> are
            set.
            <p>
              <b>Type:</b> <codeph>int64</codeph>
            </p>
            <p>
              <b>Default:</b> <ph rev="2.5.0">-1, meaning unlimited (prior to
              <keyword keyref="impala25_full"/> the default was 200)</ph>
            </p>
          </dd>
        </dlentry>

        <dlentry>
          <dt>
            <codeph>default_pool_max_queued</codeph>
          </dt>
          <dd>
            <b>Purpose:</b> Maximum number of requests allowed to be queued before rejecting
            requests. Because this limit applies cluster-wide, but each Impala node makes
            independent decisions to run queries immediately or queue them, it is a soft limit;
            the overall number of queued queries might be slightly higher during times of heavy
            load. A negative value or 0 indicates requests are always rejected once the maximum
            number of concurrent requests are executing. Ignored if
            <codeph>fair_scheduler_config_path</codeph> and <codeph>llama_site_path</codeph> are
            set.
            <p>
              <b>Type:</b> <codeph>int64</codeph>
            </p>
            <p>
              <b>Default:</b> <ph rev="2.5.0">unlimited</ph>
            </p>
          </dd>
        </dlentry>

        <dlentry>
          <dt>
            <codeph>default_pool_mem_limit</codeph>
          </dt>
          <dd>
            <b>Purpose:</b> Maximum amount of memory (across the entire cluster) that all
            outstanding requests in this pool can use before new requests to this pool are
            queued. Specified in bytes, megabytes, or gigabytes by a number followed by the
            suffix <codeph>b</codeph> (optional), <codeph>m</codeph>, or <codeph>g</codeph>,
            either uppercase or lowercase. You can specify floating-point values for megabytes
            and gigabytes, to represent fractional numbers such as <codeph>1.5</codeph>. You can
            also specify it as a percentage of the physical memory by specifying the suffix
            <codeph>%</codeph>. 0 or no setting indicates no limit. Defaults to bytes if no unit
            is given. Because this limit applies cluster-wide, but each Impala node makes
            independent decisions to run queries immediately or queue them, it is a soft limit;
            the overall memory used by concurrent queries might be slightly higher during times
            of heavy load. Ignored if <codeph>fair_scheduler_config_path</codeph> and
            <codeph>llama_site_path</codeph> are set.
            <note conref="../shared/impala_common.xml#common/admission_compute_stats"/>
            <p conref="../shared/impala_common.xml#common/type_string"/>
            <p>
              <b>Default:</b> <codeph>""</codeph> (empty string, meaning unlimited)
            </p>
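            <p>
              For example, each of the following forms is accepted (the values are
              illustrative only):
            </p>
            <codeblock>--default_pool_mem_limit=100000000000
--default_pool_mem_limit=1.5g
--default_pool_mem_limit=60%</codeblock>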
          </dd>
        </dlentry>

        <dlentry>
          <dt>
            <codeph>disable_pool_max_requests</codeph>
          </dt>
          <dd>
            <b>Purpose:</b> Disables all per-pool limits on the maximum number of running
            requests.
            <p>
              <b>Type:</b> Boolean
            </p>
            <p>
              <b>Default:</b> <codeph>false</codeph>
            </p>
          </dd>
        </dlentry>

        <dlentry>
          <dt>
            <codeph>disable_pool_mem_limits</codeph>
          </dt>
          <dd>
            <b>Purpose:</b> Disables all per-pool memory limits.
            <p>
              <b>Type:</b> Boolean
            </p>
            <p>
              <b>Default:</b> <codeph>false</codeph>
            </p>
          </dd>
        </dlentry>

        <dlentry>
          <dt>
            <codeph>fair_scheduler_allocation_path</codeph>
          </dt>
          <dd>
            <b>Purpose:</b> Path to the fair scheduler allocation file
            (<codeph>fair-scheduler.xml</codeph>).
            <p conref="../shared/impala_common.xml#common/type_string"/>
            <p>
              <b>Default:</b> <codeph>""</codeph> (empty string)
            </p>
            <p>
              <b>Usage notes:</b> Admission control only uses a small subset of the settings
              that can go in this file, as described earlier in this topic. For details about
              all the Fair Scheduler configuration settings, see the
              <xref href="https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/FairScheduler.html#Configuration"
                scope="external" format="html">Apache wiki</xref>.
            </p>
          </dd>
        </dlentry>

        <dlentry>
          <dt>
            <codeph>llama_site_path</codeph>
          </dt>
          <dd>
            <b>Purpose:</b> Path to the configuration file used by admission control
            (<codeph>llama-site.xml</codeph>). If set,
            <codeph>fair_scheduler_allocation_path</codeph> must also be set.
            <p conref="../shared/impala_common.xml#common/type_string"/>
            <p>
              <b>Default:</b> <codeph>""</codeph> (empty string)
            </p>
            <p>
              <b>Usage notes:</b> Admission control only uses a few of the settings that can go
              in this file, as described earlier in this topic.
            </p>
          </dd>
        </dlentry>
      </dl>

    </conbody>

  </concept>

</concept>
|