This patch introduces new abstractions and changes the way queries are run via the
workload runner. A new class 'Workload' is introduced, which represents the notion of a
workload in the performance framework (i.e., a set of query names mapped to query
strings).
The new workflow is:
- run-workload acts as a driver. It accepts user parameters for which queries to
run and their execution strategy. It generates workload objects and passes them to the
workload-runner.
- The workload runner takes a workload, its execution parameters and generates a set of
test vectors over which the workload is run iteratively.
- A workload is executed by initializing a QueryExecutor for each query being run in a
test vector. Each QueryExecutor is then responsible for execution and for gathering
results.
- The execution details of every query executed are stored and returned to the
driver (run-workload). A minimal sketch of this flow follows below.
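A minimal, self-contained sketch of this flow; the names below (Workload,
QueryExecutor, generate_test_vectors, run_workload) are illustrative assumptions,
not the framework's actual classes:

    import itertools

    class Workload(object):
        """A set of query names mapped to query strings."""
        def __init__(self, name, queries):
            self.name = name
            self.queries = dict(queries)  # e.g. {'TPCH-Q1': 'select ...'}

    class QueryExecutor(object):
        """Executes one query under one test vector and records its details."""
        def __init__(self, query_name, query, vector):
            self.query_name, self.query, self.vector = query_name, query, vector

        def execute(self):
            # A real executor would submit self.query to Impala here.
            return {'query': self.query_name, 'vector': self.vector, 'time_s': 0.0}

    def generate_test_vectors(file_formats, codecs):
        """The cross product of execution dimensions forms the test vectors."""
        return [{'file_format': f, 'codec': c}
                for f, c in itertools.product(file_formats, codecs)]

    def run_workload(workload, file_formats, codecs):
        """Driver loop: run every query in the workload over every test vector."""
        results = []
        for vector in generate_test_vectors(file_formats, codecs):
            for name, query in sorted(workload.queries.items()):
                results.append(QueryExecutor(name, query, vector).execute())
        return results  # execution details are returned to the caller

    if __name__ == '__main__':
        tpch = Workload('tpch', {'TPCH-Q1': 'select 1', 'TPCH-Q6': 'select 6'})
        for result in run_workload(tpch, ['text', 'parquet'], ['none', 'snappy']):
            print(result)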
Change-Id: Ia16360140d65e6733e534e823bc5d5614622ab5f
Reviewed-on: http://gerrit.ent.cloudera.com:8080/3616
Reviewed-by: Taras Bobrovytsky <tbobrovytsky@cloudera.com>
Tested-by: jenkins
This patch makes the workload runner's logging more concise and informative. Specifically,
it
- logs the time taken for each iteration of a query.
- changes the default log level to INFO.
- makes the output less verbose.
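As an illustration of this logging style (a hedged sketch, not the runner's actual
code):

    import logging
    import time

    logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')
    LOG = logging.getLogger('workload_runner')

    def run_iterations(query_name, run_query, num_iterations):
        """Logs the time taken for each iteration of a query at INFO level."""
        for i in range(num_iterations):
            start = time.time()
            run_query()  # a callable that executes one iteration of the query
            LOG.info('%s: iteration %d took %.2fs', query_name, i + 1,
                     time.time() - start)

    # Example: run_iterations('TPCH-Q1', lambda: time.sleep(0.1), 3)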
Change-Id: I5f964cf76269fd64ce127b9e4c51fe1deafd1d1b
Reviewed-on: http://gerrit.ent.cloudera.com:8080/1076
Reviewed-by: Ishaan Joshi <ishaan@cloudera.com>
Tested-by: Ishaan Joshi <ishaan@cloudera.com>
At the moment, a query is the default unit of execution and parallelism in the Impala
performance suite. With this change, we now have the ability to treat a workload as the
unit of execution. A workload is defined as a unique combination of the dataset, scale
factor, a subset (or all) of the queries in the dataset, and a table format (file format,
compression codec and compression scheme).
It introduces two new command line options in bin/run-workload.py:
* --execution_scope
The default scope is 'query', and it maintains previous semantics. The
new scope is 'workload', which toggles the unit of execution to a workload.
* --shuffle_query_exec_order
Shuffles the order in which queries are executed (only applicable when the
execution_scope is 'workload'), defaults to False. A rough sketch of both options
follows below.
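A rough sketch of these options' semantics, assuming argparse-style parsing (the
real script's option handling may differ):

    import argparse
    import random

    parser = argparse.ArgumentParser()
    parser.add_argument('--execution_scope', choices=['query', 'workload'],
                        default='query',
                        help='Unit of execution: per query (default) or per workload.')
    parser.add_argument('--shuffle_query_exec_order', action='store_true',
                        default=False,
                        help='Shuffle query order; only applies when '
                             '--execution_scope is workload.')

    def query_exec_order(queries, args):
        """Returns the order in which the workload's queries should execute."""
        order = list(queries)
        if args.execution_scope == 'workload' and args.shuffle_query_exec_order:
            random.shuffle(order)
        return order

    # Example:
    # args = parser.parse_args(['--execution_scope', 'workload',
    #                           '--shuffle_query_exec_order'])
    # print(query_exec_order(['q1', 'q2', 'q3'], args))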
Change-Id: I790d75f0896210cda8eb999015b0be04246e4c45
Reviewed-on: http://gerrit.ent.cloudera.com:8080/503
Reviewed-by: Ishaan Joshi <ishaan@cloudera.com>
Tested-by: Ishaan Joshi <ishaan@cloudera.com>
This is the first set of changes required to start getting our functional test
infrastructure moved from JUnit to Python. After investigating a number of
options, I decided to go with a Python test executor named py.test
(http://pytest.org/). It is very flexible, open source (MIT licensed), and will
enable us to do some cool things like parallel test execution.
As part of this change, we now use our "test vectors" for query test execution.
This will be very nice because it means if you load the "core" dataset you know
you will be able to run the "core" query tests (specified by --exploration_strategy
when running the tests).
You will see that now each combination of table format + query exec options is
treated like an individual test case. This will make it much easier to debug
exactly where something failed.
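As a rough illustration (the table formats, exec options, and fixture machinery in
the real suite are richer than this sketch):

    import itertools
    import pytest

    TABLE_FORMATS = ['text/none', 'seq/snappy', 'rc/gzip']  # assumed values
    EXEC_OPTIONS = [{'num_nodes': 0}, {'num_nodes': 1}]     # assumed values

    @pytest.mark.parametrize('table_format,exec_options',
                             list(itertools.product(TABLE_FORMATS, EXEC_OPTIONS)))
    def test_query(table_format, exec_options):
        # Each (table format, exec options) pair shows up as its own test case,
        # so a failure pinpoints the exact combination. A real test would run a
        # query against the format with these options and compare results.
        assert table_format and exec_options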
These new tests can be run using the script at tests/run-tests.sh
This adds initial changes for the Impala failure testing library. It also refactors
run-workload into its own module so it can be used in other tests.
The failure testing has two main components. The first is an object model on top
of the Impala services in a cluster. This allows for enumerating the services in the
cluster and executing commands on remote machines. This initial cut is built on top of
the CM service to help with starting/stopping services. The long term goal is to let
this run on both CM and non-CM clusters as well as locally.
The other part of the failure injection change is the failure_injector module, which
uses the Impala service abstraction to select and inject failures into random Impala
services.
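A hedged sketch of what such an injection loop might look like; the Service class
and its kill/restart methods are assumptions standing in for the CM-backed object
model:

    import random
    import time

    class Service(object):
        """Stand-in for a service in the cluster object model."""
        def __init__(self, name, host):
            self.name, self.host = name, host

        def kill(self):
            print('killing %s on %s' % (self.name, self.host))     # would use CM/ssh

        def restart(self):
            print('restarting %s on %s' % (self.name, self.host))  # would use CM/ssh

    def inject_failures(services, iterations, interval_s=1.0):
        """Randomly picks a service, kills it, then restarts it after a pause."""
        for _ in range(iterations):
            victim = random.choice(services)
            victim.kill()
            time.sleep(interval_s)
            victim.restart()

    # Example:
    # inject_failures([Service('impalad', 'host1'), Service('impalad', 'host2')], 3)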
This failure testing framework hasn't been completely validated because the product code
is not yet ready, but it is important to get this checked in so all new changes to
run-workload are based off this refactor.
Change-Id: I73bf44f0ac881ec17bea7cb05d850b45e2ea5be5
Queries now return rows on both our small (query test) data set and the 10TB
data set. This change also fixes a problem with python not being set properly and
adds support for reporting query results using the geometric mean.
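For reference, the geometric mean of n positive query times is the n-th root of
their product; a minimal sketch of the computation:

    import math

    def geometric_mean(times):
        """Geometric mean, computed in log space to avoid overflow on long lists."""
        assert times and all(t > 0 for t in times)
        return math.exp(sum(math.log(t) for t in times) / len(times))

    # Example: geometric_mean([1.0, 4.0, 16.0]) == 4.0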
Change-Id: Ia432148d96645ecda3f63900b3bfbd29c706d886
This change cleans up run-workload to push more query execution logic into
query_executor. It also adds a new feature to run-workload that supports filtering
which file format / compression combinations to run on.
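A minimal sketch of that kind of filtering (the vector shape and parameter names
are assumptions):

    def filter_vectors(vectors, allowed_formats=None, allowed_codecs=None):
        """Keeps only the (file format, codec) combinations the user asked for."""
        return [v for v in vectors
                if (not allowed_formats or v['file_format'] in allowed_formats)
                and (not allowed_codecs or v['codec'] in allowed_codecs)]

    # Example:
    # filter_vectors([{'file_format': 'text', 'codec': 'none'},
    #                 {'file_format': 'parquet', 'codec': 'snappy'}],
    #                allowed_formats=['parquet'])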
This change updates run-workload to provide a more generic interface for query
execution. Now the query executor just takes an execution function and a new
QueryExecOptions object that defines the values to use for execution.
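A sketch of that interface shape; the QueryExecOptions fields and the executor
signature here are assumptions:

    class QueryExecOptions(object):
        """Holds the values to use for execution (illustrative fields only)."""
        def __init__(self, num_iterations=1, num_nodes=0):
            self.num_iterations = num_iterations
            self.num_nodes = num_nodes

    def execute_query(exec_fn, query, options):
        """Runs the query via the supplied execution function, once per iteration."""
        return [exec_fn(query, options) for _ in range(options.num_iterations)]

    # Example with a stub execution function:
    # execute_query(lambda q, o: len(q), 'select 1', QueryExecOptions(num_iterations=2))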
I also made a change to store partial result sets so we can salvage some work if
a run fails.
Now we save Hive results into a separate file (previously everything was stored
in the same file). Also added the ability to do a run-benchmark that skips Impala,
which will help generate Hive reference results.
Updated the reporting script to reflect this change.
This improves the summary reporting for perf results, fixes a problem with how the short query names were being
stored, and also adds support for running multiple workloads of different scale factors.
This change adds a -num_clients flag that specifies the number of clients
(threads) to use when executing each query in a workload. This is used to
validate Impala concurrency/stress. The logging was getting messed up with
multiple threads, so I also updated this to use the logging module.
Currently we only capture and save the results of the first thread that
executes. In the future we might want to update this to capture results from all
the threads.
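A hedged sketch of the multi-client pattern described here (flag handling
simplified; only the first thread's result is kept, as noted above):

    import logging
    import threading

    logging.basicConfig(level=logging.INFO, format='%(threadName)s %(message)s')
    LOG = logging.getLogger('run_workload')

    def run_with_clients(run_query, num_clients):
        """Runs the query with num_clients threads; saves the first result only."""
        results = {}
        lock = threading.Lock()

        def client(client_id):
            LOG.info('client %d starting', client_id)
            result = run_query()
            with lock:
                results.setdefault('first', result)  # first finisher wins

        threads = [threading.Thread(target=client, args=(i,), name='client-%d' % i)
                   for i in range(num_clients)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results.get('first')

    # Example: run_with_clients(lambda: 42, num_clients=4)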