Compare commits

..

112 Commits

Author SHA1 Message Date
github-actions[bot]
db4fdd003e Snapshot: 24.07.0-dev 2024-07-01 00:31:06 +00:00
Ezra Odio
4cb32fc1c3 Map() implementation fix for chart labels (#7022)
Co-authored-by: Ezra Odio <eodio@starfishstorage.com>
Co-authored-by: Eric Radman <eradman@starfishstorage.com>
2024-06-18 18:05:37 +00:00
dependabot[bot]
a6c728b99c Bump ws from 5.2.3 to 5.2.4 (#7021)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-18 02:25:45 +00:00
dependabot[bot]
01e036d0a9 Bump urllib3 from 1.26.18 to 1.26.19 (#7020)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-06-18 01:55:21 +00:00
Eric Radman
17fe69f551 PG: Only list tables where schema has USAGE permission (#7000)
This covers cases where partitioned tables are part of a schema that is
not accessible by the current user.

CREATE SCHEMA xyz;

CREATE TABLE xyz.tab (
   id bigint GENERATED ALWAYS AS IDENTITY,
   ts timestamp NOT NULL
) PARTITION BY LIST ((ts::date));

CREATE TABLE xyz.tab_default PARTITION OF xyz.tab DEFAULT;
2024-06-06 10:49:00 +10:00
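
For illustration, a minimal sketch of the kind of schema filter this commit describes, assuming a psycopg2 connection and PostgreSQL's built-in has_schema_privilege() function; it is not the exact query used in the PR:

import psycopg2  # assumes psycopg2 is installed and a local PostgreSQL instance is reachable

LIST_TABLES = """
SELECT table_schema, table_name
FROM information_schema.tables
WHERE has_schema_privilege(table_schema, 'USAGE')
  AND table_schema NOT IN ('pg_catalog', 'information_schema');
"""

with psycopg2.connect("dbname=postgres") as conn:
    with conn.cursor() as cur:
        cur.execute(LIST_TABLES)
        for schema, name in cur.fetchall():
            # Partitions under a schema the current role cannot USE never appear here.
            print(f"{schema}.{name}")
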
Ezra Odio
bceaab0496 Update to Python 3.10 (#6991)
Updated from Python 3.8 to 3.10. Python 3.10 is the default for Ubuntu 22. This change necessitated upgrading to
SQLAlchemy_Utils 0.38.3, and importing the sort_query function from an older version of SQLAlchemy_Utils because it was dropped in newer versions.

Co-authored-by: Ezra Odio <eodio@starfishstorage.com>
2024-06-05 17:41:49 +10:00
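
A minimal sketch of the import-fallback pattern the commit describes; the vendored module path (redash.utils.sort_query_compat) is a hypothetical name, not the one used in the PR:

try:
    # Present in older SQLAlchemy-Utils releases, dropped in newer ones.
    from sqlalchemy_utils import sort_query
except ImportError:
    # Hypothetical path for a vendored copy of the removed function.
    from redash.utils.sort_query_compat import sort_query
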
Lucas Fernando Cardoso Nunes
70dd05916f ci: bot identity correction (#6997)
Signed-off-by: Lucas Fernando Cardoso Nunes <lucasfc.nunes@gmail.com>
2024-06-02 06:54:08 +10:00
github-actions
60a12e906e Snapshot: 24.06.0-dev 2024-06-01 00:27:53 +00:00
dependabot[bot]
ec051a8939 --- (#6981)
updated-dependencies:
- dependency-name: requests
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-21 06:50:52 +00:00
Arik Fraimovich
60d3c66a8b Merge pull request from GHSA-32fw-wc7f-7qg9 2024-05-18 07:36:29 -07:00
Justin Clift
bd4ba96c43 Typo fix in message (#6979) 2024-05-18 08:18:37 +00:00
Eric Radman
10a46fd33c Revert "show pg and athena column comments and table descriptions as antd tooltip if they are defined (#6582)" (#6971)
This reverts commit c12d45077a.

This commit did not sort tables properly by schema, then name
2024-05-16 11:28:42 +08:00
Eric Radman
c874eb6b11 Revert changes to job status (#6969)
"Query in queue" should switch to "Executing query", but does not.

Commands:

git revert --no-commit bd17662005
git revert --no-commit 5ac5d86f5e
vim tests/handlers/test_query_results.py
git add tests/handlers/test_query_results.py

Co-authored-by: Justin Clift <justin@postgresql.org>
2024-05-14 22:06:45 -04:00
Taehyung Lim
f3a323695f Bump pyodbc from 4.0.28 to 5.1.0 (#6962) 2024-05-14 16:26:28 +00:00
Eric Radman
408ba78bd0 Update MSSQL ODBC driver to v18 (#6968) 2024-05-15 01:55:32 +10:00
Eric Radman
58cc49bc88 Revert build (2 of 2) (#6967) 2024-05-14 12:30:48 +00:00
Eric Radman
753ea846ff Revert CI workflow (1 of 2) (#6965) 2024-05-14 21:54:51 +10:00
Taehyung Lim
1b946b59ec sync .nvmrc with workflow (#6958) 2024-05-10 21:32:52 +10:00
dependabot[bot]
4569191113 Bump jinja2 from 3.1.3 to 3.1.4 (#6951)
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.3 to 3.1.4.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.3...3.1.4)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-07 07:14:20 +10:00
Justin Clift
62890c3ec4 Revert "Remove deprecated advocate package (#6944)"
This reverts commit bd115e7f5f, as
it turns out to be a useful security feature.

In order to remove this in a better way, we'll need to replace it
with something that provides equivalent functionality.
2024-05-07 03:20:05 +10:00
Andrii Chubatiuk
bd115e7f5f Remove deprecated advocate package (#6944) 2024-05-06 23:14:13 +10:00
Andrii Chubatiuk
bd17662005 Fixed error serialization (#6937)
* serialize errors

* lint fix

* cover successful case
2024-05-02 13:52:35 +00:00
Jason Cowley
b7f22b1896 Fix 'str' object has no attribute 'pop' error when parsing query (#6941) 2024-05-02 21:31:23 +10:00
Justin Clift
897c683980 pgautoupgrade now does multi-arch builds (#6939)
Thanks to substantial efforts by @andyundso, the Docker Hub
images for pgautoupgrade are now multi-arch (x86_64 and ARM64). :)
2024-05-01 23:03:06 +10:00
github-actions
2b974e12ed Snapshot: 24.05.0-dev 2024-05-01 00:26:34 +00:00
SeongTae Jeong
372adfed6b Downgrade 'codecov-action' version from v4 to v3 (#6930)
The 'codecov-action@v4' requires an organization-level upload token, not
a single repo upload token, so we're temporarily downgrading it until we
can generate an organization-level upload token.

Reference: https://github.com/codecov/codecov-action/issues/1273
2024-04-26 08:28:20 +00:00
Eric Radman
dbab9cadb4 Source .env when running docker containers (#6927)
Restore previous functionality.

Ensure .env exists before building server.

Co-authored-by: github-actions <github-actions@github.com>
2024-04-25 11:36:03 -04:00
Kim Yann
06244716e6 Flatten all level for MongoDB data source (#6844) 2024-04-25 11:37:35 +00:00
Luciano Vitti
f09760389a aggregate Y column values rather than displaying last Y value (#6908) 2024-04-25 06:21:31 +00:00
Eric Radman
84e6d3cad5 Use staticPath var to fetch unsupportedRedirect.js (#6923)
Use Webpack configuration for locating this asset in the same way that
client/app/index.html does.

This code path is used when REDASH_MULTI_ORG=true.

Co-authored-by: github-actions <github-actions@github.com>
2024-04-24 10:57:45 +00:00
Andrii Chubatiuk
3399e3761e mssql-odbc-arm64 (#6924)
Co-authored-by: Peter Lee <yankeeguyu@gmail.com>
2024-04-24 10:05:07 +00:00
Peter Lee
1c48b2218b Update widgets.py (#6926) 2024-04-24 19:37:35 +10:00
Andrii Chubatiuk
5ac5d86f5e consistent rq status naming and handling (#6913)
* consistent rq status naming and handling

* test fix

* make scheduled and deferred statuses cancelable
2024-04-24 13:15:04 +10:00
Marko Stankovic
5e4764af9c bugfix: unable to parse elasticsearch index mappings (#6918) 2024-04-23 18:13:05 +08:00
Eric Radman
e2a39de7d1 Remove workaround from check_csrf() (#6919)
This code was supposed to be temporary, and raises an exception if REDASH_MULTI_ORG=true is set.
2024-04-23 13:14:45 +10:00
Justin Clift
6c68b48917 Add pydeps Makefile target for installing Python dependencies (#6890)
This combines the manual steps needed for installing the Python dependencies into a single Makefile target.
2024-04-18 22:35:01 +10:00
Andrii Chubatiuk
7e8a61c73d Rq upgrade (#6902)
* fix(aws-es): fixed es auth

* fixed lock

* rq v1.16
2024-04-17 17:46:32 +10:00
dependabot[bot]
991e94dd6a Bump gunicorn from 21.2.0 to 22.0.0 (#6900)
Bumps [gunicorn](https://github.com/benoitc/gunicorn) from 21.2.0 to 22.0.0.
- [Release notes](https://github.com/benoitc/gunicorn/releases)
- [Commits](https://github.com/benoitc/gunicorn/compare/21.2.0...22.0.0)

---
updated-dependencies:
- dependency-name: gunicorn
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-17 14:00:59 +10:00
Andrii Chubatiuk
2ffeecb813 fix: aws elasticsearch typo (#6899)
Co-authored-by: Peter Lee <yankeeguyu@gmail.com>
2024-04-17 02:49:39 +00:00
dependabot[bot]
3dd855aef1 Bump sqlparse from 0.4.4 to 0.5.0 (#6895)
Bumps [sqlparse](https://github.com/andialbrecht/sqlparse) from 0.4.4 to 0.5.0.
- [Changelog](https://github.com/andialbrecht/sqlparse/blob/master/CHANGELOG)
- [Commits](https://github.com/andialbrecht/sqlparse/compare/0.4.4...0.5.0)

---
updated-dependencies:
- dependency-name: sqlparse
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-16 13:06:57 +10:00
Justin Clift
713aca440a Extend make up to automatically initialise the database (#6855) 2024-04-13 14:47:41 +10:00
dependabot[bot]
70bb684d9e Bump dnspython from 2.4.2 to 2.6.1 (#6886)
Bumps [dnspython](https://github.com/rthalley/dnspython) from 2.4.2 to 2.6.1.
- [Release notes](https://github.com/rthalley/dnspython/releases)
- [Changelog](https://github.com/rthalley/dnspython/blob/main/doc/whatsnew.rst)
- [Commits](https://github.com/rthalley/dnspython/compare/v2.4.2...v2.6.1)

---
updated-dependencies:
- dependency-name: dnspython
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-13 13:46:38 +10:00
dependabot[bot]
4034f791c3 Bump pymongo from 4.3.3 to 4.6.3 (#6863)
Bumps [pymongo](https://github.com/mongodb/mongo-python-driver) from 4.3.3 to 4.6.3.
- [Release notes](https://github.com/mongodb/mongo-python-driver/releases)
- [Changelog](https://github.com/mongodb/mongo-python-driver/blob/master/doc/changelog.rst)
- [Commits](https://github.com/mongodb/mongo-python-driver/compare/4.3.3...4.6.3)

---
updated-dependencies:
- dependency-name: pymongo
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-13 03:14:35 +00:00
Justin Clift
b9875a231b Improve the text displayed when using the command line (#6884)
This removes some debugging output, and makes an unexpected text
string useful by explaining what's happening.
2024-04-13 11:51:15 +10:00
Justin Clift
062a70cf20 Change default webUI port back to 5001 (#6883)
This PR changes the default (tcp) port for the web user interface back to port 5001.

The recent change to port 5000 (to match an old default) turned out to be more painful than it's worth.

So, lets keep using port 5001 after all.
2024-04-13 02:02:55 +10:00
Andrii Chubatiuk
c12d45077a show pg and athena column comments and table descriptions as antd tooltip if they are defined (#6582)
* show column comments by default for athena and postgres

* Restyled by prettier

* fixed typo

* fmt fix

* ordered imports

* fixed unit tests

* fixed tests for athena

---------

Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
Co-authored-by: Restyled.io <commits@restyled.io>
Co-authored-by: Andrii Chubatiuk <wachy@Andriis-MBP-2.lan>
2024-04-12 21:02:15 +10:00
snickerjp
6d6412753d Bump python-oracledb from 2.0.1 to 2.1.2 (#6881) 2024-04-12 10:33:16 +00:00
Andrii Chubatiuk
275e12e7c1 fix: unquote values in compose (#6882) 2024-04-12 20:05:24 +10:00
Andrii Chubatiuk
77d7508cee fixed local setup to run on ARM64 (#6877)
* fixed local setup to run on ARM64

* set local profile in makefile by default

* reverted compose comment for postgres command
2024-04-12 08:10:34 +00:00
Justin Clift
9601660751 Update Node image in Dockerfile to 18-bookworm 2024-04-12 15:08:56 +10:00
dependabot[bot]
45c6fa0591 Bump idna from 3.6 to 3.7 (#6878)
Bumps [idna](https://github.com/kjd/idna) from 3.6 to 3.7.
- [Release notes](https://github.com/kjd/idna/releases)
- [Changelog](https://github.com/kjd/idna/blob/master/HISTORY.rst)
- [Commits](https://github.com/kjd/idna/compare/v3.6...v3.7)

---
updated-dependencies:
- dependency-name: idna
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-12 14:58:04 +10:00
Andrii Chubatiuk
95ecb8e229 fix for coverage (#6872)
Co-authored-by: Andrii Chubatiuk <wachy@Andriis-MBP-2.lan>
2024-04-11 05:35:14 +00:00
dependabot[bot]
cb0707176c Bump tar from 6.1.15 to 6.2.1 (#6866)
Bumps [tar](https://github.com/isaacs/node-tar) from 6.1.15 to 6.2.1.
- [Release notes](https://github.com/isaacs/node-tar/releases)
- [Changelog](https://github.com/isaacs/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/isaacs/node-tar/compare/v6.1.15...v6.2.1)

---
updated-dependencies:
- dependency-name: tar
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-11 06:47:08 +10:00
Andrii Chubatiuk
d7247f8b84 use default docker repo name if variable is not defined (#6870)
Co-authored-by: Andrii Chubatiuk <wachy@Andriis-MBP-2.lan>
2024-04-11 06:15:43 +10:00
Andrii Chubatiuk
776703fab7 filter widget results to fix tests during repeatable execution (#6693)
* filter widget results to fix tests during repeatable execution

* minor fix

* made queryName variable

---------

Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
2024-04-10 23:14:47 +10:00
Andrii Chubatiuk
34cde71238 fix percy for a branch (#6868)
Co-authored-by: Andrii Chubatiuk <wachy@Andriis-MBP-2.lan>
2024-04-10 21:10:41 +10:00
Andrii Chubatiuk
f631075be3 reverted e2e secrets (#6867)
Co-authored-by: Andrii Chubatiuk <wachy@Andriis-MBP-2.lan>
2024-04-10 20:16:31 +10:00
Andrii Chubatiuk
3f19534301 reuse built frontend in ci, merge compose files (#6674)
* reuse built frontend in ci, merge compose files

* pr comments

* added make create_db alias to create_database

* fixed lint

---------

Co-authored-by: Andrii Chubatiuk <wachy@Andriis-MBP-2.lan>
2024-04-10 19:53:14 +10:00
Justin Clift
24dec192ee Update yarn to current latest in 1.22.x series (#6858)
* Update yarn to current latest in 1.22.x series

* Use an environment variable for the yarn version

As suggested by @lucydodo:

  https://github.com/getredash/redash/pull/6858#discussion_r1555131358

Thanks heaps. :)
2024-04-08 01:52:45 +00:00
Ran Benita
82d88ed4eb Bump gunicorn from 20.0.4 to 21.2.0 (#6856)
Version 20.0.4 has a security issue:
https://grenfeldt.dev/2021/04/01/gunicorn-20.0.4-request-smuggling/

Changelog:
https://docs.gunicorn.org/en/stable/news.html
2024-04-08 06:26:02 +10:00
Justin Clift
af0773c58a Update "make clean" to remove Redash dev Docker images (#6847)
Also added a "make clean-all" target to remove the related containers
2024-04-07 12:14:38 +10:00
Justin Clift
15e6583d72 Automatically use the latest version of PostgreSQL (#6851) 2024-04-05 10:46:12 -04:00
Justin Clift
4eb5f4e47f Remove version check and all of the data sharing (#6852) 2024-04-06 00:02:31 +10:00
Eric Radman
a0f5c706ff Remove Qubole query runner (#6848)
The qds-sdk-py package along with the rest of the Qubole project is no longer
maintained:

3c6a34ce33

Removing this eliminates these warnings when running Redash management commands:

./qds_sdk/commands.py:1124: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if options.mode is "1":
./qds_sdk/commands.py:1137: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if options.db_update_mode is "updateonly":
./qds_sdk/commands.py:1424: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if (total is 0) or (downloaded == total):

Co-authored-by: github-actions <github-actions@github.com>
2024-04-04 07:52:18 +10:00
Will Lachance
702a550659 Handle timedelta in query results (#6846) 2024-04-03 15:44:08 +00:00
Eric Radman
38a06c7ab9 Autoformat hyperlinks in Slack alerts (#6845)
Format the Slack message using the "mrkdwn" type, which will make
hyperlinks clickable.

New test for Slack destination.

Co-authored-by: github-actions <github-actions@github.com>
2024-04-03 01:31:55 +00:00
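
A minimal sketch of a Slack webhook payload using the "mrkdwn" text type so links render as clickable hyperlinks; the alert URL and webhook address are placeholders, and the exact payload shape Redash sends may differ:

import json
import urllib.request

payload = {
    "blocks": [
        {
            "type": "section",
            "text": {
                "type": "mrkdwn",  # lets Slack render <url|label> as a hyperlink
                "text": "Alert triggered: <https://redash.example.com/alerts/1|View alert>",
            },
        }
    ]
}
request = urllib.request.Request(
    "https://hooks.slack.com/services/XXX/YYY/ZZZ",  # placeholder webhook URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request)  # uncomment once a real webhook URL is in place
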
Eric Radman
a6074878bb Use setup-python@v5, setup-node@v4 (#6842)
To avoid warnings in the CI pipeline

> Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20

Co-authored-by: github-actions <github-actions@github.com>
2024-04-01 23:37:37 +00:00
github-actions
fb348c7116 Snapshot: 24.04.0-dev 2024-04-01 00:27:13 +00:00
Will Lachance
24419863ec Handle decimal types in query results (#6837)
Since #6687, we don't serialize query results as JSON
before returning them. This is fine, except for the
query results data source which needs to pass the
data directly to sqlite3, and doesn't know how to
do that with the decimal types that are occasionally
returned by (at least) the PostgreSQL query runner:

https://www.psycopg.org/docs/faq.html#problems-with-type-conversions
2024-03-29 17:51:14 +10:00
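
A minimal sketch of one way to handle the problem the commit describes: teach sqlite3 how to store decimal.Decimal values before inserting query-result rows. This illustrates the issue; it is not necessarily the fix used in the PR:

import sqlite3
from decimal import Decimal

# Without an adapter, inserting a Decimal fails with an "unsupported type" error.
sqlite3.register_adapter(Decimal, float)  # lossy, but enough for display purposes

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (amount REAL)")
conn.execute("INSERT INTO results VALUES (?)", (Decimal("12.34"),))
print(conn.execute("SELECT amount FROM results").fetchone())  # (12.34,)
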
dependabot[bot]
c4d3d9c683 Bump express from 4.18.2 to 4.19.2 (#6838)
Bumps [express](https://github.com/expressjs/express) from 4.18.2 to 4.19.2.
- [Release notes](https://github.com/expressjs/express/releases)
- [Changelog](https://github.com/expressjs/express/blob/master/History.md)
- [Commits](https://github.com/expressjs/express/compare/4.18.2...4.19.2)

---
updated-dependencies:
- dependency-name: express
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-29 17:14:56 +10:00
dependabot[bot]
1672cd9280 Bump webpack-dev-middleware from 5.3.3 to 5.3.4 (#6829)
Bumps [webpack-dev-middleware](https://github.com/webpack/webpack-dev-middleware) from 5.3.3 to 5.3.4.
- [Release notes](https://github.com/webpack/webpack-dev-middleware/releases)
- [Changelog](https://github.com/webpack/webpack-dev-middleware/blob/v5.3.4/CHANGELOG.md)
- [Commits](https://github.com/webpack/webpack-dev-middleware/compare/v5.3.3...v5.3.4)

---
updated-dependencies:
- dependency-name: webpack-dev-middleware
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-24 10:20:14 +10:00
Eric Radman
6575a6499a BigQuery: use default for useQueryAnnotation option (#6824)
This option may not be set after an upgrade

Co-authored-by: github-actions <github-actions@github.com>
2024-03-21 20:36:09 +00:00
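
A minimal sketch of reading the option with a default so data sources configured before the option existed keep working after an upgrade; the key name is taken from the commit title and the helper is illustrative:

def use_query_annotation(configuration: dict) -> bool:
    # Fall back to False when the option was never saved (e.g. right after an upgrade).
    return configuration.get("useQueryAnnotation", False)

print(use_query_annotation({}))                            # False
print(use_query_annotation({"useQueryAnnotation": True}))  # True
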
dependabot[bot]
e360e4658e Bump jwcrypto from 1.5.1 to 1.5.6 (#6816)
Bumps [jwcrypto](https://github.com/latchset/jwcrypto) from 1.5.1 to 1.5.6.
- [Release notes](https://github.com/latchset/jwcrypto/releases)
- [Commits](https://github.com/latchset/jwcrypto/compare/v1.5.1...v1.5.6)

---
updated-dependencies:
- dependency-name: jwcrypto
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 07:40:23 +10:00
dependabot[bot]
107933c363 Bump follow-redirects from 1.15.5 to 1.15.6 in /viz-lib (#6813)
Bumps [follow-redirects](https://github.com/follow-redirects/follow-redirects) from 1.15.5 to 1.15.6.
- [Release notes](https://github.com/follow-redirects/follow-redirects/releases)
- [Commits](https://github.com/follow-redirects/follow-redirects/compare/v1.15.5...v1.15.6)

---
updated-dependencies:
- dependency-name: follow-redirects
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Peter Lee <yankeeguyu@gmail.com>
2024-03-19 23:38:47 +10:00
Vladislav Denisov
667a696ca5 ClickHouse query runner: fixed error message (#6764)
* Snapshot: 23.11.0-dev

* Snapshot: 23.12.0-dev

* Snapshot: 24.01.0-dev

* Snapshot: 24.02.0-dev

* clickhouse: check for `exception` field in response

---------

Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Vladislav Denisov <denisov@sports.ru>
Co-authored-by: Guido Petri <18634426+guidopetri@users.noreply.github.com>
Co-authored-by: Peter Lee <yankeeguyu@gmail.com>
2024-03-18 02:15:49 +10:00
Robin Edwards
7d0d242072 schedule may not contain an until key (#6771) 2024-03-15 09:00:19 +10:00
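
A minimal sketch of the defensive lookup the commit title implies: a schedule dict may omit the "until" key, so read it with .get() instead of indexing. The field names are illustrative:

def schedule_until(schedule: dict):
    # Returns None instead of raising KeyError when "until" is absent.
    return schedule.get("until")

print(schedule_until({"interval": 3600}))                         # None
print(schedule_until({"interval": 3600, "until": "2024-12-31"}))  # 2024-12-31
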
Arun Govind M
d554136f70 fix: Uncaught rejection promise error in Edit Visualization Dialog Modal (#6794) 2024-03-07 10:01:51 +10:00
Stefan Negele
34723e2f3e Add RisingWave support (#6776) 2024-03-05 17:29:47 +08:00
dependabot[bot]
11794b3fe3 Bump es5-ext from 0.10.53 to 0.10.63 in /viz-lib (#6782)
Bumps [es5-ext](https://github.com/medikoo/es5-ext) from 0.10.53 to 0.10.63.
- [Release notes](https://github.com/medikoo/es5-ext/releases)
- [Changelog](https://github.com/medikoo/es5-ext/blob/main/CHANGELOG.md)
- [Commits](https://github.com/medikoo/es5-ext/compare/v0.10.53...v0.10.63)

---
updated-dependencies:
- dependency-name: es5-ext
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-02 04:46:01 +00:00
Eric Radman
3997916d77 Always push images to hub.docker.com/u/redash (#6792)
Using the github.repository_owner name was convenient for testing this
action, but it is not correct since the account names do not match.

GitHub: getredash/
Docker Hub: redash/

Co-authored-by: github-actions <github-actions@github.com>
2024-03-01 21:42:15 +00:00
dependabot[bot]
b09a2256dc Bump es5-ext from 0.10.53 to 0.10.63 (#6784)
Bumps [es5-ext](https://github.com/medikoo/es5-ext) from 0.10.53 to 0.10.63.
- [Release notes](https://github.com/medikoo/es5-ext/releases)
- [Changelog](https://github.com/medikoo/es5-ext/blob/main/CHANGELOG.md)
- [Commits](https://github.com/medikoo/es5-ext/compare/v0.10.53...v0.10.63)

---
updated-dependencies:
- dependency-name: es5-ext
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-01 18:51:54 +00:00
Eric Radman
95a45bb4dc Snapshot: 24.03.0-dev (#6791)
Co-authored-by: github-actions <github-actions@github.com>
2024-03-02 04:21:22 +10:00
Eric Radman
7cd03c797c Publish preview Docker image when release candidate is tagged (#6787)
* Only respond to new tags ending with -dev
* Use github account name to allow easier testing in a fork
* Allow preview image to be referenced by a specific tag, or by latest tag

redash/preview:24.02.0-dev
redash/redash:preview

Co-authored-by: github-actions <github-actions@github.com>
2024-03-01 16:05:12 +00:00
Eric Radman
1200f9887a Use SSH deployment key to bump version and tag release candidate (#6789)
This allows the workflow to push directly, even though normal contributors
are required to create a pull request.

Steps:

1. Generate SSH key pair: ssh-keygen -t ed25519. No need for passphrases etc.
2. Add public key (.pub one) as a deploy key at Your repo -> Settings ->
   Security -> Deploy keys, check "Allow write access".
3. Add private key as a secret at Your repo -> Settings -> Security -> Secrets
   and variables -> Actions

https://stackoverflow.com/a/76135647/1809872

Co-authored-by: github-actions <github-actions@github.com>
2024-03-02 00:51:51 +10:00
Dirk van Donkelaar
81d22f1eb2 Add limit option for MSSQL query runner (#6704)
* Add limit option for MSSQL query runner

* Fixed linting errors
2024-02-27 06:16:54 +10:00
Eric Radman
2fe0326280 Allow WebPack configuration when building in Docker (#6780)
Co-authored-by: github-actions <github-actions@github.com>
2024-02-27 05:28:39 +10:00
Andrii Chubatiuk
094984f564 Node 18 (#6752)
* Snapshot: 24.02.0-dev

* node-18

---------

Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
2024-02-22 22:10:20 +00:00
dependabot[bot]
52cd6ff006 Bump axios from 0.27.2 to 0.28.0 in /viz-lib (#6775)
Bumps [axios](https://github.com/axios/axios) from 0.27.2 to 0.28.0.
- [Release notes](https://github.com/axios/axios/releases)
- [Changelog](https://github.com/axios/axios/blob/v0.28.0/CHANGELOG.md)
- [Commits](https://github.com/axios/axios/compare/v0.27.2...v0.28.0)

---
updated-dependencies:
- dependency-name: axios
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-23 04:01:43 +10:00
Andrii Chubatiuk
939bec2114 Load custom encoder only when runner enabled (#6748)
* Snapshot: 24.02.0-dev

* load encoders only for enabled runners

* try importing within init

---------

Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
Co-authored-by: Guido Petri <18634426+guidopetri@users.noreply.github.com>
2024-02-09 21:49:59 -05:00
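
A minimal sketch of the idea in the bullet points above: try importing a runner-specific dependency inside an init/setup step and only register the custom encoder when the import succeeds. The names are illustrative, not the ones used in the PR:

def load_custom_encoder():
    try:
        from bson import ObjectId  # only importable when the MongoDB runner's deps are installed
    except ImportError:
        return None

    def encode(obj):
        if isinstance(obj, ObjectId):
            return str(obj)
        raise TypeError(f"Unserializable object {obj!r}")

    return encode

encoder = load_custom_encoder()
print("custom encoder registered" if encoder else "runner disabled; using default encoder")
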
Andrii Chubatiuk
320fddfd52 Changed checkout commit (#6749)
* Snapshot: 24.02.0-dev

* changed checkout commit

---------

Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
2024-02-07 03:15:13 +10:00
Guido Petri
ab39283ae6 fix linting (#6745) 2024-02-05 19:47:25 -06:00
Eric Radman
6386905616 Revert example message for "Custom rule for hiding filter" (#6709)
Partially reverts
 Hide filter components on shared pages
 https://github.com/getredash/redash/pull/6115

The "hide_filter" feature is incomplete, and the example text is
confusing and adds clutter to the share dashboard dialog.

Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Guido Petri <18634426+guidopetri@users.noreply.github.com>
2024-02-05 20:38:17 -05:00
Andrii Chubatiuk
d986b976e5 fixed custom json encoders (#6741)
Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
2024-02-05 20:34:45 -05:00
Lucas Fernando Cardoso Nunes
a600921c0b feat: avoid npm usage (#6742)
Signed-off-by: Lucas Fernando Cardoso Nunes <lucasfc.nunes@gmail.com>
2024-02-05 20:31:26 -05:00
Lucas Fernando Cardoso Nunes
af2f4af8a2 refactor: use docker compose recommendations (#5854)
Signed-off-by: Lucas Fernando Cardoso Nunes <lucasfc.nunes@gmail.com>
2024-02-04 07:22:40 -06:00
Eric Radman
49a5e74283 Snapshot: 24.02.0-dev (#6740)
Co-authored-by: github-actions <github-actions@github.com>
2024-02-03 01:03:46 +00:00
Andrii Chubatiuk
b98b5f2ba4 Switch to pull_request_target events to hide cypress secrets (#6716)
Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
Co-authored-by: Justin Clift <justin@postgresql.org>
2024-01-30 14:51:46 +10:00
Dirk van Donkelaar
d245ff7bb1 Fixed notification template (#6721)
* Fixed notification template

* Made if-clause equal to append

Like Slack and email notification

* Add custom_body attribute to discord test

* Add missing attribute
2024-01-29 22:15:42 -05:00
Andrii Chubatiuk
97db492531 Removed unused configuration class (#6682)
Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
2024-01-18 21:50:27 +10:00
Dirk van Donkelaar
30e7392933 Fix UnicodeEncodeError for non-Latin characters (#6715) 2024-01-18 08:15:11 +00:00
Andrii Chubatiuk
a54171f2c2 cast query_results data back to text (#6713)
Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
2024-01-18 11:13:47 +10:00
Robin Edwards
cd03da3260 Cast to JSONB type to match altered column type (#6707) 2024-01-16 02:23:35 +10:00
dependabot[bot]
4c47bef582 Bump jinja2 from 3.1.2 to 3.1.3 (#6700)
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.2 to 3.1.3.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.2...3.1.3)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Justin Clift <justin@postgresql.org>
2024-01-12 10:08:25 +10:00
Andrii Chubatiuk
ec1c4d07de Removed pseudojson class, converted all options and other json columns to jsonb ones (#6687)
Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
2024-01-12 09:02:00 +10:00
Andrii Chubatiuk
4d5103978b Removed simplejson (#6685)
* removed simplejson

* minor fix

* fixed lint

---------

Co-authored-by: Andrii Chubatiuk <wachy@Andriis-MBP-2.lan>
Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
2024-01-11 17:47:33 +10:00
dependabot[bot]
3c2c2786ed Bump follow-redirects from 1.15.2 to 1.15.4 in /viz-lib (#6699)
Bumps [follow-redirects](https://github.com/follow-redirects/follow-redirects) from 1.15.2 to 1.15.4.
- [Release notes](https://github.com/follow-redirects/follow-redirects/releases)
- [Commits](https://github.com/follow-redirects/follow-redirects/compare/v1.15.2...v1.15.4)

---
updated-dependencies:
- dependency-name: follow-redirects
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-10 21:15:45 +00:00
snickerjp
cd482e780a Bump python-oracledb from 1.4.0 to 2.0.1 (#6698) 2024-01-10 20:44:18 +00:00
Eric Radman
4d81c3148d Update query hash with parameters applied (#6683)
This allows queries with parameters to run on a schedule since the hash
used to update the query_result will match.

Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Guido Petri <18634426+guidopetri@users.noreply.github.com>
2024-01-10 03:46:31 +00:00
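
A minimal sketch of the behaviour described above: the hash is computed over the query text after parameters are applied, so a scheduled run produces the same hash as the interactive run and matches the stored query_result. apply_parameters() is a stand-in for Redash's real parameter handling:

import hashlib

def apply_parameters(text: str, parameters: dict) -> str:
    # Naive "{{ name }}" substitution; Redash's actual templating is richer.
    for name, value in parameters.items():
        text = text.replace("{{ %s }}" % name, str(value))
    return text

def query_hash(text: str, parameters: dict) -> str:
    return hashlib.md5(apply_parameters(text, parameters).encode("utf-8")).hexdigest()

print(query_hash("SELECT * FROM sales WHERE country = '{{ country }}'", {"country": "US"}))
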
Eric Radman
1b1b9bd98d Import all Plotly visualizations (#6673)
- Accessible using the Custom chart type
- Disable 'fs' and 'path' modules which are available in node, but not
  on the frontend.

Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Guido Petri <18634426+guidopetri@users.noreply.github.com>
2024-01-07 20:19:58 +00:00
dependabot[bot]
473cf29c9f Bump pycryptodome from 3.19.0 to 3.19.1 (#6694)
Bumps [pycryptodome](https://github.com/Legrandin/pycryptodome) from 3.19.0 to 3.19.1.
- [Release notes](https://github.com/Legrandin/pycryptodome/releases)
- [Changelog](https://github.com/Legrandin/pycryptodome/blob/master/Changelog.rst)
- [Commits](https://github.com/Legrandin/pycryptodome/compare/v3.19.0...v3.19.1)

---
updated-dependencies:
- dependency-name: pycryptodome
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-06 10:33:30 +10:00
Andrii Chubatiuk
cbde237b12 removed explicit object inheritance (#6686)
* removed explicit object inheritance

* minor fix

* pr comments

---------

Co-authored-by: Andrew Chubatiuk <andrew.chubatiuk@motional.com>
2024-01-05 19:52:02 +09:00
Eric Radman
998dc31eb0 Snapshot: 24.01.0-dev (#6681)
Co-authored-by: github-actions <github-actions@github.com>
2024-01-03 00:43:01 +10:00
180 changed files with 2534 additions and 2121 deletions

View File

@@ -1,11 +1,11 @@
FROM cypress/browsers:node16.18.0-chrome90-ff88
FROM cypress/browsers:node18.12.0-chrome106-ff106
ENV APP /usr/src/app
WORKDIR $APP
COPY package.json yarn.lock .yarnrc $APP/
COPY viz-lib $APP/viz-lib
RUN npm install yarn@1.22.19 -g && yarn --frozen-lockfile --network-concurrency 1 > /dev/null
RUN npm install yarn@1.22.22 -g && yarn --frozen-lockfile --network-concurrency 1 > /dev/null
COPY . $APP

View File

@@ -1,4 +1,3 @@
version: '2.2'
services:
redash:
build: ../
@@ -19,7 +18,7 @@ services:
image: redis:7-alpine
restart: unless-stopped
postgres:
image: pgautoupgrade/pgautoupgrade:15-alpine3.8
image: pgautoupgrade/pgautoupgrade:latest
command: "postgres -c fsync=off -c full_page_writes=off -c synchronous_commit=OFF"
restart: unless-stopped
environment:

View File

@@ -1,4 +1,3 @@
version: "2.2"
x-redash-service: &redash-service
build:
context: ../
@@ -67,7 +66,7 @@ services:
image: redis:7-alpine
restart: unless-stopped
postgres:
image: pgautoupgrade/pgautoupgrade:15-alpine3.8
image: pgautoupgrade/pgautoupgrade:latest
command: "postgres -c fsync=off -c full_page_writes=off -c synchronous_commit=OFF"
restart: unless-stopped
environment:

View File

@@ -1,5 +1,4 @@
client/.tmp/
client/dist/
node_modules/
viz-lib/node_modules/
.tmp/

View File

@@ -3,17 +3,24 @@ on:
push:
branches:
- master
pull_request:
pull_request_target:
branches:
- master
env:
NODE_VERSION: 16.20.1
NODE_VERSION: 18
YARN_VERSION: 1.22.22
jobs:
backend-lint:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
- if: github.event.pull_request.mergeable == 'false'
name: Exit if PR is not mergeable
run: exit 1
- uses: actions/checkout@v4
with:
fetch-depth: 1
- uses: actions/setup-python@v4
ref: ${{ github.event.pull_request.head.sha }}
- uses: actions/setup-python@v5
with:
python-version: '3.8'
- run: sudo pip install black==23.1.0 ruff==0.0.287
@@ -24,14 +31,18 @@ jobs:
runs-on: ubuntu-22.04
needs: backend-lint
env:
COMPOSE_FILE: .ci/docker-compose.ci.yml
COMPOSE_FILE: .ci/compose.ci.yaml
COMPOSE_PROJECT_NAME: redash
COMPOSE_DOCKER_CLI_BUILD: 1
DOCKER_BUILDKIT: 1
steps:
- uses: actions/checkout@v3
- if: github.event.pull_request.mergeable == 'false'
name: Exit if PR is not mergeable
run: exit 1
- uses: actions/checkout@v4
with:
fetch-depth: 1
ref: ${{ github.event.pull_request.head.sha }}
- name: Build Docker Images
run: |
set -x
@@ -65,16 +76,20 @@ jobs:
frontend-lint:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
- if: github.event.pull_request.mergeable == 'false'
name: Exit if PR is not mergeable
run: exit 1
- uses: actions/checkout@v4
with:
fetch-depth: 1
- uses: actions/setup-node@v3
ref: ${{ github.event.pull_request.head.sha }}
- uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'yarn'
- name: Install Dependencies
run: |
npm install --global --force yarn@1.22.19
npm install --global --force yarn@$YARN_VERSION
yarn cache clean && yarn --frozen-lockfile --network-concurrency 1
- name: Run Lint
run: yarn lint:ci
@@ -88,16 +103,20 @@ jobs:
runs-on: ubuntu-22.04
needs: frontend-lint
steps:
- uses: actions/checkout@v3
- if: github.event.pull_request.mergeable == 'false'
name: Exit if PR is not mergeable
run: exit 1
- uses: actions/checkout@v4
with:
fetch-depth: 1
- uses: actions/setup-node@v3
ref: ${{ github.event.pull_request.head.sha }}
- uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'yarn'
- name: Install Dependencies
run: |
npm install --global --force yarn@1.22.19
npm install --global --force yarn@$YARN_VERSION
yarn cache clean && yarn --frozen-lockfile --network-concurrency 1
- name: Run App Tests
run: yarn test
@@ -109,18 +128,22 @@ jobs:
runs-on: ubuntu-22.04
needs: frontend-lint
env:
COMPOSE_FILE: .ci/docker-compose.cypress.yml
COMPOSE_FILE: .ci/compose.cypress.yaml
COMPOSE_PROJECT_NAME: cypress
PERCY_TOKEN_ENCODED: ZGRiY2ZmZDQ0OTdjMzM5ZWE0ZGQzNTZiOWNkMDRjOTk4Zjg0ZjMxMWRmMDZiM2RjOTYxNDZhOGExMjI4ZDE3MA==
CYPRESS_PROJECT_ID_ENCODED: OTI0Y2th
CYPRESS_RECORD_KEY_ENCODED: YzA1OTIxMTUtYTA1Yy00NzQ2LWEyMDMtZmZjMDgwZGI2ODgx
CYPRESS_INSTALL_BINARY: 0
PUPPETEER_SKIP_CHROMIUM_DOWNLOAD: 1
PERCY_TOKEN: ${{ secrets.PERCY_TOKEN }}
CYPRESS_PROJECT_ID: ${{ secrets.CYPRESS_PROJECT_ID }}
CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
steps:
- uses: actions/checkout@v3
- if: github.event.pull_request.mergeable == 'false'
name: Exit if PR is not mergeable
run: exit 1
- uses: actions/checkout@v4
with:
fetch-depth: 1
- uses: actions/setup-node@v3
ref: ${{ github.event.pull_request.head.sha }}
- uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'yarn'
@@ -130,7 +153,7 @@ jobs:
echo "CODE_COVERAGE=true" >> "$GITHUB_ENV"
- name: Install Dependencies
run: |
npm install --global --force yarn@1.22.19
npm install --global --force yarn@$YARN_VERSION
yarn cache clean && yarn --frozen-lockfile --network-concurrency 1
- name: Setup Redash Server
run: |
@@ -150,89 +173,3 @@ jobs:
with:
name: coverage
path: coverage
build-skip-check:
runs-on: ubuntu-22.04
outputs:
skip: ${{ steps.skip-check.outputs.skip }}
steps:
- name: Skip?
id: skip-check
run: |
if [[ "${{ vars.DOCKER_USER }}" == '' ]]; then
echo 'Docker user is empty. Skipping build+push'
echo skip=true >> "$GITHUB_OUTPUT"
elif [[ "${{ secrets.DOCKER_PASS }}" == '' ]]; then
echo 'Docker password is empty. Skipping build+push'
echo skip=true >> "$GITHUB_OUTPUT"
elif [[ "${{ github.ref_name }}" != 'master' ]]; then
echo 'Ref name is not `master`. Skipping build+push'
echo skip=true >> "$GITHUB_OUTPUT"
else
echo 'Docker user and password are set and branch is `master`.'
echo 'Building + pushing `preview` image.'
echo skip=false >> "$GITHUB_OUTPUT"
fi
build-docker-image:
runs-on: ubuntu-22.04
needs:
- backend-unit-tests
- frontend-unit-tests
- frontend-e2e-tests
- build-skip-check
if: needs.build-skip-check.outputs.skip == 'false'
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 1
- uses: actions/setup-node@v3
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'yarn'
- name: Install Dependencies
run: |
npm install --global --force yarn@1.22.19
yarn cache clean && yarn --frozen-lockfile --network-concurrency 1
- name: Set up QEMU
timeout-minutes: 1
uses: docker/setup-qemu-action@v2.2.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ vars.DOCKER_USER }}
password: ${{ secrets.DOCKER_PASS }}
- name: Bump version
id: version
run: |
set -x
.ci/update_version
VERSION=$(jq -r .version package.json)
VERSION_TAG="${VERSION}.b${GITHUB_RUN_ID}.${GITHUB_RUN_NUMBER}"
echo "VERSION_TAG=$VERSION_TAG" >> "$GITHUB_OUTPUT"
- name: Build and push preview image to Docker Hub
uses: docker/build-push-action@v4
with:
push: true
tags: |
redash/redash:preview
redash/preview:${{ steps.version.outputs.VERSION_TAG }}
context: .
build-args: |
test_all_deps=true
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: linux/amd64
env:
DOCKER_CONTENT_TRUST: true
- name: "Failure: output container logs to console"
if: failure()
run: docker compose logs

View File

@@ -3,7 +3,7 @@ name: Periodic Snapshot
# 10 minutes after midnight on the first of every month
on:
schedule:
- cron: "10 0 1 * *"
- cron: '10 0 1 * *'
permissions:
contents: write
@@ -13,14 +13,18 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
ssh-key: ${{secrets.ACTION_PUSH_KEY}}
- run: |
date="$(date +%y.%m).0-dev"
gawk -i inplace -F: -v q=\" -v tag=$date '/^ "version": / { print $1 FS, q tag q ","; next} { print }' package.json
gawk -i inplace -F= -v q=\" -v tag=$date '/^__version__ =/ { print $1 FS, q tag q; next} { print }' redash/__init__.py
gawk -i inplace -F= -v q=\" -v tag=$date '/^version =/ { print $1 FS, q tag q; next} { print }' pyproject.toml
git config user.name github-actions
git config user.email github-actions@github.com
# https://api.github.com/users/github-actions[bot]
git config user.name 'github-actions[bot]'
git config user.email '41898282+github-actions[bot]@users.noreply.github.com'
TAG_NAME="$(date +%y.%m).0-dev"
gawk -i inplace -F: -v q=\" -v tag=${TAG_NAME} '/^ "version": / { print $1 FS, q tag q ","; next} { print }' package.json
gawk -i inplace -F= -v q=\" -v tag=${TAG_NAME} '/^__version__ =/ { print $1 FS, q tag q; next} { print }' redash/__init__.py
gawk -i inplace -F= -v q=\" -v tag=${TAG_NAME} '/^version =/ { print $1 FS, q tag q; next} { print }' pyproject.toml
git add package.json redash/__init__.py pyproject.toml
git commit -m "Snapshot: ${date}"
git tag $date
git push --atomic origin master refs/tags/$date
git commit -m "Snapshot: ${TAG_NAME}"
git tag ${TAG_NAME}
git push --atomic origin master refs/tags/${TAG_NAME}

.github/workflows/preview-image.yml (new file, 87 lines added)
View File

@@ -0,0 +1,87 @@
name: Preview Image
on:
push:
tags:
- '*-dev'
env:
NODE_VERSION: 18
jobs:
build-skip-check:
runs-on: ubuntu-22.04
outputs:
skip: ${{ steps.skip-check.outputs.skip }}
steps:
- name: Skip?
id: skip-check
run: |
if [[ "${{ vars.DOCKER_USER }}" == '' ]]; then
echo 'Docker user is empty. Skipping build+push'
echo skip=true >> "$GITHUB_OUTPUT"
elif [[ "${{ secrets.DOCKER_PASS }}" == '' ]]; then
echo 'Docker password is empty. Skipping build+push'
echo skip=true >> "$GITHUB_OUTPUT"
else
echo 'Docker user and password are set and branch is `master`.'
echo 'Building + pushing `preview` image.'
echo skip=false >> "$GITHUB_OUTPUT"
fi
build-docker-image:
runs-on: ubuntu-22.04
needs:
- build-skip-check
if: needs.build-skip-check.outputs.skip == 'false'
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 1
ref: ${{ github.event.push.after }}
- uses: actions/setup-node@v4
with:
node-version: ${{ env.NODE_VERSION }}
cache: 'yarn'
- name: Install Dependencies
run: |
npm install --global --force yarn@1.22.22
yarn cache clean && yarn --frozen-lockfile --network-concurrency 1
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ vars.DOCKER_USER }}
password: ${{ secrets.DOCKER_PASS }}
- name: Set version
id: version
run: |
set -x
.ci/update_version
VERSION_TAG=$(jq -r .version package.json)
echo "VERSION_TAG=$VERSION_TAG" >> "$GITHUB_OUTPUT"
- name: Build and push preview image to Docker Hub
uses: docker/build-push-action@v4
with:
push: true
tags: |
redash/redash:preview
redash/preview:${{ steps.version.outputs.VERSION_TAG }}
context: .
build-args: |
test_all_deps=true
cache-from: type=gha
cache-to: type=gha,mode=max
platforms: linux/amd64
env:
DOCKER_CONTENT_TRUST: true
- name: "Failure: output container logs to console"
if: failure()
run: docker compose logs

.npmrc (new file, 1 line added)
View File

@@ -0,0 +1 @@
engine-strict = true

.nvmrc (2 lines changed)
View File

@@ -1 +1 @@
v16.20.1
v18

View File

@@ -1,6 +1,6 @@
FROM node:16.20.1-bookworm as frontend-builder
FROM node:18-bookworm as frontend-builder
RUN npm install --global --force yarn@1.22.19
RUN npm install --global --force yarn@1.22.22
# Controls whether to build the frontend assets
ARG skip_frontend_build
@@ -14,6 +14,7 @@ USER redash
WORKDIR /frontend
COPY --chown=redash package.json yarn.lock .yarnrc /frontend/
COPY --chown=redash viz-lib /frontend/viz-lib
COPY --chown=redash scripts /frontend/scripts
# Controls whether to instrument code for coverage information
ARG code_coverage
@@ -25,7 +26,7 @@ COPY --chown=redash client /frontend/client
COPY --chown=redash webpack.config.js /frontend/
RUN if [ "x$skip_frontend_build" = "x" ] ; then yarn build; else mkdir -p /frontend/client/dist && touch /frontend/client/dist/multi_org.html && touch /frontend/client/dist/index.html; fi
FROM python:3.8-slim-bookworm
FROM python:3.10-slim-bookworm
EXPOSE 5000
@@ -67,7 +68,7 @@ RUN if [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor -o /usr/share/keyrings/microsoft-prod.gpg \
&& curl https://packages.microsoft.com/config/debian/12/prod.list > /etc/apt/sources.list.d/mssql-release.list \
&& apt-get update \
&& ACCEPT_EULA=Y apt-get install -y --no-install-recommends msodbcsql17 \
&& ACCEPT_EULA=Y apt-get install -y --no-install-recommends msodbcsql18 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& curl "$databricks_odbc_driver_url" --location --output /tmp/simba_odbc.zip \

View File

@@ -1,4 +1,4 @@
.PHONY: compose_build up test_db create_database clean down tests lint backend-unit-tests frontend-unit-tests test build watch start redis-cli bash
.PHONY: compose_build up test_db create_database clean clean-all down tests lint backend-unit-tests frontend-unit-tests test build watch start redis-cli bash
compose_build: .env
COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 docker compose build
@@ -17,7 +17,21 @@ create_database: .env
docker compose run server create_db
clean:
docker compose down && docker compose rm
docker compose down
docker compose --project-name cypress down
docker compose rm --stop --force
docker compose --project-name cypress rm --stop --force
docker image rm --force \
cypress-server:latest cypress-worker:latest cypress-scheduler:latest \
redash-server:latest redash-worker:latest redash-scheduler:latest
docker container prune --force
docker image prune --force
docker volume prune --force
clean-all: clean
docker image rm --force \
redash/redash:10.1.0.b50633 redis:7-alpine maildev/maildev:latest \
pgautoupgrade/pgautoupgrade:15-alpine3.8 pgautoupgrade/pgautoupgrade:latest
down:
docker compose down

View File

@@ -84,6 +84,7 @@ Redash supports more than 35 SQL and NoSQL [data sources](https://redash.io/help
- Python
- Qubole
- Rockset
- RisingWave
- Salesforce
- ScyllaDB
- Shell Scripts

Binary image file changed (not shown): 2.4 KiB before, 9.7 KiB after.

View File

@@ -1,6 +1,5 @@
import React from "react";
import Link from "@/components/Link";
import { clientConfig, currentUser } from "@/services/auth";
import { clientConfig } from "@/services/auth";
import frontendVersion from "@/version.json";
export default function VersionInfo() {
@@ -10,15 +9,6 @@ export default function VersionInfo() {
Version: {clientConfig.version}
{frontendVersion !== clientConfig.version && ` (${frontendVersion.substring(0, 8)})`}
</div>
{clientConfig.newVersionAvailable && currentUser.hasPermission("super_admin") && (
<div className="m-t-10">
{/* eslint-disable react/jsx-no-target-blank */}
<Link href="https://version.redash.io/" className="update-available" target="_blank" rel="noopener">
Update Available <i className="fa fa-external-link m-l-5" aria-hidden="true" />
<span className="sr-only">(opens in a new tab)</span>
</Link>
</div>
)}
</React.Fragment>
);
}

View File

@@ -1,79 +0,0 @@
import React, { useState } from "react";
import Card from "antd/lib/card";
import Button from "antd/lib/button";
import Typography from "antd/lib/typography";
import { clientConfig } from "@/services/auth";
import Link from "@/components/Link";
import HelpTrigger from "@/components/HelpTrigger";
import DynamicComponent from "@/components/DynamicComponent";
import OrgSettings from "@/services/organizationSettings";
const Text = Typography.Text;
function BeaconConsent() {
const [hide, setHide] = useState(false);
if (!clientConfig.showBeaconConsentMessage || hide) {
return null;
}
const hideConsentCard = () => {
clientConfig.showBeaconConsentMessage = false;
setHide(true);
};
const confirmConsent = confirm => {
let message = "🙏 Thank you.";
if (!confirm) {
message = "Settings Saved.";
}
OrgSettings.save({ beacon_consent: confirm }, message)
// .then(() => {
// // const settings = get(response, 'settings');
// // this.setState({ settings, formValues: { ...settings } });
// })
.finally(hideConsentCard);
};
return (
<DynamicComponent name="BeaconConsent">
<div className="m-t-10 tiled">
<Card
title={
<>
Would you be ok with sharing anonymous usage data with the Redash team?{" "}
<HelpTrigger type="USAGE_DATA_SHARING" />
</>
}
bordered={false}>
<Text>Help Redash improve by automatically sending anonymous usage data:</Text>
<div className="m-t-5">
<ul>
<li> Number of users, queries, dashboards, alerts, widgets and visualizations.</li>
<li> Types of data sources, alert destinations and visualizations.</li>
</ul>
</div>
<Text>All data is aggregated and will never include any sensitive or private data.</Text>
<div className="m-t-5">
<Button type="primary" className="m-r-5" onClick={() => confirmConsent(true)}>
Yes
</Button>
<Button type="default" onClick={() => confirmConsent(false)}>
No
</Button>
</div>
<div className="m-t-15">
<Text type="secondary">
You can change this setting anytime from the{" "}
<Link href="settings/organization">Organization Settings</Link> page.
</Text>
</div>
</Card>
</div>
</DynamicComponent>
);
}
export default BeaconConsent;

View File

@@ -23,7 +23,6 @@ export const TYPES = mapValues(
VALUE_SOURCE_OPTIONS: ["/user-guide/querying/query-parameters#Value-Source-Options", "Guide: Value Source Options"],
SHARE_DASHBOARD: ["/user-guide/dashboards/sharing-dashboards", "Guide: Sharing and Embedding Dashboards"],
AUTHENTICATION_OPTIONS: ["/user-guide/users/authentication-options", "Guide: Authentication Options"],
USAGE_DATA_SHARING: ["/open-source/admin-guide/usage-data", "Help: Anonymous Usage Data Sharing"],
DS_ATHENA: ["/data-sources/amazon-athena-setup", "Guide: Help Setting up Amazon Athena"],
DS_BIGQUERY: ["/data-sources/bigquery-setup", "Guide: Help Setting up BigQuery"],
DS_URL: ["/data-sources/querying-urls", "Guide: Help Setting up URL"],

View File

@@ -148,7 +148,9 @@ function EditVisualizationDialog({ dialog, visualization, query, queryResult })
function dismiss() {
const optionsChanged = !isEqual(options, defaultState.originalOptions);
confirmDialogClose(nameChanged || optionsChanged).then(dialog.dismiss);
confirmDialogClose(nameChanged || optionsChanged)
.then(dialog.dismiss)
.catch(() => {});
}
// When editing existing visualization chart type selector is disabled, so add only existing visualization's

View File

@@ -5,7 +5,7 @@
<meta charset="UTF-8" />
<base href="{{base_href}}" />
<title><%= htmlWebpackPlugin.options.title %></title>
<script src="/static/unsupportedRedirect.js" async></script>
<script src="<%= htmlWebpackPlugin.options.staticPath %>unsupportedRedirect.js" async></script>
<link rel="icon" type="image/png" sizes="32x32" href="/static/images/favicon-32x32.png" />
<link rel="icon" type="image/png" sizes="96x96" href="/static/images/favicon-96x96.png" />

View File

@@ -118,28 +118,9 @@ class ShareDashboardDialog extends React.Component {
/>
</Form.Item>
{dashboard.public_url && (
<>
<Form.Item>
<Alert
message={
<div>
Custom rule for hiding filter components when sharing links:
<br />
You can hide filter components by appending `&hide_filter={"{{"} component_name{"}}"}` to the
sharing URL.
<br />
Example: http://{"{{"}ip{"}}"}:{"{{"}port{"}}"}/public/dashboards/{"{{"}id{"}}"}
?p_country=ghana&p_site=10&hide_filter=country
</div>
}
type="warning"
/>
</Form.Item>
<Form.Item label="Secret address" {...this.formItemProps}>
<InputWithCopy value={dashboard.public_url} data-test="SecretAddress" />
</Form.Item>
</>
<Form.Item label="Secret address" {...this.formItemProps}>
<InputWithCopy value={dashboard.public_url} data-test="SecretAddress" />
</Form.Item>
)}
</Form>
</Modal>

View File

@@ -6,7 +6,6 @@ import Link from "@/components/Link";
import routeWithUserSession from "@/components/ApplicationArea/routeWithUserSession";
import EmptyState, { EmptyStateHelpMessage } from "@/components/empty-state/EmptyState";
import DynamicComponent from "@/components/DynamicComponent";
import BeaconConsent from "@/components/BeaconConsent";
import PlainButton from "@/components/PlainButton";
import { axios } from "@/services/axios";
@@ -89,7 +88,6 @@ export default function Home() {
</DynamicComponent>
<DynamicComponent name="HomeExtra" />
<DashboardAndQueryFavoritesList />
<BeaconConsent />
</div>
</div>
);

View File

@@ -1,38 +0,0 @@
import React from "react";
import Form from "antd/lib/form";
import Checkbox from "antd/lib/checkbox";
import Skeleton from "antd/lib/skeleton";
import HelpTrigger from "@/components/HelpTrigger";
import DynamicComponent from "@/components/DynamicComponent";
import { SettingsEditorPropTypes, SettingsEditorDefaultProps } from "../prop-types";
export default function BeaconConsentSettings(props) {
const { values, onChange, loading } = props;
return (
<DynamicComponent name="OrganizationSettings.BeaconConsentSettings" {...props}>
<Form.Item
label={
<span>
Anonymous Usage Data Sharing
<HelpTrigger className="m-l-5 m-r-5" type="USAGE_DATA_SHARING" />
</span>
}>
{loading ? (
<Skeleton title={{ width: 300 }} paragraph={false} active />
) : (
<Checkbox
name="beacon_consent"
checked={values.beacon_consent}
onChange={e => onChange({ beacon_consent: e.target.checked })}>
Help Redash improve by automatically sending anonymous usage data
</Checkbox>
)}
</Form.Item>
</DynamicComponent>
);
}
BeaconConsentSettings.propTypes = SettingsEditorPropTypes;
BeaconConsentSettings.defaultProps = SettingsEditorDefaultProps;

View File

@@ -4,7 +4,6 @@ import DynamicComponent from "@/components/DynamicComponent";
import FormatSettings from "./FormatSettings";
import PlotlySettings from "./PlotlySettings";
import FeatureFlagsSettings from "./FeatureFlagsSettings";
import BeaconConsentSettings from "./BeaconConsentSettings";
export default function GeneralSettings(props) {
return (
@@ -14,7 +13,6 @@ export default function GeneralSettings(props) {
<FormatSettings {...props} />
<PlotlySettings {...props} />
<FeatureFlagsSettings {...props} />
<BeaconConsentSettings {...props} />
</DynamicComponent>
);
}

View File

@@ -1,6 +1,5 @@
/* eslint-disable import/no-extraneous-dependencies, no-console */
const { find } = require("lodash");
const atob = require("atob");
const { execSync } = require("child_process");
const { get, post } = require("request").defaults({ jar: true });
const { seedData } = require("./seed-data");
@@ -60,23 +59,11 @@ function stopServer() {
function runCypressCI() {
const {
PERCY_TOKEN_ENCODED,
CYPRESS_PROJECT_ID_ENCODED,
CYPRESS_RECORD_KEY_ENCODED,
GITHUB_REPOSITORY,
CYPRESS_OPTIONS, // eslint-disable-line no-unused-vars
} = process.env;
if (GITHUB_REPOSITORY === "getredash/redash") {
if (PERCY_TOKEN_ENCODED) {
process.env.PERCY_TOKEN = atob(`${PERCY_TOKEN_ENCODED}`);
}
if (CYPRESS_PROJECT_ID_ENCODED) {
process.env.CYPRESS_PROJECT_ID = atob(`${CYPRESS_PROJECT_ID_ENCODED}`);
}
if (CYPRESS_RECORD_KEY_ENCODED) {
process.env.CYPRESS_RECORD_KEY = atob(`${CYPRESS_RECORD_KEY_ENCODED}`);
}
process.env.CYPRESS_OPTIONS = "--record";
}

compose.base.yaml (new file, 24 lines added)
View File

@@ -0,0 +1,24 @@
services:
.redash:
build:
context: .
args:
FRONTEND_BUILD_MODE: ${FRONTEND_BUILD_MODE:-2}
INSTALL_GROUPS: ${INSTALL_GROUPS:-main,all_ds,dev}
volumes:
- $PWD:${SERVER_MOUNT:-/ignore}
command: manage version
environment:
REDASH_LOG_LEVEL: INFO
REDASH_REDIS_URL: redis://redis:6379/0
REDASH_DATABASE_URL: postgresql://postgres@postgres/postgres
REDASH_RATELIMIT_ENABLED: false
REDASH_MAIL_DEFAULT_SENDER: redash@example.com
REDASH_MAIL_SERVER: email
REDASH_MAIL_PORT: 1025
REDASH_ENFORCE_CSRF: true
REDASH_COOKIE_SECRET: ${REDASH_COOKIE_SECRET}
REDASH_SECRET_KEY: ${REDASH_SECRET_KEY}
REDASH_PRODUCTION: ${REDASH_PRODUCTION:-true}
env_file:
- .env

View File

@@ -1,6 +1,5 @@
# This configuration file is for the **development** setup.
# For a production example please refer to getredash/setup repository on GitHub.
version: "2.2"
x-redash-service: &redash-service
build:
context: .
@@ -53,7 +52,7 @@ services:
image: redis:7-alpine
restart: unless-stopped
postgres:
image: pgautoupgrade/pgautoupgrade:15-alpine3.8
image: pgautoupgrade/pgautoupgrade:latest
ports:
- "15432:5432"
# The following turns the DB into less durable, but gains significant performance improvements for the tests run (x3

View File

@@ -7,7 +7,7 @@ Create Date: 2020-12-23 21:35:32.766354
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSON
# revision identifiers, used by Alembic.
revision = '0ec979123ba4'
@@ -18,7 +18,7 @@ depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('dashboards', sa.Column('options', postgresql.JSON(astext_type=sa.Text()), server_default='{}', nullable=False))
op.add_column('dashboards', sa.Column('options', JSON(astext_type=sa.Text()), server_default='{}', nullable=False))
# ### end Alembic commands ###

View File

@@ -10,8 +10,7 @@ import json
from alembic import op
import sqlalchemy as sa
from sqlalchemy.sql import table
from redash.models import MutableDict, PseudoJSON
from redash.models import MutableDict
# revision identifiers, used by Alembic.
@@ -41,7 +40,7 @@ def upgrade():
"queries",
sa.Column(
"schedule",
MutableDict.as_mutable(PseudoJSON),
sa.Text(),
nullable=False,
server_default=json.dumps({}),
),
@@ -51,7 +50,7 @@ def upgrade():
queries = table(
"queries",
sa.Column("id", sa.Integer, primary_key=True),
sa.Column("schedule", MutableDict.as_mutable(PseudoJSON)),
sa.Column("schedule", sa.Text()),
sa.Column("old_schedule", sa.String(length=10)),
)
@@ -85,7 +84,7 @@ def downgrade():
"queries",
sa.Column(
"old_schedule",
MutableDict.as_mutable(PseudoJSON),
sa.Text(),
nullable=False,
server_default=json.dumps({}),
),
@@ -93,8 +92,8 @@ def downgrade():
queries = table(
"queries",
sa.Column("schedule", MutableDict.as_mutable(PseudoJSON)),
sa.Column("old_schedule", MutableDict.as_mutable(PseudoJSON)),
sa.Column("schedule", sa.Text()),
sa.Column("old_schedule", sa.Text()),
)
op.execute(queries.update().values({"old_schedule": queries.c.schedule}))
@@ -106,7 +105,7 @@ def downgrade():
"queries",
sa.Column("id", sa.Integer, primary_key=True),
sa.Column("schedule", sa.String(length=10)),
sa.Column("old_schedule", MutableDict.as_mutable(PseudoJSON)),
sa.Column("old_schedule", sa.Text()),
)
conn = op.get_bind()

View File

@@ -0,0 +1,135 @@
"""change type of json fields from varchar to json
Revision ID: 7205816877ec
Revises: 7ce5925f832b
Create Date: 2024-01-03 13:55:18.885021
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects.postgresql import JSONB, JSON
# revision identifiers, used by Alembic.
revision = '7205816877ec'
down_revision = '7ce5925f832b'
branch_labels = None
depends_on = None
def upgrade():
connection = op.get_bind()
op.alter_column('queries', 'options',
existing_type=sa.Text(),
type_=JSONB(astext_type=sa.Text()),
nullable=True,
postgresql_using='options::jsonb',
server_default=sa.text("'{}'::jsonb"))
op.alter_column('queries', 'schedule',
existing_type=sa.Text(),
type_=JSONB(astext_type=sa.Text()),
nullable=True,
postgresql_using='schedule::jsonb',
server_default=sa.text("'{}'::jsonb"))
op.alter_column('events', 'additional_properties',
existing_type=sa.Text(),
type_=JSONB(astext_type=sa.Text()),
nullable=True,
postgresql_using='additional_properties::jsonb',
server_default=sa.text("'{}'::jsonb"))
op.alter_column('organizations', 'settings',
existing_type=sa.Text(),
type_=JSONB(astext_type=sa.Text()),
nullable=True,
postgresql_using='settings::jsonb',
server_default=sa.text("'{}'::jsonb"))
op.alter_column('alerts', 'options',
existing_type=JSON(astext_type=sa.Text()),
type_=JSONB(astext_type=sa.Text()),
nullable=True,
postgresql_using='options::jsonb',
server_default=sa.text("'{}'::jsonb"))
op.alter_column('dashboards', 'options',
existing_type=JSON(astext_type=sa.Text()),
type_=JSONB(astext_type=sa.Text()),
postgresql_using='options::jsonb',
server_default=sa.text("'{}'::jsonb"))
op.alter_column('dashboards', 'layout',
existing_type=sa.Text(),
type_=JSONB(astext_type=sa.Text()),
postgresql_using='layout::jsonb',
server_default=sa.text("'{}'::jsonb"))
op.alter_column('changes', 'change',
existing_type=JSON(astext_type=sa.Text()),
type_=JSONB(astext_type=sa.Text()),
postgresql_using='change::jsonb',
server_default=sa.text("'{}'::jsonb"))
op.alter_column('visualizations', 'options',
existing_type=sa.Text(),
type_=JSONB(astext_type=sa.Text()),
postgresql_using='options::jsonb',
server_default=sa.text("'{}'::jsonb"))
op.alter_column('widgets', 'options',
existing_type=sa.Text(),
type_=JSONB(astext_type=sa.Text()),
postgresql_using='options::jsonb',
server_default=sa.text("'{}'::jsonb"))
def downgrade():
connection = op.get_bind()
op.alter_column('queries', 'options',
existing_type=JSONB(astext_type=sa.Text()),
type_=sa.Text(),
postgresql_using='options::text',
existing_nullable=True,
server_default=sa.text("'{}'::text"))
op.alter_column('queries', 'schedule',
existing_type=JSONB(astext_type=sa.Text()),
type_=sa.Text(),
postgresql_using='schedule::text',
existing_nullable=True,
server_default=sa.text("'{}'::text"))
op.alter_column('events', 'additional_properties',
existing_type=JSONB(astext_type=sa.Text()),
type_=sa.Text(),
postgresql_using='additional_properties::text',
existing_nullable=True,
server_default=sa.text("'{}'::text"))
op.alter_column('organizations', 'settings',
existing_type=JSONB(astext_type=sa.Text()),
type_=sa.Text(),
postgresql_using='settings::text',
existing_nullable=True,
server_default=sa.text("'{}'::text"))
op.alter_column('alerts', 'options',
existing_type=JSONB(astext_type=sa.Text()),
type_=JSON(astext_type=sa.Text()),
postgresql_using='options::json',
existing_nullable=True,
server_default=sa.text("'{}'::json"))
op.alter_column('dashboards', 'options',
existing_type=JSONB(astext_type=sa.Text()),
type_=JSON(astext_type=sa.Text()),
postgresql_using='options::json',
server_default=sa.text("'{}'::json"))
op.alter_column('dashboards', 'layout',
existing_type=JSONB(astext_type=sa.Text()),
type_=sa.Text(),
postgresql_using='layout::text',
server_default=sa.text("'{}'::text"))
op.alter_column('changes', 'change',
existing_type=JSONB(astext_type=sa.Text()),
type_=JSON(astext_type=sa.Text()),
postgresql_using='change::json',
server_default=sa.text("'{}'::json"))
op.alter_column('visualizations', 'options',
type_=sa.Text(),
existing_type=JSONB(astext_type=sa.Text()),
postgresql_using='options::text',
server_default=sa.text("'{}'::text"))
op.alter_column('widgets', 'options',
type_=sa.Text(),
existing_type=JSONB(astext_type=sa.Text()),
postgresql_using='options::text',
server_default=sa.text("'{}'::text"))
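
For reference, each op.alter_column call above with postgresql_using corresponds roughly to a single PostgreSQL ALTER TABLE statement; a minimal sketch for the queries.options column (illustrative, not part of this migration):

from alembic import op

def upgrade():
    # approximate SQL equivalent of the first alter_column call above
    op.execute(
        "ALTER TABLE queries "
        "ALTER COLUMN options TYPE jsonb USING options::jsonb, "
        "ALTER COLUMN options SET DEFAULT '{}'::jsonb, "
        "ALTER COLUMN options DROP NOT NULL"
    )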

View File

@@ -7,10 +7,9 @@ Create Date: 2019-01-17 13:22:21.729334
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.sql import table
from redash.models import MutableDict, PseudoJSON
from redash.models import MutableDict
# revision identifiers, used by Alembic.
revision = "73beceabb948"
@@ -43,7 +42,7 @@ def upgrade():
queries = table(
"queries",
sa.Column("id", sa.Integer, primary_key=True),
sa.Column("schedule", MutableDict.as_mutable(PseudoJSON)),
sa.Column("schedule", sa.Text()),
)
conn = op.get_bind()

View File

@@ -6,7 +6,7 @@ Create Date: 2018-01-31 15:20:30.396533
"""
import simplejson
import json
from alembic import op
import sqlalchemy as sa
@@ -27,7 +27,7 @@ def upgrade():
dashboard_result = db.session.execute("SELECT id, layout FROM dashboards")
for dashboard in dashboard_result:
print(" Updating dashboard: {}".format(dashboard["id"]))
layout = simplejson.loads(dashboard["layout"])
layout = json.loads(dashboard["layout"])
print(" Building widgets map:")
widgets = {}
@@ -53,7 +53,7 @@ def upgrade():
if widget is None:
continue
options = simplejson.loads(widget["options"]) or {}
options = json.loads(widget["options"]) or {}
options["position"] = {
"row": row_index,
"col": column_index * column_size,
@@ -62,7 +62,7 @@ def upgrade():
db.session.execute(
"UPDATE widgets SET options=:options WHERE id=:id",
{"options": simplejson.dumps(options), "id": widget_id},
{"options": json.dumps(options), "id": widget_id},
)
dashboard_result.close()

View File

@@ -7,7 +7,7 @@ Create Date: 2019-01-31 09:21:31.517265
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import BYTEA
from sqlalchemy.sql import table
from sqlalchemy_utils.types.encrypted.encrypted_type import FernetEngine
@@ -15,10 +15,8 @@ from redash import settings
from redash.utils.configuration import ConfigurationContainer
from redash.models.types import (
EncryptedConfiguration,
Configuration,
MutableDict,
MutableList,
PseudoJSON,
)
# revision identifiers, used by Alembic.
@@ -31,7 +29,7 @@ depends_on = None
def upgrade():
op.add_column(
"data_sources",
sa.Column("encrypted_options", postgresql.BYTEA(), nullable=True),
sa.Column("encrypted_options", BYTEA(), nullable=True),
)
# copy values
@@ -46,7 +44,14 @@ def upgrade():
)
),
),
sa.Column("options", ConfigurationContainer.as_mutable(Configuration)),
sa.Column(
"options",
ConfigurationContainer.as_mutable(
EncryptedConfiguration(
sa.Text, settings.DATASOURCE_SECRET_KEY, FernetEngine
)
),
),
)
conn = op.get_bind()
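
The migration above builds a lightweight table() with an EncryptedConfiguration-typed column so plaintext options can be copied into encrypted_options; EncryptedConfiguration subclasses sqlalchemy_utils' EncryptedType, and a minimal standalone sketch of that column type looks roughly like this (the key value is a placeholder, Redash reads DATASOURCE_SECRET_KEY from settings):

import sqlalchemy as sa
from sqlalchemy_utils import EncryptedType
from sqlalchemy_utils.types.encrypted.encrypted_type import FernetEngine

SECRET_KEY = "placeholder-secret"  # placeholder, not Redash configuration
encrypted_options = sa.Column(
    "encrypted_options",
    EncryptedType(sa.Text, SECRET_KEY, FernetEngine),  # values are Fernet-encrypted at rest
    nullable=True,
)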

View File

@@ -9,7 +9,7 @@ import re
from funcy import flatten, compact
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY
from redash import models
# revision identifiers, used by Alembic.
@@ -21,10 +21,10 @@ depends_on = None
def upgrade():
op.add_column(
"dashboards", sa.Column("tags", postgresql.ARRAY(sa.Unicode()), nullable=True)
"dashboards", sa.Column("tags", ARRAY(sa.Unicode()), nullable=True)
)
op.add_column(
"queries", sa.Column("tags", postgresql.ARRAY(sa.Unicode()), nullable=True)
"queries", sa.Column("tags", ARRAY(sa.Unicode()), nullable=True)
)

View File

@@ -7,17 +7,14 @@ Create Date: 2020-12-14 21:42:48.661684
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import BYTEA
from sqlalchemy.sql import table
from sqlalchemy_utils.types.encrypted.encrypted_type import FernetEngine
from redash import settings
from redash.utils.configuration import ConfigurationContainer
from redash.models.base import key_type
from redash.models.types import (
EncryptedConfiguration,
Configuration,
)
from redash.models.types import EncryptedConfiguration
# revision identifiers, used by Alembic.
@@ -30,7 +27,7 @@ depends_on = None
def upgrade():
op.add_column(
"notification_destinations",
sa.Column("encrypted_options", postgresql.BYTEA(), nullable=True)
sa.Column("encrypted_options", BYTEA(), nullable=True)
)
# copy values
@@ -45,7 +42,14 @@ def upgrade():
)
),
),
sa.Column("options", ConfigurationContainer.as_mutable(Configuration)),
sa.Column(
"options",
ConfigurationContainer.as_mutable(
EncryptedConfiguration(
sa.Text, settings.DATASOURCE_SECRET_KEY, FernetEngine
)
),
),
)
conn = op.get_bind()

View File

@@ -7,7 +7,7 @@ Create Date: 2018-11-08 16:12:17.023569
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSON
# revision identifiers, used by Alembic.
revision = "e7f8a917aa8e"
@@ -21,7 +21,7 @@ def upgrade():
"users",
sa.Column(
"details",
postgresql.JSON(astext_type=sa.Text()),
JSON(astext_type=sa.Text()),
server_default="{}",
nullable=True,
),

View File

@@ -7,7 +7,7 @@ Create Date: 2022-01-31 15:24:16.507888
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSON, JSONB
from redash.models import db
@@ -23,8 +23,8 @@ def upgrade():
### commands auto generated by Alembic - please adjust! ###
op.alter_column('users', 'details',
existing_type=postgresql.JSON(astext_type=sa.Text()),
type_=postgresql.JSONB(astext_type=sa.Text()),
existing_type=JSON(astext_type=sa.Text()),
type_=JSONB(astext_type=sa.Text()),
existing_nullable=True,
existing_server_default=sa.text("'{}'::jsonb"))
### end Alembic commands ###
@@ -52,8 +52,8 @@ def downgrade():
connection.execute(update_query)
db.session.commit()
op.alter_column('users', 'details',
existing_type=postgresql.JSONB(astext_type=sa.Text()),
type_=postgresql.JSON(astext_type=sa.Text()),
existing_type=JSONB(astext_type=sa.Text()),
type_=JSON(astext_type=sa.Text()),
existing_nullable=True,
existing_server_default=sa.text("'{}'::json"))

View File

@@ -6,7 +6,7 @@
command = "cd ../ && yarn cache clean && yarn --frozen-lockfile --network-concurrency 1 && yarn build && cd ./client"
[build.environment]
NODE_VERSION = "16.20.1"
NODE_VERSION = "18"
NETLIFY_USE_YARN = "true"
YARN_VERSION = "1.22.19"
CYPRESS_INSTALL_BINARY = "0"

View File

@@ -1,20 +1,19 @@
{
"name": "redash-client",
"version": "24.01.0-dev",
"version": "24.07.0-dev",
"description": "The frontend part of Redash.",
"main": "index.js",
"scripts": {
"start": "npm-run-all --parallel watch:viz webpack-dev-server",
"clean": "rm -rf ./client/dist/",
"build:viz": "(cd viz-lib && yarn build:babel)",
"build": "yarn clean && yarn build:viz && NODE_ENV=production webpack",
"build:old-node-version": "yarn clean && NODE_ENV=production node --max-old-space-size=4096 node_modules/.bin/webpack",
"watch:app": "webpack watch --progress",
"build": "yarn clean && yarn build:viz && NODE_OPTIONS=--openssl-legacy-provider NODE_ENV=production webpack",
"watch:app": "NODE_OPTIONS=--openssl-legacy-provider webpack watch --progress",
"watch:viz": "(cd viz-lib && yarn watch:babel)",
"watch": "npm-run-all --parallel watch:*",
"webpack-dev-server": "webpack-dev-server",
"analyze": "yarn clean && BUNDLE_ANALYZER=on webpack",
"analyze:build": "yarn clean && NODE_ENV=production BUNDLE_ANALYZER=on webpack",
"analyze": "yarn clean && BUNDLE_ANALYZER=on NODE_OPTIONS=--openssl-legacy-provider webpack",
"analyze:build": "yarn clean && NODE_ENV=production BUNDLE_ANALYZER=on NODE_OPTIONS=--openssl-legacy-provider webpack",
"lint": "yarn lint:base --ext .js --ext .jsx --ext .ts --ext .tsx ./client",
"lint:fix": "yarn lint:base --fix --ext .js --ext .jsx --ext .ts --ext .tsx ./client",
"lint:base": "eslint --config ./client/.eslintrc.js --ignore-path ./client/.eslintignore",
@@ -34,7 +33,8 @@
"url": "git+https://github.com/getredash/redash.git"
},
"engines": {
"node": ">14.16.0 <17.0.0",
"node": ">16.0 <21.0",
"npm": "please-use-yarn",
"yarn": "^1.22.10"
},
"author": "Redash Contributors",
@@ -178,6 +178,10 @@
"viz-lib/**"
]
},
"browser": {
"fs": false,
"path": false
},
"//": "browserslist set to 'Async functions' compatibility",
"browserslist": [
"Edge >= 15",

poetry.lock (generated, 1373 lines changed) — file diff suppressed because it is too large.

View File

@@ -12,7 +12,7 @@ force-exclude = '''
[tool.poetry]
name = "redash"
version = "24.01.0-dev"
version = "24.07.0-dev"
description = "Make Your Company Data Driven. Connect to any data source, easily visualize, dashboard and share your data."
authors = ["Arik Fraimovich <arik@redash.io>"]
# to be added to/removed from the mailing list, please reach out to Arik via the above email or Discord
@@ -43,10 +43,10 @@ flask-wtf = "1.1.1"
funcy = "1.13"
gevent = "23.9.1"
greenlet = "2.0.2"
gunicorn = "20.0.4"
gunicorn = "22.0.0"
httplib2 = "0.19.0"
itsdangerous = "2.1.2"
jinja2 = "3.1.2"
jinja2 = "3.1.4"
jsonschema = "3.1.1"
markupsafe = "2.1.1"
maxminddb-geolite2 = "2018.703"
@@ -64,28 +64,28 @@ pytz = ">=2019.3"
pyyaml = "6.0.1"
redis = "4.6.0"
regex = "2023.8.8"
requests = "2.31.0"
requests = "2.32.0"
restrictedpython = "6.2"
rq = "1.9.0"
rq-scheduler = "0.11.0"
rq = "1.16.1"
rq-scheduler = "0.13.1"
semver = "2.8.1"
sentry-sdk = "1.28.1"
simplejson = "3.16.0"
sqlalchemy = "1.3.24"
sqlalchemy-searchable = "1.2.0"
sqlalchemy-utils = "0.34.2"
sqlparse = "0.4.4"
sqlalchemy-utils = "0.38.3"
sqlparse = "0.5.0"
sshtunnel = "0.1.5"
statsd = "3.3.0"
supervisor = "4.1.0"
supervisor-checks = "0.8.1"
ua-parser = "0.18.0"
urllib3 = "1.26.18"
urllib3 = "1.26.19"
user-agents = "2.0"
werkzeug = "2.3.8"
wtforms = "2.2.1"
xlsxwriter = "1.2.2"
tzlocal = "4.3.1"
pyodbc = "5.1.0"
[tool.poetry.group.all_ds]
optional = true
@@ -111,7 +111,7 @@ nzalchemy = "^11.0.2"
nzpy = ">=1.15"
oauth2client = "4.1.3"
openpyxl = "3.0.7"
oracledb = "1.4.0"
oracledb = "2.1.2"
pandas = "1.3.4"
phoenixdb = "0.7"
pinotdb = ">=0.4.5"
@@ -122,12 +122,11 @@ pydruid = "0.5.7"
pyexasol = "0.12.0"
pyhive = "0.6.1"
pyignite = "0.6.1"
pymongo = { version = "4.3.3", extras = ["srv", "tls"] }
pymongo = { version = "4.6.3", extras = ["srv", "tls"] }
pymssql = "2.2.8"
pyodbc = "4.0.28"
pyodbc = "5.1.0"
python-arango = "6.1.0"
python-rapidjson = "1.1.0"
qds-sdk = ">=1.9.6"
requests-aws-sign = "0.1.5"
sasl = ">=0.1.3"
simple-salesforce = "0.74.3"
@@ -153,7 +152,7 @@ optional = true
pytest = "7.4.0"
coverage = "7.2.7"
freezegun = "1.2.1"
jwcrypto = "1.5.1"
jwcrypto = "1.5.6"
mock = "5.0.2"
pre-commit = "3.3.3"
ptpython = "3.0.23"
@@ -169,7 +168,7 @@ build-backend = "poetry.core.masonry.api"
[tool.ruff]
exclude = [".git", "viz-lib", "node_modules", "migrations"]
ignore = ["E501"]
select = ["C9", "E", "F", "W", "I001"]
select = ["C9", "E", "F", "W", "I001", "UP004"]
[tool.ruff.mccabe]
max-complexity = 15

View File

@@ -14,7 +14,7 @@ from redash.app import create_app # noqa
from redash.destinations import import_destinations
from redash.query_runner import import_query_runners
__version__ = "24.01.0-dev"
__version__ = "24.07.0-dev"
if os.environ.get("REMOTE_DEBUG"):

View File

@@ -36,14 +36,10 @@ def create_app():
from .metrics import request as request_metrics
from .models import db, users
from .utils import sentry
from .version_check import reset_new_version_status
sentry.init()
app = Redash()
# Check and update the cached version for use by the client
reset_new_version_status()
security.init_app(app)
request_metrics.init_app(app)
db.init_app(app)

View File

@@ -1,8 +1,8 @@
import json
import logging
import jwt
import requests
import simplejson
logger = logging.getLogger("jwt_auth")
@@ -25,7 +25,7 @@ def get_public_key_from_net(url):
if "keys" in data:
public_keys = []
for key_dict in data["keys"]:
public_key = jwt.algorithms.RSAAlgorithm.from_jwk(simplejson.dumps(key_dict))
public_key = jwt.algorithms.RSAAlgorithm.from_jwk(json.dumps(key_dict))
public_keys.append(public_key)
get_public_keys.key_cache[url] = public_keys

View File

@@ -8,6 +8,7 @@ from redash import settings
try:
from ldap3 import Connection, Server
from ldap3.utils.conv import escape_filter_chars
except ImportError:
if settings.LDAP_LOGIN_ENABLED:
sys.exit(
@@ -69,6 +70,7 @@ def login(org_slug=None):
def auth_ldap_user(username, password):
clean_username = escape_filter_chars(username)
server = Server(settings.LDAP_HOST_URL, use_ssl=settings.LDAP_SSL)
if settings.LDAP_BIND_DN is not None:
conn = Connection(
@@ -83,7 +85,7 @@ def auth_ldap_user(username, password):
conn.search(
settings.LDAP_SEARCH_DN,
settings.LDAP_SEARCH_TEMPLATE % {"username": username},
settings.LDAP_SEARCH_TEMPLATE % {"username": clean_username},
attributes=[settings.LDAP_DISPLAY_NAME_KEY, settings.LDAP_EMAIL_KEY],
)
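
The new escape_filter_chars call matters because the raw username is interpolated into the LDAP search filter; a small sketch of the difference (the filter template and input here are made up for illustration):

from ldap3.utils.conv import escape_filter_chars

template = "(cn=%(username)s)"
raw = "*)(objectClass=*"                                  # crafted input that would widen the filter
unsafe = template % {"username": raw}                     # (cn=*)(objectClass=*)
safe = template % {"username": escape_filter_chars(raw)}  # special characters become \XX hex escapes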

View File

@@ -1,5 +1,6 @@
import json
import click
import simplejson
from flask import current_app
from flask.cli import FlaskGroup, run_command, with_appcontext
from rq import Connection
@@ -53,7 +54,7 @@ def version():
@manager.command()
def status():
with Connection(rq_redis_connection):
print(simplejson.dumps(get_status(), indent=2))
print(json.dumps(get_status(), indent=2))
@manager.command()

View File

@@ -5,7 +5,7 @@ logger = logging.getLogger(__name__)
__all__ = ["BaseDestination", "register", "get_destination", "import_destinations"]
class BaseDestination(object):
class BaseDestination:
deprecated = False
def __init__(self, configuration):

View File

@@ -42,8 +42,8 @@ class Discord(BaseDestination):
"inline": True,
},
]
if alert.options.get("custom_body"):
fields.append({"name": "Description", "value": alert.options["custom_body"]})
if alert.custom_body:
fields.append({"name": "Description", "value": alert.custom_body})
if new_state == Alert.TRIGGERED_STATE:
if alert.options.get("custom_subject"):
text = alert.options["custom_subject"]

View File

@@ -26,13 +26,13 @@ class Slack(BaseDestination):
fields = [
{
"title": "Query",
"type": "mrkdwn",
"value": "{host}/queries/{query_id}".format(host=host, query_id=query.id),
"short": True,
},
{
"title": "Alert",
"type": "mrkdwn",
"value": "{host}/alerts/{alert_id}".format(host=host, alert_id=alert.id),
"short": True,
},
]
if alert.custom_body:
@@ -50,7 +50,7 @@ class Slack(BaseDestination):
payload = {"attachments": [{"text": text, "color": color, "fields": fields}]}
try:
resp = requests.post(options.get("url"), data=json_dumps(payload), timeout=5.0)
resp = requests.post(options.get("url"), data=json_dumps(payload).encode("utf-8"), timeout=5.0)
logging.warning(resp.text)
if resp.status_code != 200:
logging.error("Slack send ERROR. status_code => {status}".format(status=resp.status_code))

View File

@@ -15,7 +15,6 @@ from redash.authentication.account import (
)
from redash.handlers import routes
from redash.handlers.base import json_response, org_scoped_rule
from redash.version_check import get_latest_version
logger = logging.getLogger(__name__)
@@ -256,15 +255,11 @@ def number_format_config():
def client_config():
if not current_user.is_api_user() and current_user.is_authenticated:
client_config = {
"newVersionAvailable": bool(get_latest_version()),
client_config_inner = {
"version": __version__,
}
else:
client_config = {}
if current_user.has_permission("admin") and current_org.get_setting("beacon_consent") is None:
client_config["showBeaconConsentMessage"] = True
client_config_inner = {}
defaults = {
"allowScriptsInUserInput": settings.ALLOW_SCRIPTS_IN_USER_INPUT,
@@ -284,12 +279,12 @@ def client_config():
"tableCellMaxJSONSize": settings.TABLE_CELL_MAX_JSON_SIZE,
}
client_config.update(defaults)
client_config.update({"basePath": base_href()})
client_config.update(date_time_format_config())
client_config.update(number_format_config())
client_config_inner.update(defaults)
client_config_inner.update({"basePath": base_href()})
client_config_inner.update(date_time_format_config())
client_config_inner.update(number_format_config())
return client_config
return client_config_inner
def messages():

View File

@@ -5,15 +5,15 @@ from flask import Blueprint, current_app, request
from flask_login import current_user, login_required
from flask_restful import Resource, abort
from sqlalchemy import cast
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.orm.exc import NoResultFound
from sqlalchemy_utils.functions import sort_query
from redash import settings
from redash.authentication import current_org
from redash.models import db
from redash.tasks import record_event as record_event_task
from redash.utils import json_dumps
from redash.utils.query_order import sort_query
routes = Blueprint("redash", __name__, template_folder=settings.fix_assets_path("templates"))
@@ -114,7 +114,7 @@ def json_response(response):
def filter_by_tags(result_set, column):
if request.args.getlist("tags"):
tags = request.args.getlist("tags")
result_set = result_set.filter(cast(column, postgresql.ARRAY(db.Text)).contains(tags))
result_set = result_set.filter(cast(column, ARRAY(db.Text)).contains(tags))
return result_set
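
With tags stored in a PostgreSQL array column, contains() compiles to the @> (array contains) operator; a minimal sketch of the expression used above (compiled text is approximate):

import sqlalchemy as sa
from sqlalchemy import cast, column
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY

expr = cast(column("tags"), ARRAY(sa.Unicode)).contains(["finance", "weekly"])
print(expr.compile(dialect=postgresql.dialect()))
# roughly: CAST(tags AS VARCHAR[]) @> %(param_1)s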

View File

@@ -96,7 +96,7 @@ class DashboardListResource(BaseResource):
org=self.current_org,
user=self.current_user,
is_draft=True,
layout="[]",
layout=[],
)
models.db.session.add(dashboard)
models.db.session.commit()

View File

@@ -1,13 +1,12 @@
from flask import g, redirect, render_template, request, url_for
from flask_login import login_user
from wtforms import BooleanField, Form, PasswordField, StringField, validators
from wtforms import Form, PasswordField, StringField, validators
from wtforms.fields.html5 import EmailField
from redash import settings
from redash.authentication.org_resolving import current_org
from redash.handlers.base import routes
from redash.models import Group, Organization, User, db
from redash.tasks.general import subscribe
class SetupForm(Form):
@@ -15,8 +14,6 @@ class SetupForm(Form):
email = EmailField("Email Address", validators=[validators.Email()])
password = PasswordField("Password", validators=[validators.Length(6)])
org_name = StringField("Organization Name", validators=[validators.InputRequired()])
security_notifications = BooleanField()
newsletter = BooleanField()
def create_org(org_name, user_name, email, password):
@@ -57,8 +54,6 @@ def setup():
return redirect("/")
form = SetupForm(request.form)
form.newsletter.data = True
form.security_notifications.data = True
if request.method == "POST" and form.validate():
default_org, user = create_org(form.org_name.data, form.name.data, form.email.data, form.password.data)
@@ -66,10 +61,6 @@ def setup():
g.org = default_org
login_user(user)
# signup to newsletter if needed
if form.newsletter.data or form.security_notifications:
subscribe.delay(form.data)
return redirect(url_for("redash.index", org_slug=None))
return render_template("setup.html", form=form)

View File

@@ -7,7 +7,6 @@ from redash.permissions import (
require_permission,
)
from redash.serializers import serialize_visualization
from redash.utils import json_dumps
class VisualizationListResource(BaseResource):
@@ -18,7 +17,6 @@ class VisualizationListResource(BaseResource):
query = get_object_or_404(models.Query.get_by_id_and_org, kwargs.pop("query_id"), self.current_org)
require_object_modify_permission(query, self.current_user)
kwargs["options"] = json_dumps(kwargs["options"])
kwargs["query_rel"] = query
vis = models.Visualization(**kwargs)
@@ -34,8 +32,6 @@ class VisualizationResource(BaseResource):
require_object_modify_permission(vis.query_rel, self.current_user)
kwargs = request.get_json(force=True)
if "options" in kwargs:
kwargs["options"] = json_dumps(kwargs["options"])
kwargs.pop("id", None)
kwargs.pop("query_id", None)

View File

@@ -1,6 +1,6 @@
import json
import os
import simplejson
from flask import url_for
WEBPACK_MANIFEST_PATH = os.path.join(os.path.dirname(__file__), "../../client/dist/", "asset-manifest.json")
@@ -15,7 +15,7 @@ def configure_webpack(app):
if assets is None or app.debug:
try:
with open(WEBPACK_MANIFEST_PATH) as fp:
assets = simplejson.load(fp)
assets = json.load(fp)
except IOError:
app.logger.exception("Unable to load webpack manifest")
assets = {}

View File

@@ -9,7 +9,6 @@ from redash.permissions import (
view_only,
)
from redash.serializers import serialize_widget
from redash.utils import json_dumps
class WidgetListResource(BaseResource):
@@ -30,7 +29,6 @@ class WidgetListResource(BaseResource):
dashboard = models.Dashboard.get_by_id_and_org(widget_properties.get("dashboard_id"), self.current_org)
require_object_modify_permission(dashboard, self.current_user)
widget_properties["options"] = json_dumps(widget_properties["options"])
widget_properties.pop("id", None)
visualization_id = widget_properties.pop("visualization_id")
@@ -44,7 +42,6 @@ class WidgetListResource(BaseResource):
widget = models.Widget(**widget_properties)
models.db.session.add(widget)
models.db.session.commit()
models.db.session.commit()
return serialize_widget(widget)
@@ -65,7 +62,7 @@ class WidgetResource(BaseResource):
require_object_modify_permission(widget.dashboard, self.current_user)
widget_properties = request.get_json(force=True)
widget.text = widget_properties["text"]
widget.options = json_dumps(widget_properties["options"])
widget.options = widget_properties["options"]
models.db.session.commit()
return serialize_widget(widget)

View File

@@ -6,7 +6,7 @@ import time
import pytz
from sqlalchemy import UniqueConstraint, and_, cast, distinct, func, or_
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY, DOUBLE_PRECISION, JSONB
from sqlalchemy.event import listens_for
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import (
@@ -40,14 +40,17 @@ from redash.models.base import (
from redash.models.changes import Change, ChangeTrackingMixin # noqa
from redash.models.mixins import BelongsToOrgMixin, TimestampMixin
from redash.models.organizations import Organization
from redash.models.parameterized_query import ParameterizedQuery
from redash.models.parameterized_query import (
InvalidParameterError,
ParameterizedQuery,
QueryDetachedFromDataSourceError,
)
from redash.models.types import (
Configuration,
EncryptedConfiguration,
JSONText,
MutableDict,
MutableList,
PseudoJSON,
pseudo_json_cast_property,
json_cast_property,
)
from redash.models.users import ( # noqa
AccessPermission,
@@ -80,7 +83,7 @@ from redash.utils.configuration import ConfigurationContainer
logger = logging.getLogger(__name__)
class ScheduledQueriesExecutions(object):
class ScheduledQueriesExecutions:
KEY_NAME = "sq:executed_at"
def __init__(self):
@@ -123,7 +126,10 @@ class DataSource(BelongsToOrgMixin, db.Model):
data_source_groups = db.relationship("DataSourceGroup", back_populates="data_source", cascade="all")
__tablename__ = "data_sources"
__table_args__ = (db.Index("data_sources_org_id_name", "org_id", "name"),)
__table_args__ = (
db.Index("data_sources_org_id_name", "org_id", "name"),
{"extend_existing": True},
)
def __eq__(self, other):
return self.id == other.id
@@ -297,34 +303,11 @@ class DataSourceGroup(db.Model):
view_only = Column(db.Boolean, default=False)
__tablename__ = "data_source_groups"
DESERIALIZED_DATA_ATTR = "_deserialized_data"
class DBPersistence(object):
@property
def data(self):
if self._data is None:
return None
if not hasattr(self, DESERIALIZED_DATA_ATTR):
setattr(self, DESERIALIZED_DATA_ATTR, json_loads(self._data))
return self._deserialized_data
@data.setter
def data(self, data):
if hasattr(self, DESERIALIZED_DATA_ATTR):
delattr(self, DESERIALIZED_DATA_ATTR)
self._data = data
QueryResultPersistence = settings.dynamic_settings.QueryResultPersistence or DBPersistence
__table_args__ = ({"extend_existing": True},)
@generic_repr("id", "org_id", "data_source_id", "query_hash", "runtime", "retrieved_at")
class QueryResult(db.Model, QueryResultPersistence, BelongsToOrgMixin):
class QueryResult(db.Model, BelongsToOrgMixin):
id = primary_key("QueryResult")
org_id = Column(key_type("Organization"), db.ForeignKey("organizations.id"))
org = db.relationship(Organization)
@@ -332,8 +315,8 @@ class QueryResult(db.Model, QueryResultPersistence, BelongsToOrgMixin):
data_source = db.relationship(DataSource, backref=backref("query_results"))
query_hash = Column(db.String(32), index=True)
query_text = Column("query", db.Text)
_data = Column("data", db.Text)
runtime = Column(postgresql.DOUBLE_PRECISION)
data = Column(JSONText, nullable=True)
runtime = Column(DOUBLE_PRECISION)
retrieved_at = Column(db.DateTime(True))
__tablename__ = "query_results"
@@ -474,11 +457,11 @@ class Query(ChangeTrackingMixin, TimestampMixin, BelongsToOrgMixin, db.Model):
last_modified_by = db.relationship(User, backref="modified_queries", foreign_keys=[last_modified_by_id])
is_archived = Column(db.Boolean, default=False, index=True)
is_draft = Column(db.Boolean, default=True, index=True)
schedule = Column(MutableDict.as_mutable(PseudoJSON), nullable=True)
interval = pseudo_json_cast_property(db.Integer, "schedule", "interval", default=0)
schedule = Column(MutableDict.as_mutable(JSONB), nullable=True)
interval = json_cast_property(db.Integer, "schedule", "interval", default=0)
schedule_failures = Column(db.Integer, default=0)
visualizations = db.relationship("Visualization", cascade="all, delete-orphan")
options = Column(MutableDict.as_mutable(PseudoJSON), default={})
options = Column(MutableDict.as_mutable(JSONB), default={})
search_vector = Column(
TSVectorType(
"id",
@@ -489,7 +472,7 @@ class Query(ChangeTrackingMixin, TimestampMixin, BelongsToOrgMixin, db.Model):
),
nullable=True,
)
tags = Column("tags", MutableList.as_mutable(postgresql.ARRAY(db.Unicode)), nullable=True)
tags = Column("tags", MutableList.as_mutable(ARRAY(db.Unicode)), nullable=True)
query_class = SearchBaseQuery
__tablename__ = "queries"
@@ -525,7 +508,7 @@ class Query(ChangeTrackingMixin, TimestampMixin, BelongsToOrgMixin, db.Model):
name="Table",
description="",
type="TABLE",
options="{}",
options={},
)
)
return query
@@ -591,11 +574,12 @@ class Query(ChangeTrackingMixin, TimestampMixin, BelongsToOrgMixin, db.Model):
@classmethod
def past_scheduled_queries(cls):
now = utils.utcnow()
queries = Query.query.filter(Query.schedule.isnot(None)).order_by(Query.id)
queries = Query.query.filter(func.jsonb_typeof(Query.schedule) != "null").order_by(Query.id)
return [
query
for query in queries
if query.schedule["until"] is not None
if "until" in query.schedule
and query.schedule["until"] is not None
and pytz.utc.localize(datetime.datetime.strptime(query.schedule["until"], "%Y-%m-%d")) <= now
]
@@ -603,7 +587,7 @@ class Query(ChangeTrackingMixin, TimestampMixin, BelongsToOrgMixin, db.Model):
def outdated_queries(cls):
queries = (
Query.query.options(joinedload(Query.latest_query_data).load_only("retrieved_at"))
.filter(Query.schedule.isnot(None))
.filter(func.jsonb_typeof(Query.schedule) != "null")
.order_by(Query.id)
.all()
)
@@ -831,7 +815,20 @@ class Query(ChangeTrackingMixin, TimestampMixin, BelongsToOrgMixin, db.Model):
def update_query_hash(self):
should_apply_auto_limit = self.options.get("apply_auto_limit", False) if self.options else False
query_runner = self.data_source.query_runner if self.data_source else BaseQueryRunner({})
self.query_hash = query_runner.gen_query_hash(self.query_text, should_apply_auto_limit)
query_text = self.query_text
parameters_dict = {p["name"]: p.get("value") for p in self.parameters} if self.options else {}
if any(parameters_dict):
try:
query_text = self.parameterized.apply(parameters_dict).query
except InvalidParameterError as e:
logging.info(f"Unable to update hash for query {self.id} because of invalid parameters: {str(e)}")
except QueryDetachedFromDataSourceError as e:
logging.info(
f"Unable to update hash for query {self.id} because of dropdown query {e.query_id} is unattached from datasource"
)
self.query_hash = query_runner.gen_query_hash(query_text, should_apply_auto_limit)
@listens_for(Query, "before_insert")
@@ -936,7 +933,7 @@ class Alert(TimestampMixin, BelongsToOrgMixin, db.Model):
query_rel = db.relationship(Query, backref=backref("alerts", cascade="all"))
user_id = Column(key_type("User"), db.ForeignKey("users.id"))
user = db.relationship(User, backref="alerts")
options = Column(MutableDict.as_mutable(PseudoJSON))
options = Column(MutableDict.as_mutable(JSONB), nullable=True)
state = Column(db.String(255), default=UNKNOWN_STATE)
subscriptions = db.relationship("AlertSubscription", cascade="all, delete-orphan")
last_triggered_at = Column(db.DateTime(True), nullable=True)
@@ -1047,13 +1044,13 @@ class Dashboard(ChangeTrackingMixin, TimestampMixin, BelongsToOrgMixin, db.Model
user_id = Column(key_type("User"), db.ForeignKey("users.id"))
user = db.relationship(User)
# layout is no longer used, but kept so we know how to render old dashboards.
layout = Column(db.Text)
layout = Column(MutableList.as_mutable(JSONB), default=[])
dashboard_filters_enabled = Column(db.Boolean, default=False)
is_archived = Column(db.Boolean, default=False, index=True)
is_draft = Column(db.Boolean, default=True, index=True)
widgets = db.relationship("Widget", backref="dashboard", lazy="dynamic")
tags = Column("tags", MutableList.as_mutable(postgresql.ARRAY(db.Unicode)), nullable=True)
options = Column(MutableDict.as_mutable(postgresql.JSON), server_default="{}", default={})
tags = Column("tags", MutableList.as_mutable(ARRAY(db.Unicode)), nullable=True)
options = Column(MutableDict.as_mutable(JSONB), default={})
__tablename__ = "dashboards"
__mapper_args__ = {"version_id_col": version}
@@ -1166,7 +1163,7 @@ class Visualization(TimestampMixin, BelongsToOrgMixin, db.Model):
query_rel = db.relationship(Query, back_populates="visualizations")
name = Column(db.String(255))
description = Column(db.String(4096), nullable=True)
options = Column(db.Text)
options = Column(MutableDict.as_mutable(JSONB), nullable=True)
__tablename__ = "visualizations"
@@ -1193,7 +1190,7 @@ class Widget(TimestampMixin, BelongsToOrgMixin, db.Model):
visualization = db.relationship(Visualization, backref=backref("widgets", cascade="delete"))
text = Column(db.Text, nullable=True)
width = Column(db.Integer)
options = Column(db.Text)
options = Column(MutableDict.as_mutable(JSONB), default={})
dashboard_id = Column(key_type("Dashboard"), db.ForeignKey("dashboards.id"), index=True)
__tablename__ = "widgets"
@@ -1225,7 +1222,7 @@ class Event(db.Model):
action = Column(db.String(255))
object_type = Column(db.String(255))
object_id = Column(db.String(255), nullable=True)
additional_properties = Column(MutableDict.as_mutable(PseudoJSON), nullable=True, default={})
additional_properties = Column(MutableDict.as_mutable(JSONB), nullable=True, default={})
created_at = Column(db.DateTime(True), default=db.func.now())
__tablename__ = "events"

View File

@@ -1,13 +1,13 @@
import functools
from flask_sqlalchemy import BaseQuery, SQLAlchemy
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import object_session
from sqlalchemy.pool import NullPool
from sqlalchemy_searchable import SearchQueryMixin, make_searchable, vectorizer
from redash import settings
from redash.utils import json_dumps
from redash.utils import json_dumps, json_loads
class RedashSQLAlchemy(SQLAlchemy):
@@ -28,7 +28,10 @@ class RedashSQLAlchemy(SQLAlchemy):
return options
db = RedashSQLAlchemy(session_options={"expire_on_commit": False})
db = RedashSQLAlchemy(
session_options={"expire_on_commit": False},
engine_options={"json_serializer": json_dumps, "json_deserializer": json_loads},
)
# Make sure the SQLAlchemy mappers are all properly configured first.
# This is required by SQLAlchemy-Searchable as it adds DDL listeners
# on the configuration phase of models.
@@ -50,7 +53,7 @@ def integer_vectorizer(column):
return db.func.cast(column, db.Text)
@vectorizer(postgresql.UUID)
@vectorizer(UUID)
def uuid_vectorizer(column):
return db.func.cast(column, db.Text)
@@ -68,7 +71,7 @@ def gfk_type(cls):
return cls
class GFKBase(object):
class GFKBase:
"""
Compatibility with 'generic foreign key' approach Peewee used.
"""

View File

@@ -1,8 +1,8 @@
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.inspection import inspect
from sqlalchemy_utils.models import generic_repr
from .base import Column, GFKBase, db, key_type, primary_key
from .types import PseudoJSON
@generic_repr("id", "object_type", "object_id", "created_at")
@@ -13,7 +13,7 @@ class Change(GFKBase, db.Model):
object_version = Column(db.Integer, default=0)
user_id = Column(key_type("User"), db.ForeignKey("users.id"))
user = db.relationship("User", backref="changes")
change = Column(PseudoJSON)
change = Column(JSONB)
created_at = Column(db.DateTime(True), default=db.func.now())
__tablename__ = "changes"
@@ -45,7 +45,7 @@ class Change(GFKBase, db.Model):
)
class ChangeTrackingMixin(object):
class ChangeTrackingMixin:
skipped_fields = ("id", "created_at", "updated_at", "version")
_clean_values = None

View File

@@ -3,7 +3,7 @@ from sqlalchemy.event import listens_for
from .base import Column, db
class TimestampMixin(object):
class TimestampMixin:
updated_at = Column(db.DateTime(True), default=db.func.now(), nullable=False)
created_at = Column(db.DateTime(True), default=db.func.now(), nullable=False)
@@ -17,7 +17,7 @@ def timestamp_before_update(mapper, connection, target):
target.updated_at = db.func.now()
class BelongsToOrgMixin(object):
class BelongsToOrgMixin:
@classmethod
def get_by_id_and_org(cls, object_id, org, org_cls=None):
query = cls.query.filter(cls.id == object_id)

View File

@@ -1,3 +1,4 @@
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm.attributes import flag_modified
from sqlalchemy_utils.models import generic_repr
@@ -5,7 +6,7 @@ from redash.settings.organization import settings as org_settings
from .base import Column, db, primary_key
from .mixins import TimestampMixin
from .types import MutableDict, PseudoJSON
from .types import MutableDict
from .users import Group, User
@@ -17,7 +18,7 @@ class Organization(TimestampMixin, db.Model):
id = primary_key("Organization")
name = Column(db.String(255))
slug = Column(db.String(255), unique=True)
settings = Column(MutableDict.as_mutable(PseudoJSON))
settings = Column(MutableDict.as_mutable(JSONB), default={})
groups = db.relationship("Group", lazy="dynamic")
events = db.relationship("Event", lazy="dynamic", order_by="desc(Event.created_at)")

View File

@@ -103,7 +103,7 @@ def _is_value_within_options(value, dropdown_options, allow_list=False):
return str(value) in dropdown_options
class ParameterizedQuery(object):
class ParameterizedQuery:
def __init__(self, template, schema=None, org=None):
self.schema = schema or []
self.org = org

View File

@@ -1,25 +1,12 @@
from sqlalchemy import cast
from sqlalchemy.dialects.postgresql import JSON
from sqlalchemy.ext.indexable import index_property
from sqlalchemy.ext.mutable import Mutable
from sqlalchemy.types import TypeDecorator
from sqlalchemy_utils import EncryptedType
from redash.models.base import db
from redash.utils import json_dumps, json_loads
from redash.utils.configuration import ConfigurationContainer
from .base import db
class Configuration(TypeDecorator):
impl = db.Text
def process_bind_param(self, value, dialect):
return value.to_json()
def process_result_value(self, value, dialect):
return ConfigurationContainer.from_json(value)
class EncryptedConfiguration(EncryptedType):
def process_bind_param(self, value, dialect):
@@ -31,8 +18,8 @@ class EncryptedConfiguration(EncryptedType):
)
# XXX replace PseudoJSON and MutableDict with real JSON field
class PseudoJSON(TypeDecorator):
# Utilized for cases when JSON size is bigger than JSONB (255MB) or JSON (10MB) limit
class JSONText(TypeDecorator):
impl = db.Text
def process_bind_param(self, value, dialect):
@@ -107,19 +94,3 @@ class json_cast_property(index_property):
def expr(self, model):
expr = super(json_cast_property, self).expr(model)
return expr.astext.cast(self.cast_type)
class pseudo_json_cast_property(index_property):
"""
A SQLAlchemy index property that is able to cast the
entity attribute as the specified cast type. Useful
for PseudoJSON colums for easier querying/filtering.
"""
def __init__(self, cast_type, *args, **kwargs):
super().__init__(*args, **kwargs)
self.cast_type = cast_type
def expr(self, model):
expr = cast(getattr(model, self.attr_name), JSON)[self.index]
return expr.astext.cast(self.cast_type)
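
JSONText above keeps a Text-backed column for payloads larger than the JSONB/JSON size limits mentioned in its comment, while still exposing dict/list values in Python; a minimal TypeDecorator sketch of that pattern (illustrative, the real implementation lives in redash/models/types.py):

import json
from sqlalchemy.types import Text, TypeDecorator

class JSONTextSketch(TypeDecorator):
    impl = Text

    def process_bind_param(self, value, dialect):
        return None if value is None else json.dumps(value)   # Python value -> stored text

    def process_result_value(self, value, dialect):
        return None if value is None else json.loads(value)   # stored text -> Python value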

View File

@@ -8,7 +8,7 @@ from operator import or_
from flask import current_app, request_started, url_for
from flask_login import AnonymousUserMixin, UserMixin, current_user
from passlib.apps import custom_app_context as pwd_context
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY, JSONB
from sqlalchemy_utils import EmailType
from sqlalchemy_utils.models import generic_repr
@@ -60,7 +60,7 @@ def init_app(app):
request_started.connect(update_user_active_at, app)
class PermissionsCheckMixin(object):
class PermissionsCheckMixin:
def has_permission(self, permission):
return self.has_permissions((permission,))
@@ -84,14 +84,14 @@ class User(TimestampMixin, db.Model, BelongsToOrgMixin, UserMixin, PermissionsCh
password_hash = Column(db.String(128), nullable=True)
group_ids = Column(
"groups",
MutableList.as_mutable(postgresql.ARRAY(key_type("Group"))),
MutableList.as_mutable(ARRAY(key_type("Group"))),
nullable=True,
)
api_key = Column(db.String(40), default=lambda: generate_token(40), unique=True)
disabled_at = Column(db.DateTime(True), default=None, nullable=True)
details = Column(
MutableDict.as_mutable(postgresql.JSONB),
MutableDict.as_mutable(JSONB),
nullable=True,
server_default="{}",
default={},
@@ -267,7 +267,7 @@ class Group(db.Model, BelongsToOrgMixin):
org = db.relationship("Organization", back_populates="groups")
type = Column(db.String(255), default=REGULAR_GROUP)
name = Column(db.String(100))
permissions = Column(postgresql.ARRAY(db.String(255)), default=DEFAULT_PERMISSIONS)
permissions = Column(ARRAY(db.String(255)), default=DEFAULT_PERMISSIONS)
created_at = Column(db.DateTime(True), default=db.func.now())
__tablename__ = "groups"

View File

@@ -54,7 +54,7 @@ def require_access(obj, user, need_view_only):
abort(403)
class require_permissions(object):
class require_permissions:
def __init__(self, permissions, allow_one=False):
self.permissions = permissions
self.allow_one = allow_one

View File

@@ -9,7 +9,6 @@ from rq.timeouts import JobTimeoutException
from sshtunnel import open_tunnel
from redash import settings, utils
from redash.utils import json_loads
from redash.utils.requests_session import (
UnacceptableAddressException,
requests_or_advocate,
@@ -114,12 +113,13 @@ class NotSupported(Exception):
pass
class BaseQueryRunner(object):
class BaseQueryRunner:
deprecated = False
should_annotate_query = True
noop_query = None
limit_query = " LIMIT 1000"
limit_keywords = ["LIMIT", "OFFSET"]
limit_after_select = False
def __init__(self, configuration):
self.syntax = "sql"
@@ -243,7 +243,7 @@ class BaseQueryRunner(object):
if error is not None:
raise Exception("Failed running query [%s]." % query)
return json_loads(results)["rows"]
return results["rows"]
@classmethod
def to_dict(cls):
@@ -302,10 +302,19 @@ class BaseSQLQueryRunner(BaseQueryRunner):
parsed_query = sqlparse.parse(query)[0]
limit_tokens = sqlparse.parse(self.limit_query)[0].tokens
length = len(parsed_query.tokens)
if parsed_query.tokens[length - 1].ttype == sqlparse.tokens.Punctuation:
parsed_query.tokens[length - 1 : length - 1] = limit_tokens
if not self.limit_after_select:
if parsed_query.tokens[length - 1].ttype == sqlparse.tokens.Punctuation:
parsed_query.tokens[length - 1 : length - 1] = limit_tokens
else:
parsed_query.tokens += limit_tokens
else:
parsed_query.tokens += limit_tokens
for i in range(length - 1, -1, -1):
if parsed_query[i].value.upper() == "SELECT":
index = parsed_query.token_index(parsed_query[i + 1])
parsed_query = sqlparse.sql.Statement(
parsed_query.tokens[:index] + limit_tokens + parsed_query.tokens[index:]
)
break
return str(parsed_query)
def apply_auto_limit(self, query_text, should_apply_auto_limit):

View File

@@ -63,5 +63,8 @@ class AmazonElasticsearchService(ElasticSearch2):
self.auth = AWSV4Sign(cred, region, "es")
def get_auth(self):
return self.auth
register(AmazonElasticsearchService)

View File

@@ -7,7 +7,6 @@ from redash.query_runner import (
BaseQueryRunner,
register,
)
from redash.utils import json_dumps
logger = logging.getLogger(__name__)
@@ -81,12 +80,11 @@ class Arango(BaseQueryRunner):
"rows": result,
}
json_data = json_dumps(data, ignore_nan=True)
error = None
except Exception:
raise
return json_data, error
return data, error
register(Arango)
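
This is the pattern repeated across the query-runner diffs in this compare: run_query() now returns the result dict directly instead of a json_dumps() string, so callers such as get_schema() drop their json_loads(). A minimal sketch (runner shape and column names are made up):

def run_query(self, query, user):
    columns = [{"name": "id", "friendly_name": "id", "type": "integer"}]
    rows = [{"id": 1}]
    return {"columns": columns, "rows": rows}, None   # (data, error) instead of (json_data, error)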

View File

@@ -12,7 +12,6 @@ from redash.query_runner import (
register,
)
from redash.settings import parse_boolean
from redash.utils import json_dumps, json_loads
logger = logging.getLogger(__name__)
ANNOTATE_QUERY = parse_boolean(os.environ.get("ATHENA_ANNOTATE_QUERY", "true"))
@@ -47,7 +46,7 @@ _TYPE_MAPPINGS = {
}
class SimpleFormatter(object):
class SimpleFormatter:
def format(self, operation, parameters=None):
return operation
@@ -210,7 +209,6 @@ class Athena(BaseQueryRunner):
if error is not None:
self._handle_run_query_error(error)
results = json_loads(results)
for row in results["rows"]:
table_name = "{0}.{1}".format(row["table_schema"], row["table_name"])
if table_name not in schema:
@@ -257,14 +255,13 @@ class Athena(BaseQueryRunner):
},
}
json_data = json_dumps(data, ignore_nan=True)
error = None
except Exception:
if cursor.query_id:
cursor.cancel()
raise
return json_data, error
return data, error
register(Athena)

View File

@@ -13,7 +13,7 @@ from redash.query_runner import (
JobTimeoutException,
register,
)
from redash.utils import json_dumps, json_loads
from redash.utils import json_loads
logger = logging.getLogger(__name__)
@@ -157,17 +157,16 @@ class AxibaseTSD(BaseQueryRunner):
columns, rows = generate_rows_and_columns(data)
data = {"columns": columns, "rows": rows}
json_data = json_dumps(data)
error = None
except SQLException as e:
json_data = None
data = None
error = e.content
except (KeyboardInterrupt, InterruptException, JobTimeoutException):
sql.cancel_query(query_id)
raise
return json_data, error
return data, error
def get_schema(self, get_stats=False):
connection = atsd_client.connect_url(

View File

@@ -8,7 +8,7 @@ from redash.query_runner import (
BaseQueryRunner,
register,
)
from redash.utils import json_dumps, json_loads
from redash.utils import json_loads
try:
from azure.kusto.data.exceptions import KustoServiceError
@@ -124,16 +124,15 @@ class AzureKusto(BaseQueryRunner):
error = None
data = {"columns": columns, "rows": rows}
json_data = json_dumps(data)
except KustoServiceError as err:
json_data = None
data = None
try:
error = err.args[1][0]["error"]["@message"]
except (IndexError, KeyError):
error = err.args[1]
return json_data, error
return data, error
def get_schema(self, get_stats=False):
query = ".show database schema as json"
@@ -143,8 +142,6 @@ class AzureKusto(BaseQueryRunner):
if error is not None:
self._handle_run_query_error(error)
results = json_loads(results)
schema_as_json = json_loads(results["rows"][0]["DatabaseSchema"])
tables_list = schema_as_json["Databases"][self.configuration["database"]]["Tables"].values()

View File

@@ -16,7 +16,7 @@ from redash.query_runner import (
JobTimeoutException,
register,
)
from redash.utils import json_dumps, json_loads
from redash.utils import json_loads
logger = logging.getLogger(__name__)
@@ -100,7 +100,7 @@ class BigQuery(BaseQueryRunner):
def __init__(self, configuration):
super().__init__(configuration)
self.should_annotate_query = configuration["useQueryAnnotation"]
self.should_annotate_query = configuration.get("useQueryAnnotation", False)
@classmethod
def enabled(cls):
@@ -318,7 +318,6 @@ class BigQuery(BaseQueryRunner):
if error is not None:
self._handle_run_query_error(error)
results = json_loads(results)
for row in results["rows"]:
table_name = "{0}.{1}".format(row["table_schema"], row["table_name"])
if table_name not in schema:
@@ -346,9 +345,8 @@ class BigQuery(BaseQueryRunner):
data = self._get_query_result(jobs, query)
error = None
json_data = json_dumps(data, ignore_nan=True)
except apiclient.errors.HttpError as e:
json_data = None
data = None
if e.resp.status in [400, 404]:
error = json_loads(e.content)["error"]["message"]
else:
@@ -363,7 +361,7 @@ class BigQuery(BaseQueryRunner):
raise
return json_data, error
return data, error
register(BigQuery)

View File

@@ -5,7 +5,6 @@ from base64 import b64decode
from tempfile import NamedTemporaryFile
from redash.query_runner import BaseQueryRunner, register
from redash.utils import JSONEncoder, json_dumps, json_loads
logger = logging.getLogger(__name__)
@@ -27,13 +26,6 @@ def generate_ssl_options_dict(protocol, cert_path=None):
return ssl_options
class CassandraJSONEncoder(JSONEncoder):
def default(self, o):
if isinstance(o, sortedset):
return list(o)
return super(CassandraJSONEncoder, self).default(o)
class Cassandra(BaseQueryRunner):
noop_query = "SELECT dateof(now()) FROM system.local"
@@ -41,6 +33,12 @@ class Cassandra(BaseQueryRunner):
def enabled(cls):
return enabled
@classmethod
def custom_json_encoder(cls, dec, o):
if isinstance(o, sortedset):
return list(o)
return None
@classmethod
def configuration_schema(cls):
return {
@@ -86,7 +84,6 @@ class Cassandra(BaseQueryRunner):
select release_version from system.local;
"""
results, error = self.run_query(query, None)
results = json_loads(results)
release_version = results["rows"][0]["release_version"]
query = """
@@ -107,7 +104,6 @@ class Cassandra(BaseQueryRunner):
)
results, error = self.run_query(query, None)
results = json_loads(results)
schema = {}
for row in results["rows"]:
@@ -155,9 +151,8 @@ class Cassandra(BaseQueryRunner):
rows = [dict(zip(column_names, row)) for row in result]
data = {"columns": columns, "rows": rows}
json_data = json_dumps(data, cls=CassandraJSONEncoder)
return json_data, None
return data, None
def _generate_cert_file(self):
cert_encoded_bytes = self.configuration.get("sslCertificateFile", None)

View File

@@ -15,7 +15,6 @@ from redash.query_runner import (
register,
split_sql_statements,
)
from redash.utils import json_dumps, json_loads
logger = logging.getLogger(__name__)
@@ -85,8 +84,6 @@ class ClickHouse(BaseSQLQueryRunner):
if error is not None:
self._handle_run_query_error(error)
results = json_loads(results)
for row in results["rows"]:
table_name = "{}.{}".format(row["database"], row["table"])
@@ -124,7 +121,7 @@ class ClickHouse(BaseSQLQueryRunner):
verify=verify,
)
if r.status_code != 200:
if not r.ok:
raise Exception(r.text)
# In certain situations the response body can be empty even if the query was successful, for example
@@ -132,7 +129,11 @@ class ClickHouse(BaseSQLQueryRunner):
if not r.text:
return {}
return r.json()
response = r.json()
if "exception" in response:
raise Exception(response["exception"])
return response
except requests.RequestException as e:
if e.response:
details = "({}, Status Code: {})".format(e.__class__.__name__, e.response.status_code)
@@ -200,25 +201,24 @@ class ClickHouse(BaseSQLQueryRunner):
queries = split_multi_query(query)
if not queries:
json_data = None
data = None
error = "Query is empty"
return json_data, error
return data, error
try:
# If just one query was given no session is needed
if len(queries) == 1:
results = self._clickhouse_query(queries[0])
data = self._clickhouse_query(queries[0])
else:
# If more than one query was given, a session is needed. Parameter session_check must be false
# for the first query
session_id = "redash_{}".format(uuid4().hex)
results = self._clickhouse_query(queries[0], session_id, session_check=False)
data = self._clickhouse_query(queries[0], session_id, session_check=False)
for query in queries[1:]:
results = self._clickhouse_query(query, session_id, session_check=True)
data = self._clickhouse_query(query, session_id, session_check=True)
data = json_dumps(results)
error = None
except Exception as e:
data = None

View File

@@ -3,7 +3,7 @@ import datetime
import yaml
from redash.query_runner import BaseQueryRunner, register
from redash.utils import json_dumps, parse_human_time
from redash.utils import parse_human_time
try:
import boto3
@@ -121,7 +121,7 @@ class CloudWatch(BaseQueryRunner):
rows, columns = parse_response(results)
return json_dumps({"rows": rows, "columns": columns}), None
return {"rows": rows, "columns": columns}, None
register(CloudWatch)

View File

@@ -4,7 +4,7 @@ import time
import yaml
from redash.query_runner import BaseQueryRunner, register
from redash.utils import json_dumps, parse_human_time
from redash.utils import parse_human_time
try:
import boto3
@@ -146,7 +146,7 @@ class CloudWatchInsights(BaseQueryRunner):
time.sleep(POLL_INTERVAL)
elapsed += POLL_INTERVAL
return json_dumps(data), None
return data, None
register(CloudWatchInsights)

View File

@@ -9,7 +9,6 @@ import logging
from os import environ
from redash.query_runner import BaseQueryRunner
from redash.utils import json_dumps, json_loads
from . import register
@@ -115,7 +114,7 @@ class CorporateMemoryQueryRunner(BaseQueryRunner):
logger.info("results are: {}".format(results))
# Not sure why we do not use the json package here but all other
# query runner do it the same way :-)
sparql_results = json_loads(results)
sparql_results = results
# transform all bindings to redash rows
rows = []
for sparql_row in sparql_results["results"]["bindings"]:
@@ -133,7 +132,7 @@ class CorporateMemoryQueryRunner(BaseQueryRunner):
columns.append({"name": var, "friendly_name": var, "type": "string"})
# Not sure why we do not use the json package here but all other
# query runner do it the same way :-)
return json_dumps({"columns": columns, "rows": rows})
return {"columns": columns, "rows": rows}
@classmethod
def name(cls):

View File

@@ -10,7 +10,6 @@ from redash.query_runner import (
BaseQueryRunner,
register,
)
from redash.utils import json_dumps
logger = logging.getLogger(__name__)
try:
@@ -155,7 +154,7 @@ class Couchbase(BaseQueryRunner):
rows, columns = parse_results(result.json()["results"])
data = {"columns": columns, "rows": rows}
return json_dumps(data), None
return data, None
@classmethod
def name(cls):

View File

@@ -4,7 +4,6 @@ import logging
import yaml
from redash.query_runner import BaseQueryRunner, NotSupported, register
from redash.utils import json_dumps
from redash.utils.requests_session import (
UnacceptableAddressException,
requests_or_advocate,
@@ -96,19 +95,18 @@ class CSV(BaseQueryRunner):
break
data["rows"] = df[labels].replace({np.nan: None}).to_dict(orient="records")
json_data = json_dumps(data)
error = None
except KeyboardInterrupt:
error = "Query cancelled by user."
json_data = None
data = None
except UnacceptableAddressException:
error = "Can't query private addresses."
json_data = None
data = None
except Exception as e:
error = "Error reading {0}. {1}".format(path, str(e))
json_data = None
data = None
return json_data, error
return data, error
def get_schema(self):
raise NotSupported()

View File

@@ -16,7 +16,6 @@ from redash.query_runner import (
BaseQueryRunner,
register,
)
from redash.utils import json_dumps, json_loads
class Databend(BaseQueryRunner):
@@ -85,11 +84,10 @@ class Databend(BaseQueryRunner):
data = {"columns": columns, "rows": rows}
error = None
json_data = json_dumps(data)
finally:
connection.close()
return json_data, error
return data, error
def get_schema(self, get_stats=False):
query = """
@@ -106,7 +104,6 @@ class Databend(BaseQueryRunner):
self._handle_run_query_error(error)
schema = {}
results = json_loads(results)
for row in results["rows"]:
table_name = "{}.{}".format(row["table_schema"], row["table_name"])
@@ -133,7 +130,6 @@ class Databend(BaseQueryRunner):
self._handle_run_query_error(error)
schema = {}
results = json_loads(results)
for row in results["rows"]:
table_name = "{}.{}".format(row["table_schema"], row["table_name"])

View File

@@ -16,7 +16,6 @@ from redash.query_runner import (
split_sql_statements,
)
from redash.settings import cast_int_or_default
from redash.utils import json_dumps, json_loads
try:
import pyodbc
@@ -115,16 +114,13 @@ class Databricks(BaseSQLQueryRunner):
logger.warning("Truncated result set.")
statsd_client.incr("redash.query_runner.databricks.truncated")
data["truncated"] = True
json_data = json_dumps(data)
error = None
else:
error = None
json_data = json_dumps(
{
"columns": [{"name": "result", "type": TYPE_STRING}],
"rows": [{"result": "No data was returned."}],
}
)
data = {
"columns": [{"name": "result", "type": TYPE_STRING}],
"rows": [{"result": "No data was returned."}],
}
cursor.close()
except pyodbc.Error as e:
@@ -132,9 +128,9 @@ class Databricks(BaseSQLQueryRunner):
error = str(e.args[1])
else:
error = str(e)
json_data = None
data = None
return json_data, error
return data, error
def get_schema(self):
raise NotSupported()
@@ -146,8 +142,6 @@ class Databricks(BaseSQLQueryRunner):
if error is not None:
self._handle_run_query_error(error)
results = json_loads(results)
first_column_name = results["columns"][0]["name"]
return [row[first_column_name] for row in results["rows"]]


@@ -11,7 +11,6 @@ from redash.query_runner import (
JobTimeoutException,
register,
)
from redash.utils import json_dumps, json_loads
logger = logging.getLogger(__name__)
@@ -78,8 +77,6 @@ class DB2(BaseSQLQueryRunner):
if error is not None:
self._handle_run_query_error(error)
results = json_loads(results)
for row in results["rows"]:
if row["TABLE_SCHEMA"] != "public":
table_name = "{}.{}".format(row["TABLE_SCHEMA"], row["TABLE_NAME"])
@@ -130,23 +127,22 @@ class DB2(BaseSQLQueryRunner):
data = {"columns": columns, "rows": rows}
error = None
json_data = json_dumps(data)
else:
error = "Query completed but it returned no data."
json_data = None
data = None
except (select.error, OSError):
error = "Query interrupted. Please retry."
json_data = None
data = None
except ibm_db_dbi.DatabaseError as e:
error = str(e)
json_data = None
data = None
except (KeyboardInterrupt, InterruptException, JobTimeoutException):
connection.cancel()
raise
finally:
connection.close()
return json_data, error
return data, error
register(DB2)


@@ -8,7 +8,6 @@ except ImportError:
enabled = False
from redash.query_runner import BaseQueryRunner, register
from redash.utils import json_dumps
def reduce_item(reduced_item, key, value):
@@ -81,7 +80,7 @@ class Dgraph(BaseQueryRunner):
client_stub.close()
def run_query(self, query, user):
json_data = None
data = None
error = None
try:
@@ -109,12 +108,10 @@ class Dgraph(BaseQueryRunner):
# finally, assemble both the columns and data
data = {"columns": columns, "rows": processed_data}
json_data = json_dumps(data)
except Exception as e:
error = e
return json_data, error
return data, error
def get_schema(self, get_stats=False):
"""Queries Dgraph for all the predicates, their types, their tokenizers, etc.


@@ -13,7 +13,6 @@ from redash.query_runner import (
guess_type,
register,
)
from redash.utils import json_dumps, json_loads
logger = logging.getLogger(__name__)
@@ -98,9 +97,7 @@ class Drill(BaseHTTPQueryRunner):
if error is not None:
return None, error
results = parse_response(response.json())
return json_dumps(results), None
return parse_response(response.json()), None
def get_schema(self, get_stats=False):
query = """
@@ -132,8 +129,6 @@ class Drill(BaseHTTPQueryRunner):
if error is not None:
self._handle_run_query_error(error)
results = json_loads(results)
schema = {}
for row in results["rows"]:


@@ -12,7 +12,6 @@ from redash.query_runner import (
BaseQueryRunner,
register,
)
from redash.utils import json_dumps, json_loads
TYPES_MAP = {1: TYPE_STRING, 2: TYPE_INTEGER, 3: TYPE_BOOLEAN}
@@ -59,12 +58,10 @@ class Druid(BaseQueryRunner):
data = {"columns": columns, "rows": rows}
error = None
json_data = json_dumps(data)
print(json_data)
finally:
connection.close()
return json_data, error
return data, error
def get_schema(self, get_stats=False):
query = """
@@ -81,7 +78,6 @@ class Druid(BaseQueryRunner):
self._handle_run_query_error(error)
schema = {}
results = json_loads(results)
for row in results["rows"]:
table_name = "{}.{}".format(row["TABLE_SCHEMA"], row["TABLE_NAME"])


@@ -19,7 +19,6 @@ try:
except ImportError:
enabled = False
from redash.utils import json_dumps
logger = logging.getLogger(__name__)
@@ -106,18 +105,17 @@ class e6data(BaseQueryRunner):
columns.append({"name": column_name, "type": column_type})
rows = [dict(zip([c["name"] for c in columns], r)) for r in results]
data = {"columns": columns, "rows": rows}
json_data = json_dumps(data)
error = None
except Exception as error:
logger.debug(error)
json_data = None
data = None
finally:
if cursor is not None:
cursor.clear()
cursor.close()
return json_data, error
return data, error
def test_connection(self):
self.noop_query = "SELECT 1"


@@ -16,7 +16,7 @@ from redash.query_runner import (
JobTimeoutException,
register,
)
from redash.utils import json_dumps, json_loads
from redash.utils import json_loads
try:
import http.client as http_client
@@ -129,6 +129,8 @@ class BaseElasticSearch(BaseQueryRunner):
for index_name in mappings_data:
index_mappings = mappings_data[index_name]
for m in index_mappings.get("mappings", {}):
if not isinstance(index_mappings["mappings"][m], dict):
continue
if "properties" not in index_mappings["mappings"][m]:
continue
for property_name in index_mappings["mappings"][m]["properties"]:
@@ -406,18 +408,18 @@ class Kibana(BaseElasticSearch):
# TODO: Handle complete ElasticSearch queries (JSON based sent over HTTP POST)
raise Exception("Advanced queries are not supported")
json_data = json_dumps({"columns": result_columns, "rows": result_rows})
data = {"columns": result_columns, "rows": result_rows}
except requests.HTTPError as e:
logger.exception(e)
r = e.response
error = "Failed to execute query. Return Code: {0} Reason: {1}".format(r.status_code, r.text)
json_data = None
data = None
except requests.exceptions.RequestException as e:
logger.exception(e)
error = "Connection refused"
json_data = None
data = None
return json_data, error
return data, error
class ElasticSearch(BaseElasticSearch):
@@ -460,20 +462,20 @@ class ElasticSearch(BaseElasticSearch):
result_rows = []
self._parse_results(mappings, result_fields, r.json(), result_columns, result_rows)
json_data = json_dumps({"columns": result_columns, "rows": result_rows})
data = {"columns": result_columns, "rows": result_rows}
except (KeyboardInterrupt, JobTimeoutException) as e:
logger.exception(e)
raise
except requests.HTTPError as e:
logger.exception(e)
error = "Failed to execute query. Return Code: {0} Reason: {1}".format(r.status_code, r.text)
json_data = None
data = None
except requests.exceptions.RequestException as e:
logger.exception(e)
error = "Connection refused"
json_data = None
data = None
return json_data, error
return data, error
register(Kibana)

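Besides dropping json_dumps, the Elasticsearch mappings hunk above adds a type guard while walking index mappings: entries under "mappings" whose value is not a dict, or that carry no "properties" key, are now skipped instead of breaking schema discovery. A standalone sketch of that traversal; the payload and the schema-building body are made up for illustration:

mappings_data = {
    "logs-2024": {
        "mappings": {
            "dynamic": "true",  # not a dict, so the new isinstance() guard skips it
            "_doc": {"properties": {"message": {"type": "text"}, "ts": {"type": "date"}}},
        }
    }
}

schema = {}
for index_name in mappings_data:
    index_mappings = mappings_data[index_name]
    for m in index_mappings.get("mappings", {}):
        if not isinstance(index_mappings["mappings"][m], dict):
            continue
        if "properties" not in index_mappings["mappings"][m]:
            continue
        for property_name in index_mappings["mappings"][m]["properties"]:
            schema.setdefault(index_name, {"name": index_name, "columns": []})
            schema[index_name]["columns"].append(property_name)

print(schema)  # {'logs-2024': {'name': 'logs-2024', 'columns': ['message', 'ts']}}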

@@ -1,3 +1,4 @@
import json
import logging
from typing import Optional, Tuple
@@ -10,7 +11,6 @@ from redash.query_runner import (
BaseHTTPQueryRunner,
register,
)
from redash.utils import json_dumps, json_loads
logger = logging.getLogger(__name__)
@@ -46,7 +46,7 @@ class ElasticSearch2(BaseHTTPQueryRunner):
self.syntax = "json"
def get_response(self, url, auth=None, http_method="get", **kwargs):
url = "{}{}".format(self.configuration["url"], url)
url = "{}{}".format(self.configuration["server"], url)
headers = kwargs.pop("headers", {})
headers["Accept"] = "application/json"
return super().get_response(url, auth, http_method, headers=headers, **kwargs)
@@ -62,11 +62,10 @@ class ElasticSearch2(BaseHTTPQueryRunner):
query_results = response.json()
data = self._parse_results(result_fields, query_results)
error = None
json_data = json_dumps(data)
return json_data, error
return data, error
def _build_query(self, query: str) -> Tuple[dict, str, Optional[list]]:
query = json_loads(query)
query = json.loads(query)
index_name = query.pop("index", "")
result_fields = query.pop("result_fields", None)
url = "/{}/_search".format(index_name)

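The ElasticSearch2 runner above is a slightly different case: the query text arriving from the editor is itself a JSON document, so it still has to be parsed, only now with the standard library's json module, while the query result keeps flowing through as a plain dict. A trimmed sketch of that parsing step; build_search_request is a stand-in for the runner's _build_query method and the sample body is invented:

import json

def build_search_request(query_text):
    # The editor sends the query as JSON text; parse it with the stdlib.
    query = json.loads(query_text)
    index_name = query.pop("index", "")
    result_fields = query.pop("result_fields", None)
    url = "/{}/_search".format(index_name)
    return query, url, result_fields

query, url, result_fields = build_search_request(
    '{"index": "logs-2024", "result_fields": ["message"], "query": {"match_all": {}}}'
)
# query == {"query": {"match_all": {}}}, url == "/logs-2024/_search"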

@@ -9,7 +9,6 @@ from redash.query_runner import (
BaseQueryRunner,
register,
)
from redash.utils import json_dumps
def _exasol_type_mapper(val, data_type):
@@ -109,14 +108,13 @@ class Exasol(BaseQueryRunner):
rows = [dict(zip(cnames, row)) for row in statement]
data = {"columns": columns, "rows": rows}
json_data = json_dumps(data)
finally:
if statement is not None:
statement.close()
connection.close()
return json_data, error
return data, error
def get_schema(self, get_stats=False):
query = """


@@ -3,7 +3,6 @@ import logging
import yaml
from redash.query_runner import BaseQueryRunner, NotSupported, register
from redash.utils import json_dumps
from redash.utils.requests_session import (
UnacceptableAddressException,
requests_or_advocate,
@@ -94,19 +93,18 @@ class Excel(BaseQueryRunner):
break
data["rows"] = df[labels].replace({np.nan: None}).to_dict(orient="records")
json_data = json_dumps(data)
error = None
except KeyboardInterrupt:
error = "Query cancelled by user."
json_data = None
data = None
except UnacceptableAddressException:
error = "Can't query private addresses."
json_data = None
data = None
except Exception as e:
error = "Error reading {0}. {1}".format(path, str(e))
json_data = None
data = None
return json_data, error
return data, error
def get_schema(self):
raise NotSupported()


@@ -12,7 +12,7 @@ from redash.query_runner import (
BaseSQLQueryRunner,
register,
)
from redash.utils import json_dumps, json_loads
from redash.utils import json_loads
logger = logging.getLogger(__name__)
@@ -180,15 +180,14 @@ class GoogleAnalytics(BaseSQLQueryRunner):
response = api.get(**params).execute()
data = parse_ga_response(response)
error = None
json_data = json_dumps(data)
except HttpError as e:
# Make sure we return a more readable error to the end user
error = e._get_reason()
json_data = None
data = None
else:
error = "Wrong query format."
json_data = None
return json_data, error
data = None
return data, error
register(GoogleAnalytics)


@@ -13,7 +13,7 @@ from redash.query_runner import (
BaseQueryRunner,
register,
)
from redash.utils import json_dumps, json_loads
from redash.utils import json_loads
logger = logging.getLogger(__name__)
@@ -160,9 +160,8 @@ class GoogleAnalytics4(BaseQueryRunner):
data = parse_ga_response(raw_result)
error = None
json_data = json_dumps(data)
return json_data, error
return data, error
def test_connection(self):
try:


@@ -11,7 +11,7 @@ from redash.query_runner import (
BaseSQLQueryRunner,
register,
)
from redash.utils import json_dumps, json_loads
from redash.utils import json_loads
logger = logging.getLogger(__name__)
@@ -151,15 +151,14 @@ class GoogleSearchConsole(BaseSQLQueryRunner):
response = api.searchanalytics().query(siteUrl=site_url, body=params).execute()
data = parse_ga_response(response, params["dimensions"])
error = None
json_data = json_dumps(data)
except HttpError as e:
# Make sure we return a more readable error to the end user
error = e._get_reason()
json_data = None
data = None
else:
error = "Wrong query format."
json_data = None
return json_data, error
data = None
return data, error
register(GoogleSearchConsole)


@@ -16,7 +16,7 @@ from redash.query_runner import (
guess_type,
register,
)
from redash.utils import json_dumps, json_loads
from redash.utils import json_loads
logger = logging.getLogger(__name__)
@@ -257,7 +257,7 @@ class GoogleSpreadsheet(BaseQueryRunner):
data = parse_spreadsheet(SpreadsheetWrapper(spreadsheet), worksheet_num_or_title)
return json_dumps(data), None
return data, None
except gspread.SpreadsheetNotFound:
return (
None,


@@ -10,7 +10,6 @@ from redash.query_runner import (
BaseQueryRunner,
register,
)
from redash.utils import json_dumps
logger = logging.getLogger(__name__)
@@ -35,8 +34,7 @@ def _transform_result(response):
}
)
data = {"columns": columns, "rows": rows}
return json_dumps(data)
return {"columns": columns, "rows": rows}
class Graphite(BaseQueryRunner):


@@ -12,7 +12,6 @@ from redash.query_runner import (
JobTimeoutException,
register,
)
from redash.utils import json_dumps
logger = logging.getLogger(__name__)
@@ -139,7 +138,6 @@ class Hive(BaseSQLQueryRunner):
rows = [dict(zip(column_names, row)) for row in cursor]
data = {"columns": columns, "rows": rows}
json_data = json_dumps(data)
error = None
except (KeyboardInterrupt, JobTimeoutException):
if connection:
@@ -150,12 +148,12 @@ class Hive(BaseSQLQueryRunner):
error = e.args[0].status.errorMessage
except AttributeError:
error = str(e)
json_data = None
data = None
finally:
if connection:
connection.close()
return json_data, error
return data, error
class HiveHttp(Hive):


@@ -12,7 +12,6 @@ from redash.query_runner import (
JobTimeoutException,
register,
)
from redash.utils import json_dumps, json_loads
ignite_available = importlib.util.find_spec("pyignite") is not None
gridgain_available = importlib.util.find_spec("pygridgain") is not None
@@ -81,8 +80,6 @@ class Ignite(BaseSQLQueryRunner):
if error is not None:
raise Exception("Failed getting schema.")
results = json_loads(results)
for row in results["rows"]:
if row["SCHEMA_NAME"] != self.configuration.get("schema", "PUBLIC"):
table_name = "{}.{}".format(row["SCHEMA_NAME"], row["TABLE_NAME"])
@@ -160,8 +157,8 @@ class Ignite(BaseSQLQueryRunner):
)
logger.debug("Ignite running query: %s", query)
data = self._parse_results(cursor)
json_data = json_dumps({"columns": data[0], "rows": data[1]})
result = self._parse_results(cursor)
data = {"columns": result[0], "rows": result[1]}
error = None
except (KeyboardInterrupt, JobTimeoutException):
@@ -171,7 +168,7 @@ class Ignite(BaseSQLQueryRunner):
if connection:
connection.close()
return json_data, error
return data, error
register(Ignite)


@@ -10,7 +10,6 @@ from redash.query_runner import (
JobTimeoutException,
register,
)
from redash.utils import json_dumps
logger = logging.getLogger(__name__)
@@ -120,14 +119,13 @@ class Impala(BaseSQLQueryRunner):
rows = [dict(zip(column_names, row)) for row in cursor]
data = {"columns": columns, "rows": rows}
json_data = json_dumps(data)
error = None
cursor.close()
except DatabaseError as e:
json_data = None
data = None
error = str(e)
except RPCError as e:
json_data = None
data = None
error = "Metastore Error [%s]" % str(e)
except (KeyboardInterrupt, JobTimeoutException):
connection.cancel()
@@ -136,7 +134,7 @@ class Impala(BaseSQLQueryRunner):
if connection:
connection.close()
return json_data, error
return data, error
register(Impala)


@@ -7,7 +7,6 @@ from redash.query_runner import (
BaseQueryRunner,
register,
)
from redash.utils import json_dumps
logger = logging.getLogger(__name__)
@@ -64,7 +63,7 @@ def _transform_result(results):
else:
result_columns = [{"name": c, "type": TYPE_STRING} for c in column_names]
return json_dumps({"columns": result_columns, "rows": result_rows})
return {"columns": result_columns, "rows": result_rows}
class InfluxDB(BaseQueryRunner):


@@ -13,7 +13,6 @@ from redash.query_runner import (
BaseQueryRunner,
register,
)
from redash.utils import json_dumps
try:
from influxdb_client import InfluxDBClient
@@ -188,7 +187,7 @@ class InfluxDBv2(BaseQueryRunner):
2. element: An error message, if an error occured. None, if no
error occurred.
"""
json_data = None
data = None
error = None
try:
@@ -204,14 +203,12 @@ class InfluxDBv2(BaseQueryRunner):
tables = client.query_api().query(query)
data = self._get_data_from_tables(tables)
json_data = json_dumps(data)
except Exception as ex:
error = str(ex)
finally:
self._cleanup_cert_files(influx_kwargs)
return json_data, error
return data, error
register(InfluxDBv2)


@@ -2,11 +2,11 @@ import re
from collections import OrderedDict
from redash.query_runner import TYPE_STRING, BaseHTTPQueryRunner, register
from redash.utils import json_dumps, json_loads
from redash.utils import json_loads
# TODO: make this more general and move into __init__.py
class ResultSet(object):
class ResultSet:
def __init__(self):
self.columns = OrderedDict()
self.rows = []
@@ -26,7 +26,7 @@ class ResultSet(object):
}
def to_json(self):
return json_dumps({"rows": self.rows, "columns": list(self.columns.values())})
return {"rows": self.rows, "columns": list(self.columns.values())}
def merge(self, set):
self.rows = self.rows + set.rows
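The ResultSet helper in this last file keeps its method names but not its serialization: to_json() now returns the {"rows": ..., "columns": ...} dict as-is. A small usage sketch; the rows are invented and only attributes visible in this diff are used:

result = ResultSet()
result.rows.append({"key": "PROJ-1", "status": "Open"})    # columns left empty for brevity
result.rows.append({"key": "PROJ-2", "status": "Closed"})

other = ResultSet()
other.rows.append({"key": "PROJ-3", "status": "Open"})
result.merge(other)   # per the hunk above, merge() concatenates the rows

data = result.to_json()   # despite the name, this is now a plain dict
# data == {"rows": [...three rows...], "columns": []}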

Some files were not shown because too many files have changed in this diff.