Compare commits

...

167 Commits

Author SHA1 Message Date
Roman Acevedo
4b0ba8ace0 chore: bump com.gradleup.shadow from 8.3.9 to 9.0.1 2025-08-18 12:06:17 +02:00
Roman Acevedo
73cf7f04fb test(e2e): make sure used docker image is local 2025-08-18 12:03:44 +02:00
Roman Acevedo
ac0ab7e8fa Revert "build(deps): bump com.gradleup.shadow from 8.3.9 to 9.0.1"
This reverts commit fa6da9bd0b.
2025-08-18 12:03:44 +02:00
Roman Acevedo
c1876e69ed test(e2e): print logs if backend failed to start 2025-08-18 12:03:44 +02:00
Roman Acevedo
cf73a80f2e test(e2e): fix e2e marked as cancelled when near timeout 2025-08-18 12:03:44 +02:00
Barthélémy Ledoux
53687f4a1f fix(core): avoid triggering hundreds of reactivity updates for each icon (#10766) 2025-08-18 11:37:37 +02:00
Florian Hussonnois
749bf94125 fix(core): fix preconditions rendering for ExecutionOutputs (#10651)
Ensure that preconditions are always re-rendered for any
new execution

Changes:
* add new fluent skipCache methods on RunContextProperty and Property
  classes

Fix: #10651
2025-08-18 09:24:58 +02:00
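To illustrate the fluent skipCache idea described in the commit above, here is a minimal sketch of a cached property wrapper; the class and field names are hypothetical and not the actual RunContextProperty/Property implementation.

import java.util.function.Supplier;

// Hypothetical sketch of a fluent skipCache() toggle on a cached property wrapper.
public final class CachedProperty<T> {
    private final Supplier<T> renderer;
    private boolean skipCache = false;
    private boolean resolved = false;
    private T cached;

    public CachedProperty(Supplier<T> renderer) {
        this.renderer = renderer;
    }

    // Fluent toggle: chain .skipCache() to force re-rendering on every access.
    public CachedProperty<T> skipCache() {
        this.skipCache = true;
        return this;
    }

    public T get() {
        if (skipCache) {
            return renderer.get();      // always re-render
        }
        if (!resolved) {
            cached = renderer.get();    // render once, then memoize
            resolved = true;
        }
        return cached;
    }
}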
Nicolas K.
25a7994f63 fix(test): disable kafka concurrency queue test (#10755)
Co-authored-by: nKwiatkowski <nkwiatkowski@kestra.io>
2025-08-14 16:59:21 +02:00
Anna Geller
e03c894f3a fix: spelling 2025-08-14 15:27:09 +02:00
Piyush Bhaskar
99772c1a48 fix(ui): fixes logo cut off on no permission interface (#10739) 2025-08-14 18:43:28 +05:30
Roman Acevedo
93d6b816bf fix(tests): namespace binding was breaking filtering in Flow page
fixes https://github.com/kestra-io/kestra-ee/issues/4691

the additional namespace binding in Tabs was added in PR https://github.com/kestra-io/kestra/pull/10543 to solve the special case of Namespace creation
2025-08-14 13:39:42 +02:00
Nicolas K.
a3b0512bec feat(storages): #10636 add tenant id to mock trigger (#10749)
Co-authored-by: nKwiatkowski <nkwiatkowski@kestra.io>
2025-08-14 12:07:21 +02:00
Loïc Mathieu
265f72b629 fix(execution): parallel flowable may not end all child flowables
Parallel flowable tasks like `Parallel`, `Dag` and `ForEach` are racy. When a task fails in a branch, other concurrent branches that contain flowables may never end.
We now make sure that all children are terminated when a flowable is itself terminated.

Fixes #6780
2025-08-14 12:06:15 +02:00
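A rough sketch of that termination idea, using a simplified task-run model (the record and method below are illustrative, not the real Kestra executor types):

import java.util.List;

// Simplified illustration: when a flowable parent reaches a terminal state,
// force every non-terminal child onto a terminal state as well.
public class FlowableTermination {
    public record TaskRun(String id, String parentId, boolean terminated) {}

    public static List<TaskRun> terminateChildren(List<TaskRun> taskRuns, TaskRun parent) {
        return taskRuns.stream()
            .map(run -> parent.id().equals(run.parentId()) && !run.terminated()
                ? new TaskRun(run.id(), run.parentId(), true) // terminate the child
                : run)
            .toList();
    }
}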
YannC
07a8d9a665 fix: avoid file being displayed as diff in namespace file editor (#10746)
close #10744
2025-08-14 10:38:33 +02:00
Piyush Bhaskar
59bd607db2 refactor(misc): add misc module to override (#10737) 2025-08-14 13:48:29 +05:30
Nicolas K.
1618815df4 Feat/add get path without tenant (#10741)
* feat(storages): #10636 add get path without tenant id

* feat(storages): #10636 remove first / from get path method

---------

Co-authored-by: nKwiatkowski <nkwiatkowski@kestra.io>
2025-08-13 17:48:03 +02:00
Nicolas K.
a2c3799ab7 feat(storages): #10636 add get path without tenant id (#10740)
Co-authored-by: nKwiatkowski <nkwiatkowski@kestra.io>
2025-08-13 16:51:09 +02:00
Loïc Mathieu
986a2b4d11 chore(ci): don't run docker PR image workflow on forks 2025-08-13 15:32:41 +02:00
Loïc Mathieu
cdd591dab7 fix(tests): makes JdbcQueueTest less flaky 2025-08-13 14:56:39 +02:00
Malaydewangan09
9f5cf5aeb9 fix(): subgroups for better readability 2025-08-13 14:41:47 +05:30
Nicolas K.
cc5f73ae06 wip(storages): add non tenant dependant method to storage interface (#10637)
* wip(storages): add non tenant dependant method to storage interface

* feat(storages): #10636 add instance method to retrieve resources without the tenant id

* fix(stores): #4353 failing unit tests after now that tenant id can't be null

---------

Co-authored-by: nKwiatkowski <nkwiatkowski@kestra.io>
2025-08-13 11:00:25 +02:00
dependabot[bot]
e461e46a1c build(deps): bump io.micrometer:micrometer-core from 1.15.2 to 1.15.3
Bumps [io.micrometer:micrometer-core](https://github.com/micrometer-metrics/micrometer) from 1.15.2 to 1.15.3.
- [Release notes](https://github.com/micrometer-metrics/micrometer/releases)
- [Commits](https://github.com/micrometer-metrics/micrometer/compare/v1.15.2...v1.15.3)

---
updated-dependencies:
- dependency-name: io.micrometer:micrometer-core
  dependency-version: 1.15.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-13 10:52:52 +02:00
dependabot[bot]
fa6da9bd0b build(deps): bump com.gradleup.shadow from 8.3.9 to 9.0.1
Bumps [com.gradleup.shadow](https://github.com/GradleUp/shadow) from 8.3.9 to 9.0.1.
- [Release notes](https://github.com/GradleUp/shadow/releases)
- [Commits](https://github.com/GradleUp/shadow/compare/8.3.9...9.0.1)

---
updated-dependencies:
- dependency-name: com.gradleup.shadow
  dependency-version: 9.0.1
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-13 10:52:21 +02:00
dependabot[bot]
3cb6815eac build(deps): bump org.assertj:assertj-core from 3.27.3 to 3.27.4
Bumps [org.assertj:assertj-core](https://github.com/assertj/assertj) from 3.27.3 to 3.27.4.
- [Release notes](https://github.com/assertj/assertj/releases)
- [Commits](https://github.com/assertj/assertj/compare/assertj-build-3.27.3...assertj-build-3.27.4)

---
updated-dependencies:
- dependency-name: org.assertj:assertj-core
  dependency-version: 3.27.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-13 10:19:45 +02:00
dependabot[bot]
bde9972b26 build(deps): bump actions/checkout from 4 to 5
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-13 10:09:06 +02:00
dependabot[bot]
bc828efec9 build(deps): bump software.amazon.awssdk.crt:aws-crt
Bumps [software.amazon.awssdk.crt:aws-crt](https://github.com/awslabs/aws-crt-java) from 0.38.8 to 0.38.9.
- [Release notes](https://github.com/awslabs/aws-crt-java/releases)
- [Commits](https://github.com/awslabs/aws-crt-java/compare/v0.38.8...v0.38.9)

---
updated-dependencies:
- dependency-name: software.amazon.awssdk.crt:aws-crt
  dependency-version: 0.38.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-13 10:07:37 +02:00
dependabot[bot]
c62f503f1a build(deps): bump software.amazon.awssdk:bom from 2.32.16 to 2.32.21
Bumps software.amazon.awssdk:bom from 2.32.16 to 2.32.21.

---
updated-dependencies:
- dependency-name: software.amazon.awssdk:bom
  dependency-version: 2.32.21
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-13 10:06:21 +02:00
dependabot[bot]
15a6323122 build(deps): bump flyingSaucerVersion from 9.13.1 to 9.13.2
Bumps `flyingSaucerVersion` from 9.13.1 to 9.13.2.

Updates `org.xhtmlrenderer:flying-saucer-core` from 9.13.1 to 9.13.2
- [Release notes](https://github.com/flyingsaucerproject/flyingsaucer/releases)
- [Changelog](https://github.com/flyingsaucerproject/flyingsaucer/blob/main/CHANGELOG.md)
- [Commits](https://github.com/flyingsaucerproject/flyingsaucer/compare/v9.13.1...v9.13.2)

Updates `org.xhtmlrenderer:flying-saucer-pdf` from 9.13.1 to 9.13.2
- [Release notes](https://github.com/flyingsaucerproject/flyingsaucer/releases)
- [Changelog](https://github.com/flyingsaucerproject/flyingsaucer/blob/main/CHANGELOG.md)
- [Commits](https://github.com/flyingsaucerproject/flyingsaucer/compare/v9.13.1...v9.13.2)

---
updated-dependencies:
- dependency-name: org.xhtmlrenderer:flying-saucer-core
  dependency-version: 9.13.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
- dependency-name: org.xhtmlrenderer:flying-saucer-pdf
  dependency-version: 9.13.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-13 10:05:10 +02:00
dependabot[bot]
21cb7b497d build(deps): bump org.jooq:jooq from 3.20.5 to 3.20.6
Bumps org.jooq:jooq from 3.20.5 to 3.20.6.

---
updated-dependencies:
- dependency-name: org.jooq:jooq
  dependency-version: 3.20.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-13 10:03:54 +02:00
Loïc Mathieu
26cb6ef9ad fix(execution): concurrency limit didn't work with afterExecutions
This is because the execution is never considered fully terminated, so the concurrency limit is not handled properly.
This should also affect SLA, trigger lock, and other cleanup handling.

The root issue is that, with a worker task from afterExecution, there is no further update on the execution itself (as it is already terminated), so no execution messages are processed again by the executor.

Because of that, the worker task result message from the afterExecution block is the last message, but unfortunately, as messages from the worker task result have no flow attached, the computation of the final termination is incorrect.
The solution is to load the flow inside the executor when it is null and the execution is terminated, which should only occur inside afterExecution.

Fixes #10657
Fixes #8459
Fixes #8609
2025-08-13 09:29:46 +02:00
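A minimal sketch of the fix described above, with simplified stand-in types (these are not the real Kestra executor classes):

// When a worker task result arrives without a flow attached and the execution is
// already terminated (the afterExecution case), load the flow before computing
// the final termination.
public class AfterExecutionTermination {
    record Flow(String id) {}
    record Execution(String tenantId, String namespace, String flowId, boolean terminated) {}

    interface FlowRepository {
        Flow findById(String tenantId, String namespace, String flowId);
    }

    static Flow resolveFlow(Flow attached, Execution execution, FlowRepository repository) {
        if (attached == null && execution.terminated()) {
            // should only happen for messages emitted from an afterExecution block
            return repository.findById(execution.tenantId(), execution.namespace(), execution.flowId());
        }
        return attached;
    }
}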
Piyush Bhaskar
95c438515d fix(core): pass viewTypes to initYamlSource (#10704) 2025-08-13 12:32:17 +05:30
Florian Hussonnois
194ae826e5 chore(system): add WorkerJobQueueInterface to properly pass workerId on subscribe 2025-08-12 19:26:31 +02:00
Prayag
31dbecec77 fix(core): Enter key is now validating filter / refreshing data (#9630)
closes #9471

---------

Co-authored-by: brian.mulier <bmmulier@hotmail.fr>
2025-08-12 17:23:10 +02:00
Anna Geller
b39bcce2e8 fix(translation): close https://github.com/kestra-io/kestra/issues/9857 2025-08-12 13:00:14 +02:00
github-actions[bot]
95ac5ce8a7 chore(core): localize to languages other than english (#10697)
Extended localization support by adding translations for multiple languages using English as the base. This enhances accessibility and usability for non-English-speaking users while keeping English as the source reference.

Co-authored-by: GitHub Action <actions@github.com>
2025-08-12 12:54:34 +02:00
Piyush Bhaskar
90f913815d fix(core): fix misc store to access configs. (#10692) 2025-08-12 16:24:17 +05:30
Anna Geller
5944db5cc8 fix: translation for sample prompt (#10696) 2025-08-12 12:51:10 +02:00
Loïc Mathieu
577f813eef fix(executions): SLA monitor should take into account restarted executions 2025-08-12 11:46:58 +02:00
Loïc Mathieu
06a9f13676 fix(executions): concurrency limit exceeded when restarting an execution
Fixes #7880
2025-08-12 11:46:58 +02:00
Loïc Mathieu
1fd6e23f96 feat(flows): Flow SLA out of beta
Part-of: https://github.com/kestra-io/kestra-ee/issues/4555
2025-08-12 11:29:32 +02:00
Piyush Bhaskar
9a32780c8c fix(flow): fixes flow deletion inside actions (#10693) 2025-08-12 14:56:31 +05:30
Nicolas K.
af140baa66 Feat/add filters to repositories (#10629)
* wip(repositories): use query filter in the log repository

* feat(repositories): #10628 refactor query builder engine

* fix(repositories): #10628 add sort to findAsych query

* Update core/src/main/java/io/kestra/core/utils/ListUtils.java

Co-authored-by: Loïc Mathieu <loikeseke@gmail.com>

---------

Co-authored-by: nKwiatkowski <nkwiatkowski@kestra.io>
Co-authored-by: Loïc Mathieu <loikeseke@gmail.com>
2025-08-12 11:17:47 +02:00
Florian Hussonnois
54b0183b95 fix(system): avoid unsupported type error on ServiceType enum 2025-08-12 10:01:30 +02:00
Loïc Mathieu
64de3d5fa8 fix(executions): correctly fail the request when trying to resume an execution with the wrong inputs
Fixes #9959
2025-08-12 09:39:02 +02:00
Piyush Bhaskar
4c17aadb81 fix(ui): more visible color for default edge (#10690) 2025-08-12 12:44:20 +05:30
Piyush Bhaskar
bf424fbf53 fix(core): reduce size of code block text and padding (#10689) 2025-08-12 11:46:52 +05:30
brian.mulier
edcdb88559 fix(dashboard): avoid duplicate dashboard calls + properly refresh dashboards on refresh button + don't discard component entirely on refresh 2025-08-11 22:28:19 +02:00
brian.mulier
9a9d0b995a fix(dashboard): properly use time filters in queries
closes kestra-io/kestra-ee#4389
2025-08-11 22:28:19 +02:00
brian-mulier-p
5c5d313fb0 fix(metrics): restore autocompletion on metrics filter (#10688) 2025-08-11 21:08:56 +02:00
Nicolas K.
dfd4d87867 feat(releases): add test jar to Maven Central deployment (#10675)
Co-authored-by: nKwiatkowski <nkwiatkowski@kestra.io>
2025-08-11 15:56:51 +02:00
Piyush Bhaskar
367d773a86 fix(flows): enable save and mark the tab dirty when there are unsaved changes in no-code (#10671) 2025-08-11 18:35:56 +05:30
brian.mulier
c819f15c66 tests(core): add a test to taskrunners to ensure it's working multiple times on the same working directory
part of kestra-io/plugin-ee-kubernetes#45
2025-08-11 14:59:15 +02:00
Loïc Mathieu
673b5c994c feat(flows): add upstream dependencies in flow dependencies
Closes #10638
2025-08-11 12:43:33 +02:00
Loïc Mathieu
2acf37e0e6 fix(executions): properly fail the task if it contains an unsupported unicode sequence
This occurs in Postgres with the `\u0000` unicode sequence. Postgres refuses to store any JSONB containing this sequence, as it has no textual representation.
We now properly detect that and fail the task.

Fixes #10326
2025-08-11 11:53:39 +02:00
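As an illustration of the detection step, a hedged sketch assuming the value is serialized to a JSON string before storage (class and method names are illustrative, not the actual Kestra code):

// Postgres JSONB rejects values containing the NUL character (the \\u0000 JSON escape),
// so detect it up front and fail the task with a clear error instead of letting the
// database insert blow up.
public class UnsupportedUnicodeCheck {
    public static void ensureStorable(String json) {
        if (json != null && json.indexOf('\0') >= 0) {
            throw new IllegalArgumentException(
                "Value contains the \\u0000 sequence, which cannot be stored as Postgres JSONB");
        }
    }
}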
Ludovic DEHON
0d7fcbb936 build(core): create a docker image for each pull request (#10644)
relate to kestra-io/kestra#10643
2025-08-09 00:18:28 +02:00
Miloš Paunović
42b01d6951 chore(core): reload number of dependencies on flow save action (#10663)
Closes https://github.com/kestra-io/kestra/issues/10484.
2025-08-08 15:11:41 +02:00
Miloš Paunović
9edfb01920 chore(core): uniform dependency table namespace label (#10655) 2025-08-08 13:14:53 +02:00
Miloš Paunović
7813337f48 fix(core): ensure dependency table updates occur after dom is fully rendered (#10654)
Closes https://github.com/kestra-io/kestra/issues/10639.
2025-08-08 12:52:16 +02:00
Miloš Paunović
ea0342f82a refactor(core): remove revision property from flow nodes in dependency graph (#10650)
Related to https://github.com/kestra-io/kestra/issues/10633.
2025-08-08 12:21:01 +02:00
Piyush Bhaskar
ca8f25108e fix(core): update flow store usage. (#10649) 2025-08-08 11:34:09 +02:00
Miloš Paunović
49b6c331a6 chore(core): amend edge color scheme in execution dependency graph (#10648)
Related to https://github.com/kestra-io/kestra/issues/10639.
2025-08-08 11:29:11 +02:00
Miloš Paunović
e409fb7ac0 chore(core): lower the wheel sensitivity on zooming of dependency graph (#10647)
Relates to https://github.com/kestra-io/kestra/issues/10639.
2025-08-08 10:27:51 +02:00
Miloš Paunović
0b64c29794 fix(flows): properly import pinia store into a dependency graph composable (#10646) 2025-08-08 10:25:58 +02:00
Piyush Bhaskar
c4665460aa fix(flows): copy trigger url properly. (#10645) 2025-08-08 12:57:41 +05:30
Barthélémy Ledoux
5423b6e3a7 refactor: move flow store to pinia (#10620) 2025-08-08 09:04:33 +02:00
Vanshika Kumar
114669e1b5 chore(core): add padding around user image in left sidebar (#10553)
Co-authored-by: Vanshika Kumar <vanshika.kumar-ext@infra.market>
Co-authored-by: Miloš Paunović <paun992@hotmail.com>
2025-08-08 08:34:23 +02:00
Loïc Mathieu
d75f0ced38 fix(executions): allow caching tasks that use the 'workingDir' variable
Fixes #10253
2025-08-07 17:26:24 +02:00
brian.mulier
0a788d8429 fix(core): ensure props with defaults are not marked as required in generated doc 2025-08-07 15:07:00 +02:00
brian.mulier
8c25d1bbd7 fix(core): wrong @NotNull import leading to key not being marked as required
closes #9287
2025-08-07 15:07:00 +02:00
YannC
4e2e8f294f fix: avoid calling nextExecutionDate if value is null when resetting trigger (#10547) 2025-08-07 14:51:27 +02:00
Barthélémy Ledoux
2c34804ce2 fix(core): update necessary node viewer in gradle build (#10624) 2025-08-07 13:38:29 +02:00
Piyush Bhaskar
bab4eef790 refactor(namespace): migrate namespace module to pinia (#10571)
* refactor(namespace): migrate namespace module to pinia

* refactor(namespaces): override the store and fix the test

* fix: test in a good way

* refactor: rename action as ee

* refactor: state and action is different

* refactor: namespaces store in composition API and a composable to use the common state and actions

* fix: export validate
2025-08-07 16:20:51 +05:30
Miloš Paunović
94aa628ac1 feat(core): implement different graph type for dependencies view (#10240)
Closes https://github.com/kestra-io/kestra/issues/5350.
Closes https://github.com/kestra-io/kestra/issues/10446.
Closes https://github.com/kestra-io/kestra/issues/10563.
Closes https://github.com/kestra-io/kestra-ee/issues/3431.
Closes https://github.com/kestra-io/kestra-ee/issues/4509.

Relates to https://github.com/kestra-io/kestra/issues/10484.
Relates to https://github.com/kestra-io/kestra-ee/issues/3550.
2025-08-07 12:12:12 +02:00
Loïc Mathieu
da180fbc00 chore(system): add a note on MapUtils.nestedToFlattenMap() method 2025-08-07 12:01:31 +02:00
Anna Geller
c7bd592bc7 fix(ai-agent): add prompt suggestion 2025-08-07 10:42:35 +02:00
Florian Hussonnois
693d174960 chore(system): provide a more useful Either utility class
Rewrite the Either class and add tests to make it
a bit more usable
2025-08-07 10:31:28 +02:00
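For context, an Either is typically a small sum type holding either a left (usually an error) or a right (a success value); the sketch below is a generic illustration, not the rewritten Kestra class.

// Minimal generic Either: exactly one of left or right is present.
public abstract sealed class Either<L, R> {
    public static <L, R> Either<L, R> left(L value) { return new Left<>(value); }
    public static <L, R> Either<L, R> right(R value) { return new Right<>(value); }

    public abstract boolean isLeft();

    public static final class Left<L, R> extends Either<L, R> {
        private final L value;
        private Left(L value) { this.value = value; }
        public L getLeft() { return value; }
        @Override public boolean isLeft() { return true; }
    }

    public static final class Right<L, R> extends Either<L, R> {
        private final R value;
        private Right(R value) { this.value = value; }
        public R get() { return value; }
        @Override public boolean isLeft() { return false; }
    }
}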
Florian Hussonnois
8ee492b9c5 fix(system): fix consumer commit on JDBC queue
Ensure that JDBC queue records are committed by the consumer
after processing. This fixes a rare issue where executions could be blocked after a runner crash.
2025-08-07 10:31:17 +02:00
Loïc Mathieu
d6b8ba34ea chore(system): provide a MapUtils.nestedToFlattenMap() method
It will be used to nest a previously flattened map when needed.
2025-08-07 10:00:13 +02:00
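As a generic illustration of the nested/flat conversion this utility deals with, a nested map can be flattened into dotted keys as sketched below (hypothetical code, not the actual MapUtils implementation):

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: convert a nested map into a flat map with dotted keys,
// e.g. {a={b=1}} becomes {a.b=1}.
public class FlattenExample {
    public static Map<String, Object> flatten(Map<String, Object> nested) {
        Map<String, Object> flat = new LinkedHashMap<>();
        flatten("", nested, flat);
        return flat;
    }

    @SuppressWarnings("unchecked")
    private static void flatten(String prefix, Map<String, Object> nested, Map<String, Object> flat) {
        nested.forEach((key, value) -> {
            String path = prefix.isEmpty() ? key : prefix + "." + key;
            if (value instanceof Map<?, ?> child) {
                flatten(path, (Map<String, Object>) child, flat);
            } else {
                flat.put(path, value);
            }
        });
    }
}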
dependabot[bot]
08cc853e00 build(deps): bump software.amazon.awssdk.crt:aws-crt
Bumps [software.amazon.awssdk.crt:aws-crt](https://github.com/awslabs/aws-crt-java) from 0.38.7 to 0.38.8.
- [Release notes](https://github.com/awslabs/aws-crt-java/releases)
- [Commits](https://github.com/awslabs/aws-crt-java/compare/v0.38.7...v0.38.8)

---
updated-dependencies:
- dependency-name: software.amazon.awssdk.crt:aws-crt
  dependency-version: 0.38.8
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-07 09:19:01 +02:00
dependabot[bot]
4f68715483 build(deps): bump org.apache.commons:commons-compress
Bumps [org.apache.commons:commons-compress](https://github.com/apache/commons-compress) from 1.27.1 to 1.28.0.
- [Changelog](https://github.com/apache/commons-compress/blob/master/RELEASE-NOTES.txt)
- [Commits](https://github.com/apache/commons-compress/compare/rel/commons-compress-1.27.1...rel/commons-compress-1.28.0)

---
updated-dependencies:
- dependency-name: org.apache.commons:commons-compress
  dependency-version: 1.28.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-07 09:18:02 +02:00
Karthik D
edde1b6730 fix(core): fixes overflow of outputs content
* fix

* fix

* fix: minor tweaks

* fix: scope the style

---------

Co-authored-by: Piyush-r-bhaskar <impiyush0012@gmail.com>
2025-08-07 12:37:44 +05:30
Biplab Bera
399446f52e feat: disabled the preview button in output tabs for zip files (#10535)
Co-authored-by: Piyush Bhaskar <102078527+Piyush-r-bhaskar@users.noreply.github.com>
2025-08-07 11:56:58 +05:30
Florian Hussonnois
c717890fbc fix(build): fix and enhance release-plugins.sh
Skip gradle release when tag already exists
Check for staging files before committing
2025-08-06 17:17:50 +02:00
Barthélémy Ledoux
5328b0c574 fix(flows): allow date inputs in playground (#10611) 2025-08-06 15:36:29 +02:00
Barthélémy Ledoux
de14cae1f0 fix(flows): playground only clear highlighted lines on leave task (#10612) 2025-08-06 15:36:17 +02:00
Miloš Paunović
d8a3e703e7 feat(core): add animated edges to topology graph (#10616)
Closes kestra-io/kestra#10614.
2025-08-06 14:49:31 +02:00
dependabot[bot]
90659bc320 build(deps): bump com.azure:azure-sdk-bom from 1.2.36 to 1.2.37
Bumps [com.azure:azure-sdk-bom](https://github.com/azure/azure-sdk-for-java) from 1.2.36 to 1.2.37.
- [Release notes](https://github.com/azure/azure-sdk-for-java/releases)
- [Commits](https://github.com/azure/azure-sdk-for-java/compare/azure-sdk-bom_1.2.36...azure-sdk-bom_1.2.37)

---
updated-dependencies:
- dependency-name: com.azure:azure-sdk-bom
  dependency-version: 1.2.37
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-06 12:55:33 +02:00
dependabot[bot]
37d1d8856e build(deps): bump software.amazon.awssdk:bom from 2.32.11 to 2.32.16
Bumps software.amazon.awssdk:bom from 2.32.11 to 2.32.16.

---
updated-dependencies:
- dependency-name: software.amazon.awssdk:bom
  dependency-version: 2.32.16
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-06 11:56:59 +02:00
Florian Hussonnois
93a4eb5cbc build: add plugin-datagen to plugin list 2025-08-06 11:11:46 +02:00
Miloš Paunović
de160c8a2d chore(deps): regular dependency update (#10607)
Performing a weekly round of dependency updates in the NPM ecosystem to keep everything up to date.
2025-08-06 10:20:32 +02:00
dependabot[bot]
28458b59eb build(deps): bump com.mysql:mysql-connector-j from 9.3.0 to 9.4.0
Bumps [com.mysql:mysql-connector-j](https://github.com/mysql/mysql-connector-j) from 9.3.0 to 9.4.0.
- [Changelog](https://github.com/mysql/mysql-connector-j/blob/release/9.x/CHANGES)
- [Commits](https://github.com/mysql/mysql-connector-j/compare/9.3.0...9.4.0)

---
updated-dependencies:
- dependency-name: com.mysql:mysql-connector-j
  dependency-version: 9.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-06 09:50:39 +02:00
dependabot[bot]
2a256d9505 build(deps): bump org.eclipse.angus:jakarta.mail from 2.0.3 to 2.0.4
Bumps org.eclipse.angus:jakarta.mail from 2.0.3 to 2.0.4.

---
updated-dependencies:
- dependency-name: org.eclipse.angus:jakarta.mail
  dependency-version: 2.0.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-06 09:50:20 +02:00
dependabot[bot]
9008b21007 build(deps): bump com.google.cloud:libraries-bom from 26.64.0 to 26.65.0
Bumps [com.google.cloud:libraries-bom](https://github.com/googleapis/java-cloud-bom) from 26.64.0 to 26.65.0.
- [Release notes](https://github.com/googleapis/java-cloud-bom/releases)
- [Changelog](https://github.com/googleapis/java-cloud-bom/blob/main/release-please-config.json)
- [Commits](https://github.com/googleapis/java-cloud-bom/compare/v26.64.0...v26.65.0)

---
updated-dependencies:
- dependency-name: com.google.cloud:libraries-bom
  dependency-version: 26.65.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-06 09:49:35 +02:00
dependabot[bot]
8c13bf6a71 build(deps): bump com.gradleup.shadow from 8.3.8 to 8.3.9
Bumps [com.gradleup.shadow](https://github.com/GradleUp/shadow) from 8.3.8 to 8.3.9.
- [Release notes](https://github.com/GradleUp/shadow/releases)
- [Commits](https://github.com/GradleUp/shadow/compare/8.3.8...8.3.9)

---
updated-dependencies:
- dependency-name: com.gradleup.shadow
  dependency-version: 8.3.9
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-06 09:49:06 +02:00
dependabot[bot]
43888cc3dd build(deps): bump actions/download-artifact from 4 to 5
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 4 to 5.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-08-06 09:48:47 +02:00
Piyush Bhaskar
c94093d9f6 fix(flows): ensure plugin documentation change on flow switch (#10546)
Co-authored-by: Barthélémy Ledoux <bledoux@kestra.io>
2025-08-05 14:29:36 +05:30
Barthélémy Ledoux
8779dec28a fix(flows): add conditional rendering for restart button based on execution (#10570) 2025-08-05 10:22:13 +02:00
Nicolas K.
41614c3a6e feat(stores): #4353 list all KV for namespace and parent namespaces (#10470)
* feat(stores): #4353 list all KV for namespace and parent namespaces

* feat(stores): #4353 list all KV for namespace and parent namespaces

* feat(stores): #4353 list all KV for namespace and parent namespaces

---------

Co-authored-by: nKwiatkowski <nkwiatkowski@kestra.io>
2025-08-05 09:55:41 +02:00
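The parent-namespace lookup can be pictured with a small, hypothetical helper (illustrative only, not the actual store code): a namespace such as company.team.project expands to itself plus each of its ancestors.

import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: expand "company.team.project" into
// ["company.team.project", "company.team", "company"].
public class NamespaceHierarchy {
    public static List<String> withParents(String namespace) {
        List<String> result = new ArrayList<>();
        String current = namespace;
        while (!current.isEmpty()) {
            result.add(current);
            int lastDot = current.lastIndexOf('.');
            current = lastDot > 0 ? current.substring(0, lastDot) : "";
        }
        return result;
    }
}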
Barthélémy Ledoux
6b4fdd0688 fix: restore InputForm (#10568) 2025-08-05 09:44:39 +02:00
Loïc Mathieu
0319f3d267 feat(system): set the default number of worker threads to 8x available CPU cores
This is a better default for mixed workloads and provides better tail latency.
This is also what we advise our customers.
2025-08-05 09:19:14 +02:00
brian.mulier
0b37fe2cb8 fix(namespaces): autocomplete in kv & secrets
related to kestra-io/kestra-ee#4559
2025-08-04 20:29:56 +02:00
brian.mulier
e623dd7729 fix(executions): avoid SSE error in follow execution dependencies
closes #10560
2025-08-04 20:22:32 +02:00
Barthélémy Ledoux
db4f7cb4ff fix(flows): load flow for execution needs to be stored most of the time (#10566) 2025-08-04 18:54:01 +02:00
Abhilash T
b14b16db0e fix: Updated InputsForm.vue to clear Radio Button Selection (#9654)
Co-authored-by: Miloš Paunović <paun992@hotmail.com>
Co-authored-by: Bart Ledoux <bledoux@kestra.io>
2025-08-04 16:03:25 +02:00
brian.mulier
77f6cec0e4 fix(executions): restore execution redirect & subflow logs view from parent
closes #10528
closes #10551
2025-08-04 15:46:48 +02:00
Piyush Bhaskar
1748b18d66 chore(core): remove variable and directly assign. (#10554) 2025-08-04 18:45:19 +05:30
Piyush Bhaskar
32f96348c1 fix(core): proper state detection from parsed data (#10527) 2025-08-04 18:41:05 +05:30
Barthélémy Ledoux
07db0a8c80 fix(flows): no-code - avoid warning message when changing type (#10498) 2025-08-04 14:57:28 +02:00
Barthélémy Ledoux
2035fd42c3 refactor: use composition api and ts on revision component (#10529) 2025-08-04 14:56:36 +02:00
Barthélémy Ledoux
2856bf07e8 refactor: move editor from vuex to pinia (#10533)
Co-authored-by: Piyush-r-bhaskar <impiyush0012@gmail.com>
2025-08-04 14:55:55 +02:00
Barthélémy Ledoux
f5327cec33 fix: remove debugging value from playground (#10541) 2025-08-04 14:54:45 +02:00
Anna Geller
42955936b2 fix: demo no longer exists 2025-08-04 14:38:13 +02:00
Miloš Paunović
771b98e023 chore(namespaces): add the needed prop for loading all namespaces inside a selector (#10544) 2025-08-04 12:44:38 +02:00
Miloš Paunović
b80e8487e3 fix(namespaces): amend problems with namespace secrets and kv pairs (#10543)
Closes https://github.com/kestra-io/kestra-ee/issues/4584.
2025-08-04 12:19:52 +02:00
YannC.
f35a0b6d60 fix: add missing webhook releases secrets for github releases 2025-08-01 23:21:27 +02:00
brian.mulier
0c9ed17f1c fix(core): remove icon for inputs in no-code
closes #10520
2025-08-01 16:32:08 +02:00
brian.mulier
7ca20371f8 fix(executions): avoid race condition leading to never-ending follow with non-terminal state 2025-08-01 13:12:14 +02:00
brian.mulier
8ff3454cbd fix(core): ensure instances can read all messages when no consumer group / queue type 2025-08-01 13:12:14 +02:00
Piyush Bhaskar
09593d9fd2 fix(namespaces): fixes loading of additional ns (#10518) 2025-08-01 16:28:01 +05:30
Loïc Mathieu
d3cccf36f0 feat(flow): pull up description to the FlowInterface
This avoids the need to parse the flow (e.g. by the AI) to get the description.
2025-08-01 12:43:49 +02:00
Loïc Mathieu
eeb91cd9ed fix(tests): RunContextLoggerTest.secrets(): wrong number of logs in awaitLogs() 2025-08-01 12:41:41 +02:00
Loïc Mathieu
2679b0f067 feat(flows): warn on runnable-only properties on non-runnable tasks
Closes #9967
Closes #10500
2025-08-01 12:41:08 +02:00
Piyush Bhaskar
54281864c8 fix(executions): do not rely on monaco to get value (#10515) 2025-08-01 13:23:43 +05:30
Loïc Mathieu
e4f9b11d0c fix(ci): workflow build artifact doesn't need the plugin version 2025-08-01 09:41:48 +02:00
Barthélémy Ledoux
12cef0593c fix(flows): playground need to use ui-libs (#10506) 2025-08-01 09:06:11 +02:00
Piyush Bhaskar
c6cf8f307f fix(flows): route to flow page (#10514) 2025-08-01 12:10:56 +05:30
Piyush Bhaskar
3b4eb55f84 fix(executions): properly handle methods and computed for tabs (#10513) 2025-08-01 12:10:27 +05:30
YannC
d32949985d fix: handle empty flows list in lastExecutions correctly (#10493) 2025-08-01 07:21:00 +02:00
YannC
c051ca2e66 fix(ui): load correctly filters + refresh dashboard on filter change (#10504) 2025-08-01 07:15:46 +02:00
Piyush Bhaskar
93a456963b fix(editor): adjust padding for editor (#10497)
* fix(editor): adjust padding for editor

* fix: make padding 16px
2025-07-31 19:10:46 +05:30
YannC.
9a45f17680 fix(ci): do not run github release on tag 2025-07-31 14:37:51 +02:00
github-actions[bot]
5fb6806d74 chore(core): localize to languages other than english (#10494)
Extended localization support by adding translations for multiple languages using English as the base. This enhances accessibility and usability for non-English-speaking users while keeping English as the source reference.

Co-authored-by: GitHub Action <actions@github.com>
2025-07-31 17:44:10 +05:30
Barthélémy Ledoux
f3cff72edd fix(flows): forget all old taskRunId when a new execution (#10487) 2025-07-31 13:41:57 +02:00
Barthélémy Ledoux
0abc660e7d fix(flows): wait longer for widgets to be rendered (#10485) 2025-07-31 13:41:46 +02:00
Barthélémy Ledoux
f09ca3d92e fix(flows): load flows documentation when coming back to no-code root (#10374) 2025-07-31 13:41:36 +02:00
YannC
9fd778fca1 feat(ui): added http method autocompletion (#10492) 2025-07-31 13:28:59 +02:00
Loïc Mathieu
667af25e1b fix(executions): Don't create outputs from the Subflow task when we didn't wait
If we didn't wait for the subflow execution, we cannot access its outputs.
2025-07-31 13:06:58 +02:00
github-actions[bot]
1b1aed5ff1 chore(core): localize to languages other than english (#10489)
Extended localization support by adding translations for multiple languages using English as the base. This enhances accessibility and usability for non-English-speaking users while keeping English as the source reference.

Co-authored-by: GitHub Action <actions@github.com>
2025-07-31 12:14:37 +02:00
Barthélémy Ledoux
da1bb58199 fix(flows): add the load errors to the flow errors (#10483) 2025-07-31 11:53:43 +02:00
Loïc Mathieu
d3e661f9f8 feat(system): improve performance of computeSchedulable
- Store flowIds in a list to avoid computing them multiple times
- Store triggers by ID in a map to avoid iterating the list of triggers for each flow
2025-07-31 11:35:01 +02:00
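The second point amounts to building a lookup index once instead of scanning the trigger list per flow; a hedged sketch with simplified types (not the real scheduler classes):

import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Simplified illustration: index triggers by id once, then each flow does an O(1)
// map lookup instead of iterating the whole trigger list.
public class SchedulableIndex {
    public record Trigger(String id) {}

    public static Map<String, Trigger> byId(List<Trigger> triggers) {
        return triggers.stream()
            .collect(Collectors.toMap(Trigger::id, Function.identity(), (first, second) -> first));
    }
}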
yuri1969
2126c8815e feat(core): validate URL configuration
Used the `ServerCommandValidator` style.

BREAKING CHANGE: app won't start due to an invalid `kestra.url`
2025-07-31 11:24:21 +02:00
yuri1969
6cfc5b8799 fix(build): reduce Gradle warnings 2025-07-31 11:21:01 +02:00
Barthélémy Ledoux
16d44034f0 fix(flows): hide executionkind meta in the logs (#10482) 2025-07-31 10:50:34 +02:00
Barthélémy Ledoux
f76e62a4af fix(executions): do not rely on monaco to get value (#10467) 2025-07-31 09:28:33 +02:00
Piyush Bhaskar
f6645da94c fix(core): remove top spacing from no execution page and removing the redundant code (#10445)
Co-authored-by: Miloš Paunović <paun992@hotmail.com>
2025-07-31 12:03:58 +05:30
github-actions[bot]
93b2bbf0d0 chore(core): localize to languages other than english (#10471)
Extended localization support by adding translations for multiple languages using English as the base. This enhances accessibility and usability for non-English-speaking users while keeping English as the source reference.

Co-authored-by: GitHub Action <actions@github.com>
2025-07-31 08:23:08 +02:00
Piyush Bhaskar
9d46e2aece fix(executions): make columns that are not links normal text (#10460)
* fix(executions): make it normal text

* fix(executions): use monospace font only
2025-07-31 10:33:33 +05:30
brian.mulier
133315a2a5 chore(deps): hardcode vue override version 2025-07-30 19:25:50 +02:00
brian.mulier
b96b9bb414 fix(core): avoid follow execution from being discarded too early
closes #10472
closes #7623
2025-07-30 19:25:50 +02:00
Barthélémy Ledoux
9865d8a7dc fix(flows): playground - implement new designs (#10459)
Co-authored-by: brian.mulier <bmmulier@hotmail.fr>
2025-07-30 17:54:46 +02:00
brian-mulier-p
29f22c2f81 fix(core): redesign playground run task button (#10423)
closes #10389
2025-07-30 15:23:33 +02:00
dependabot[bot]
3e69469381 build(deps): bump net.thisptr:jackson-jq from 1.3.0 to 1.4.0
Bumps [net.thisptr:jackson-jq](https://github.com/eiiches/jackson-jq) from 1.3.0 to 1.4.0.
- [Release notes](https://github.com/eiiches/jackson-jq/releases)
- [Commits](https://github.com/eiiches/jackson-jq/compare/1.3.0...1.4.0)

---
updated-dependencies:
- dependency-name: net.thisptr:jackson-jq
  dependency-version: 1.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-30 15:08:39 +02:00
dependabot[bot]
38c24ccf7f build(deps): bump software.amazon.awssdk:bom from 2.32.6 to 2.32.11
Bumps software.amazon.awssdk:bom from 2.32.6 to 2.32.11.

---
updated-dependencies:
- dependency-name: software.amazon.awssdk:bom
  dependency-version: 2.32.11
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-30 15:07:49 +02:00
Loïc Mathieu
12cf41a309 fix(ci): don't publish docker in build-artifact 2025-07-30 14:42:16 +02:00
Malaydewangan09
7b8ea0d885 feat(plugins): add script plugins 2025-07-30 17:27:48 +05:30
Barthélémy Ledoux
cf88bbcb12 fix(flows): playground align restart button (#10415) 2025-07-30 11:57:24 +02:00
Loïc Mathieu
6abe7f96e7 fix(ci): add missing build artifact job 2025-07-30 11:47:10 +02:00
Loïc Mathieu
e73ac78d8b build(ci): allow downloading the exe from the workflow and not the release
This would allow running the workflow even if the release step fails
2025-07-30 11:23:43 +02:00
François Delbrayelle
b0687eb702 fix(): fix icons 2025-07-30 10:28:10 +02:00
weibo1
85f9070f56 feat: Trigger Initialization Method Performance Optimization 2025-07-30 09:23:48 +02:00
YannC
0a42ab40ec fix(dashboard): pageSize & pageNumber are now correctly passed when fetching a chart (#10413) 2025-07-30 08:45:20 +02:00
Piyush Bhaskar
856d2d1d51 refactor(flows): remove execution chart (#10425) 2025-07-30 11:54:35 +05:30
YannC.
a7d6dbc8a3 feat(ci): allow to run github release ci on dispatch 2025-07-29 15:04:50 +02:00
YannC.
cf82109da6 fix(ci): correctly pass GH token to release workflow 2025-07-29 15:01:36 +02:00
Barthélémy Ledoux
d4168ba424 fix(flows): playground clear current execution when clearExecutions() (#10414) 2025-07-29 14:43:11 +02:00
Loïc Mathieu
46a294f25a chore(version): upgrade to v1.0.0-SNAPSHOT 2025-07-29 14:23:19 +02:00
Loïc Mathieu
a229036d8d chore(version): update to version 'v0.24.0-rc0-SNAPSHOT'. 2025-07-29 14:21:49 +02:00
323 changed files with 8020 additions and 4701 deletions

View File

@@ -2,7 +2,7 @@ name: Auto-Translate UI keys and create PR
on:
schedule:
- cron: "0 9-21 * * *" # Every hour from 9 AM to 9 PM
- cron: "0 9-21/3 * * *" # Every 3 hours from 9 AM to 9 PM
workflow_dispatch:
inputs:
retranslate_modified_keys:
@@ -20,7 +20,7 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
name: Checkout
with:
fetch-depth: 0

View File

@@ -27,7 +27,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
# We must fetch at least the immediate parents so that if this is
# a pull request then we can checkout the head.

View File

@@ -20,6 +20,15 @@ on:
required: false
type: string
default: "LATEST"
force-download-artifact:
description: 'Force download artifact'
required: false
type: string
default: "true"
options:
- "true"
- "false"
env:
PLUGIN_VERSION: ${{ github.event.inputs.plugin-version != null && github.event.inputs.plugin-version || 'LATEST' }}
jobs:
@@ -30,7 +39,7 @@ jobs:
plugins: ${{ steps.plugins.outputs.plugins }}
steps:
# Checkout
- uses: actions/checkout@v4
- uses: actions/checkout@v5
# Get Plugins List
- name: Get Plugins List
@@ -38,9 +47,18 @@ jobs:
id: plugins
with:
plugin-version: ${{ env.PLUGIN_VERSION }}
# ********************************************************************************************************************
# Build
# ********************************************************************************************************************
build-artifacts:
name: Build Artifacts
if: ${{ github.event.inputs.force-download-artifact == 'true' }}
uses: ./.github/workflows/workflow-build-artifacts.yml
docker:
name: Publish Docker
needs: [ plugins ]
needs: [ plugins, build-artifacts ]
runs-on: ubuntu-latest
strategy:
matrix:
@@ -54,7 +72,7 @@ jobs:
packages: python3 python-is-python3 python3-pip curl jattach
python-libs: kestra
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
# Vars
- name: Set image name
@@ -73,14 +91,27 @@ jobs:
else
echo "plugins=${{ matrix.image.plugins }}" >> $GITHUB_OUTPUT
fi
# Download release
- name: Download release
# [workflow_dispatch]
# Download executable from GitHub Release
- name: Artifacts - Download release (workflow_dispatch)
id: download-github-release
if: github.event_name == 'workflow_dispatch' && github.event.inputs.force-download-artifact == 'false'
uses: robinraju/release-downloader@v1.12
with:
tag: ${{steps.vars.outputs.tag}}
fileName: 'kestra-*'
out-file-path: build/executable
# [workflow_call]
# Download executable from artifact
- name: Artifacts - Download executable
if: github.event_name != 'workflow_dispatch' || steps.download-github-release.outcome == 'skipped'
uses: actions/download-artifact@v5
with:
name: exe
path: build/executable
- name: Copy exe to image
run: |
cp build/executable/* docker/app/kestra && chmod +x docker/app/kestra

View File

@@ -19,7 +19,7 @@ on:
default: "no input"
jobs:
check:
timeout-minutes: 10
timeout-minutes: 15
runs-on: ubuntu-latest
env:
GOOGLE_SERVICE_ACCOUNT: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}
@@ -32,7 +32,7 @@ jobs:
password: ${{ github.token }}
- name: Checkout kestra
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
path: kestra

View File

@@ -21,12 +21,12 @@ jobs:
runs-on: ubuntu-latest
steps:
# Checkout
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
# Checkout GitHub Actions
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
repository: kestra-io/actions
path: actions

View File

@@ -33,13 +33,13 @@ jobs:
exit 1;
fi
# Checkout
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
path: kestra
# Checkout GitHub Actions
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
repository: kestra-io/actions
path: actions

View File

@@ -43,7 +43,8 @@ jobs:
SONATYPE_GPG_KEYID: ${{ secrets.SONATYPE_GPG_KEYID }}
SONATYPE_GPG_PASSWORD: ${{ secrets.SONATYPE_GPG_PASSWORD }}
SONATYPE_GPG_FILE: ${{ secrets.SONATYPE_GPG_FILE }}
GH_PERSONAL_TOKEN: ${{ secrets.GH_PERSONAL_TOKEN }}
SLACK_RELEASES_WEBHOOK_URL: ${{ secrets.SLACK_RELEASES_WEBHOOK_URL }}
end:
runs-on: ubuntu-latest
needs:

View File

@@ -17,7 +17,7 @@ jobs:
runs-on: ubuntu-latest
steps:
# Checkout
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0

View File

@@ -34,7 +34,7 @@ jobs:
fi
# Checkout
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0

View File

@@ -17,12 +17,12 @@ jobs:
runs-on: ubuntu-latest
steps:
# Checkout
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
# Checkout GitHub Actions
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
repository: kestra-io/actions
path: actions
@@ -66,12 +66,12 @@ jobs:
actions: read
steps:
# Checkout
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
# Checkout GitHub Actions
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
repository: kestra-io/actions
path: actions
@@ -111,12 +111,12 @@ jobs:
actions: read
steps:
# Checkout
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
fetch-depth: 0
# Checkout GitHub Actions
- uses: actions/checkout@v4
- uses: actions/checkout@v5
with:
repository: kestra-io/actions
path: actions

View File

@@ -29,7 +29,7 @@ jobs:
GOOGLE_SERVICE_ACCOUNT: ${{ secrets.GOOGLE_SERVICE_ACCOUNT }}
SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
name: Checkout - Current ref
with:
fetch-depth: 0

View File

@@ -1,23 +1,7 @@
name: Build Artifacts
on:
workflow_call:
inputs:
plugin-version:
description: "Kestra version"
default: 'LATEST'
required: true
type: string
outputs:
docker-tag:
value: ${{ jobs.build.outputs.docker-tag }}
description: "The Docker image Tag for Kestra"
docker-artifact-name:
value: ${{ jobs.build.outputs.docker-artifact-name }}
description: "The GitHub artifact containing the Kestra docker image name."
plugins:
value: ${{ jobs.build.outputs.plugins }}
description: "The Kestra plugins list used for the build."
workflow_call: {}
jobs:
build:
@@ -31,7 +15,7 @@ jobs:
PLUGIN_VERSION: ${{ github.event.inputs.plugin-version != null && github.event.inputs.plugin-version || 'LATEST' }}
steps:
- name: Checkout - Current ref
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
@@ -82,55 +66,6 @@ jobs:
run: |
cp build/executable/* docker/app/kestra && chmod +x docker/app/kestra
# Docker Tag
- name: Setup - Docker vars
id: vars
shell: bash
run: |
TAG=${GITHUB_REF#refs/*/}
if [[ $TAG = "master" ]]
then
TAG="latest";
elif [[ $TAG = "develop" ]]
then
TAG="develop";
elif [[ $TAG = v* ]]
then
TAG="${TAG}";
else
TAG="build-${{ github.run_id }}";
fi
echo "tag=${TAG}" >> $GITHUB_OUTPUT
echo "artifact=docker-kestra-${TAG}" >> $GITHUB_OUTPUT
# Docker setup
- name: Docker - Setup QEMU
uses: docker/setup-qemu-action@v3
- name: Docker - Fix Qemu
shell: bash
run: |
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes -c yes
- name: Docker - Setup Buildx
uses: docker/setup-buildx-action@v3
# Docker Build
- name: Docker - Build & export image
uses: docker/build-push-action@v6
if: "!startsWith(github.ref, 'refs/tags/v')"
with:
context: .
push: false
file: Dockerfile
tags: |
kestra/kestra:${{ steps.vars.outputs.tag }}
build-args: |
KESTRA_PLUGINS=${{ steps.plugins.outputs.plugins }}
APT_PACKAGES=${{ env.DOCKER_APT_PACKAGES }}
PYTHON_LIBRARIES=${{ env.DOCKER_PYTHON_LIBRARIES }}
outputs: type=docker,dest=/tmp/${{ steps.vars.outputs.artifact }}.tar
# Upload artifacts
- name: Artifacts - Upload JAR
uses: actions/upload-artifact@v4
@@ -143,10 +78,3 @@ jobs:
with:
name: exe
path: build/executable/
- name: Artifacts - Upload Docker
uses: actions/upload-artifact@v4
if: "!startsWith(github.ref, 'refs/tags/v')"
with:
name: ${{ steps.vars.outputs.artifact }}
path: /tmp/${{ steps.vars.outputs.artifact }}.tar

View File

@@ -20,7 +20,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4
uses: actions/checkout@v5
- name: Cache Node Modules
id: cache-node-modules

View File

@@ -1,14 +1,17 @@
name: Github - Release
on:
workflow_dispatch:
workflow_call:
secrets:
GH_PERSONAL_TOKEN:
description: "The Github personal token."
required: true
push:
tags:
- '*'
SLACK_RELEASES_WEBHOOK_URL:
description: "The Slack webhook URL."
required: true
jobs:
publish:
@@ -17,14 +20,14 @@ jobs:
steps:
# Check out
- name: Checkout - Repository
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
fetch-depth: 0
submodules: true
# Checkout GitHub Actions
- name: Checkout - Actions
uses: actions/checkout@v4
uses: actions/checkout@v5
with:
repository: kestra-io/actions
sparse-checkout-cone-mode: true
@@ -35,7 +38,7 @@ jobs:
# Download Exec
# Must be done after checkout actions
- name: Artifacts - Download executable
uses: actions/download-artifact@v4
uses: actions/download-artifact@v5
if: startsWith(github.ref, 'refs/tags/v')
with:
name: exe

View File

@@ -41,8 +41,6 @@ jobs:
name: Build Artifacts
if: ${{ github.event.inputs.force-download-artifact == 'true' }}
uses: ./.github/workflows/workflow-build-artifacts.yml
with:
plugin-version: ${{ github.event.inputs.plugin-version != null && github.event.inputs.plugin-version || 'LATEST' }}
# ********************************************************************************************************************
# Docker
# ********************************************************************************************************************
@@ -70,7 +68,7 @@ jobs:
python-libraries: kestra
steps:
- name: Checkout - Current ref
uses: actions/checkout@v4
uses: actions/checkout@v5
# Docker setup
- name: Docker - Setup QEMU
@@ -122,7 +120,7 @@ jobs:
# Build Docker Image
- name: Artifacts - Download executable
uses: actions/download-artifact@v4
uses: actions/download-artifact@v5
with:
name: exe
path: build/executable

View File

@@ -25,7 +25,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout - Current ref
uses: actions/checkout@v4
uses: actions/checkout@v5
# Setup build
- name: Setup - Build

View File

@@ -0,0 +1,16 @@
name: Pull Request - Delete Docker
on:
pull_request:
types: [closed]
jobs:
publish:
name: Pull Request - Delete Docker
if: github.repository == github.event.pull_request.head.repo.full_name # prevent running on forks
runs-on: ubuntu-latest
steps:
- uses: dataaxiom/ghcr-cleanup-action@v1
with:
package: kestra-pr
delete-tags: ${{ github.event.pull_request.number }}

View File

@@ -0,0 +1,78 @@
name: Pull Request - Publish Docker
on:
pull_request:
branches:
- develop
jobs:
build-artifacts:
name: Build Artifacts
if: github.repository == github.event.pull_request.head.repo.full_name # prevent running on forks
uses: ./.github/workflows/workflow-build-artifacts.yml
publish:
name: Publish Docker
if: github.repository == github.event.pull_request.head.repo.full_name # prevent running on forks
runs-on: ubuntu-latest
needs: build-artifacts
env:
GITHUB_IMAGE_PATH: "ghcr.io/kestra-io/kestra-pr"
steps:
- name: Checkout - Current ref
uses: actions/checkout@v5
with:
fetch-depth: 0
# Docker setup
- name: Docker - Setup QEMU
uses: docker/setup-qemu-action@v3
- name: Docker - Setup Docker Buildx
uses: docker/setup-buildx-action@v3
# Docker Login
- name: Login to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Build Docker Image
- name: Artifacts - Download executable
uses: actions/download-artifact@v5
with:
name: exe
path: build/executable
- name: Docker - Copy exe to image
shell: bash
run: |
cp build/executable/* docker/app/kestra && chmod +x docker/app/kestra
- name: Docker - Build image
uses: docker/build-push-action@v6
with:
context: .
file: ./Dockerfile.pr
push: true
tags: ${{ env.GITHUB_IMAGE_PATH }}:${{ github.event.pull_request.number }}
platforms: linux/amd64,linux/arm64
# Add comment on pull request
- name: Add comment to PR
uses: actions/github-script@v7
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
script: |
await github.rest.issues.createComment({
issue_number: context.issue.number,
owner: context.repo.owner,
repo: context.repo.repo,
body: `**🐋 Docker image**: \`${{ env.GITHUB_IMAGE_PATH }}:${{ github.event.pull_request.number }}\`\n` +
`\n` +
`\`\`\`bash\n` +
`docker run --pull=always --rm -it -p 8080:8080 --user=root -v /var/run/docker.sock:/var/run/docker.sock -v /tmp:/tmp ${{ env.GITHUB_IMAGE_PATH }}:${{ github.event.pull_request.number }} server local\n` +
`\`\`\``
})

View File

@@ -42,12 +42,16 @@ on:
SONATYPE_GPG_FILE:
description: "The Sonatype GPG file."
required: true
GH_PERSONAL_TOKEN:
description: "GH personnal Token."
required: true
SLACK_RELEASES_WEBHOOK_URL:
description: "Slack webhook for releases channel."
required: true
jobs:
build-artifacts:
name: Build - Artifacts
uses: ./.github/workflows/workflow-build-artifacts.yml
with:
plugin-version: ${{ github.event.inputs.plugin-version != null && github.event.inputs.plugin-version || 'LATEST' }}
Docker:
name: Publish Docker
@@ -77,4 +81,5 @@ jobs:
if: startsWith(github.ref, 'refs/tags/v')
uses: ./.github/workflows/workflow-github-release.yml
secrets:
GH_PERSONAL_TOKEN: ${{ secrets.GH_PERSONAL_TOKEN }}
GH_PERSONAL_TOKEN: ${{ secrets.GH_PERSONAL_TOKEN }}
SLACK_RELEASES_WEBHOOK_URL: ${{ secrets.SLACK_RELEASES_WEBHOOK_URL }}

View File

@@ -27,7 +27,7 @@ jobs:
ui: ${{ steps.changes.outputs.ui }}
backend: ${{ steps.changes.outputs.backend }}
steps:
- uses: actions/checkout@v4
- uses: actions/checkout@v5
if: "!startsWith(github.ref, 'refs/tags/v')"
- uses: dorny/paths-filter@v3
if: "!startsWith(github.ref, 'refs/tags/v')"

View File

@@ -19,6 +19,7 @@
#plugin-databricks:io.kestra.plugin:plugin-databricks:LATEST
#plugin-datahub:io.kestra.plugin:plugin-datahub:LATEST
#plugin-dataform:io.kestra.plugin:plugin-dataform:LATEST
#plugin-datagen:io.kestra.plugin:plugin-datagen:LATEST
#plugin-dbt:io.kestra.plugin:plugin-dbt:LATEST
#plugin-debezium:io.kestra.plugin:plugin-debezium-db2:LATEST
#plugin-debezium:io.kestra.plugin:plugin-debezium-mongodb:LATEST
@@ -87,13 +88,18 @@
#plugin-powerbi:io.kestra.plugin:plugin-powerbi:LATEST
#plugin-pulsar:io.kestra.plugin:plugin-pulsar:LATEST
#plugin-redis:io.kestra.plugin:plugin-redis:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-bun:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-deno:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-go:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-groovy:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-jbang:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-julia:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-jython:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-lua:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-nashorn:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-node:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-perl:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-php:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-powershell:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-python:LATEST
#plugin-scripts:io.kestra.plugin:plugin-script-r:LATEST

Dockerfile.pr Normal file
View File

@@ -0,0 +1,7 @@
FROM kestra/kestra:develop
USER root
COPY --chown=kestra:kestra docker /
USER kestra

View File

@@ -65,10 +65,6 @@ Kestra is an open-source, event-driven orchestration platform that makes both **
## 🚀 Quick Start
### Try the Live Demo
Try Kestra with our [**Live Demo**](https://demo.kestra.io/ui/login?auto). No installation required!
### Get Started Locally in 5 Minutes
#### Launch Kestra in Docker

View File

@@ -7,7 +7,7 @@ set -e
# run tests on this image
LOCAL_IMAGE_VERSION="local-e2e"
LOCAL_IMAGE_VERSION="local-e2e-$(date +%s)"
echo "Running E2E"
echo "Start time: $(date '+%Y-%m-%d %H:%M:%S')"
@@ -15,6 +15,7 @@ start_time=$(date +%s)
echo ""
echo "Building the image for this current repository"
make clean
make build-docker VERSION=$LOCAL_IMAGE_VERSION
end_time=$(date +%s)
@@ -32,7 +33,7 @@ echo "npm i"
npm i
echo 'sh ./run-e2e-tests.sh --kestra-docker-image-to-test "kestra/kestra:$LOCAL_IMAGE_VERSION"'
sh ./run-e2e-tests.sh --kestra-docker-image-to-test "kestra/kestra:$LOCAL_IMAGE_VERSION"
./run-e2e-tests.sh --kestra-docker-image-to-test "kestra/kestra:$LOCAL_IMAGE_VERSION"
end_time2=$(date +%s)
elapsed2=$(( end_time2 - start_time2 ))

View File

@@ -16,7 +16,7 @@ plugins {
id "java"
id 'java-library'
id "idea"
id "com.gradleup.shadow" version "8.3.8"
id "com.gradleup.shadow" version "9.0.1"
id "application"
// test
@@ -225,14 +225,14 @@ subprojects {
}
testlogger {
theme 'mocha-parallel'
showExceptions true
showFullStackTraces true
showCauses true
slowThreshold 2000
showStandardStreams true
showPassedStandardStreams false
showSkippedStandardStreams true
theme = 'mocha-parallel'
showExceptions = true
showFullStackTraces = true
showCauses = true
slowThreshold = 2000
showStandardStreams = true
showPassedStandardStreams = false
showSkippedStandardStreams = true
}
}
}
@@ -410,7 +410,7 @@ jar {
shadowJar {
archiveClassifier.set(null)
mergeServiceFiles()
zip64 true
zip64 = true
}
distZip.dependsOn shadowJar
@@ -427,8 +427,8 @@ def executableDir = layout.buildDirectory.dir("executable")
def executable = layout.buildDirectory.file("executable/${project.name}-${project.version}").get().asFile
tasks.register('writeExecutableJar') {
group "build"
description "Write an executable jar from shadow jar"
group = "build"
description = "Write an executable jar from shadow jar"
dependsOn = [shadowJar]
final shadowJarFile = tasks.shadowJar.outputs.files.singleFile
@@ -454,8 +454,8 @@ tasks.register('writeExecutableJar') {
}
tasks.register('executableJar', Zip) {
group "build"
description "Zip the executable jar"
group = "build"
description = "Zip the executable jar"
dependsOn = [writeExecutableJar]
archiveFileName = "${project.name}-${project.version}.zip"
@@ -620,6 +620,28 @@ subprojects {subProject ->
}
}
}
if (subProject.name != 'platform' && subProject.name != 'cli') {
// only if a test source set actually exists (avoids empty artifacts)
def hasTests = subProject.extensions.findByName('sourceSets')?.findByName('test') != null
if (hasTests) {
// wire the artifact onto every Maven publication of this subproject
publishing {
publications {
withType(MavenPublication).configureEach { pub ->
// keep the normal java component + sources/javadoc already configured
pub.artifact(subProject.tasks.named('testsJar').get())
}
}
}
// make sure publish tasks build the tests jar first
tasks.matching { it.name.startsWith('publish') }.configureEach {
dependsOn subProject.tasks.named('testsJar')
}
}
}
}
}

View File

@@ -16,6 +16,6 @@ abstract public class AbstractServerCommand extends AbstractCommand implements S
}
protected static int defaultWorkerThread() {
return Runtime.getRuntime().availableProcessors() * 4;
return Runtime.getRuntime().availableProcessors() * 8;
}
}

View File

@@ -48,7 +48,7 @@ public class StandAloneCommand extends AbstractServerCommand {
@CommandLine.Option(names = "--tenant", description = "Tenant identifier, Required to load flows from path with the enterprise edition")
private String tenantId;
@CommandLine.Option(names = {"--worker-thread"}, description = "the number of worker threads, defaults to four times the number of available processors. Set it to 0 to avoid starting a worker.")
@CommandLine.Option(names = {"--worker-thread"}, description = "the number of worker threads, defaults to eight times the number of available processors. Set it to 0 to avoid starting a worker.")
private int workerThread = defaultWorkerThread();
@CommandLine.Option(names = {"--skip-executions"}, split=",", description = "a list of execution identifiers to skip, separated by a coma; for troubleshooting purpose only")

View File

@@ -22,7 +22,7 @@ public class WorkerCommand extends AbstractServerCommand {
@Inject
private ApplicationContext applicationContext;
@Option(names = {"-t", "--thread"}, description = "The max number of worker threads, defaults to four times the number of available processors")
@Option(names = {"-t", "--thread"}, description = "The max number of worker threads, defaults to eight times the number of available processors")
private int thread = defaultWorkerThread();
@Option(names = {"-g", "--worker-group"}, description = "The worker group key, must match the regex [a-zA-Z0-9_-]+ (EE only)")

View File

@@ -122,12 +122,13 @@ public class JsonSchemaGenerator {
if (jsonNode instanceof ObjectNode clazzSchema && clazzSchema.get("required") instanceof ArrayNode requiredPropsNode && clazzSchema.get("properties") instanceof ObjectNode properties) {
List<String> requiredFieldValues = StreamSupport.stream(requiredPropsNode.spliterator(), false)
.map(JsonNode::asText)
.toList();
.collect(Collectors.toList());
properties.fields().forEachRemaining(e -> {
int indexInRequiredArray = requiredFieldValues.indexOf(e.getKey());
if (indexInRequiredArray != -1 && e.getValue() instanceof ObjectNode valueNode && valueNode.has("default")) {
requiredPropsNode.remove(indexInRequiredArray);
requiredFieldValues.remove(indexInRequiredArray);
}
});

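The switch from Stream.toList() to Collectors.toList() above matters because requiredFieldValues is later mutated via remove(indexInRequiredArray): Stream.toList() returns an unmodifiable list, which would throw at that point. A minimal, self-contained sketch of the difference (not Kestra code):
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class MutableListSketch {
    public static void main(String[] args) {
        // Stream.toList() returns an unmodifiable list; remove() would throw UnsupportedOperationException.
        List<String> unmodifiable = Stream.of("a", "b").toList();

        // Collectors.toList() returns a list that is mutable in practice (an ArrayList today),
        // so removing by index works as the schema generator expects.
        List<String> mutable = Stream.of("a", "b").collect(Collectors.toList());
        mutable.remove(0);
        System.out.println(mutable); // [b]
    }
}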
View File

@@ -139,6 +139,12 @@ public record QueryFilter(
return List.of(Op.EQUALS, Op.NOT_EQUALS, Op.CONTAINS, Op.STARTS_WITH, Op.ENDS_WITH, Op.IN, Op.NOT_IN);
}
},
EXECUTION_ID("executionId") {
@Override
public List<Op> supportedOp() {
return List.of(Op.EQUALS, Op.NOT_EQUALS, Op.CONTAINS, Op.STARTS_WITH, Op.ENDS_WITH, Op.IN, Op.NOT_IN);
}
},
CHILD_FILTER("childFilter") {
@Override
public List<Op> supportedOp() {
@@ -213,7 +219,7 @@ public record QueryFilter(
@Override
public List<Field> supportedField() {
return List.of(Field.QUERY, Field.SCOPE, Field.NAMESPACE, Field.START_DATE,
Field.END_DATE, Field.FLOW_ID, Field.TRIGGER_ID, Field.MIN_LEVEL
Field.END_DATE, Field.FLOW_ID, Field.TRIGGER_ID, Field.MIN_LEVEL, Field.EXECUTION_ID
);
}
},
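With the new EXECUTION_ID field, log queries can now be filtered by execution identifier using the builder pattern exercised in QueryFilterTest further down in this diff. A hedged sketch; the value(...) setter and the concrete id are assumptions for illustration:
QueryFilter byExecution = QueryFilter.builder()
    .field(QueryFilter.Field.EXECUTION_ID)
    .operation(QueryFilter.Op.EQUALS)
    .value("2J7qJ9exampleExecutionId")   // hypothetical execution id; value(...) assumed from the record's components
    .build();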

View File

@@ -132,7 +132,7 @@ public class Execution implements DeletedInterface, TenantInterface {
* @param labels The Flow labels.
* @return a new {@link Execution}.
*/
public static Execution newExecution(final Flow flow, final List<Label> labels) {
public static Execution newExecution(final FlowInterface flow, final List<Label> labels) {
return newExecution(flow, null, labels, Optional.empty());
}
@@ -1040,6 +1040,16 @@ public class Execution implements DeletedInterface, TenantInterface {
return result;
}
/**
* Finds all children of the given {@link TaskRun}.
*/
public List<TaskRun> findChildren(TaskRun parentTaskRun) {
return taskRunList.stream()
.filter(taskRun -> parentTaskRun.getId().equals(taskRun.getParentTaskRunId()))
.toList();
}
public List<String> findParentsValues(TaskRun taskRun, boolean withCurrent) {
return (withCurrent ?
Stream.concat(findParents(taskRun).stream(), Stream.of(taskRun)) :

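A short usage sketch of the new helper, mirroring how the executor uses it later in this diff (variable names are illustrative):
List<TaskRun> children = execution.findChildren(parentTaskRun);
boolean allTerminated = children.stream()
    .allMatch(child -> child.getState().isTerminated());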
View File

@@ -38,6 +38,8 @@ public abstract class AbstractFlow implements FlowInterface {
@Min(value = 1)
Integer revision;
String description;
@Valid
List<Input<?>> inputs;

View File

@@ -61,13 +61,10 @@ public class Flow extends AbstractFlow implements HasUID {
}
});
String description;
Map<String, Object> variables;
@Valid
@NotEmpty
@Schema(additionalProperties = Schema.AdditionalPropertiesValue.TRUE)
List<Task> tasks;
@@ -125,7 +122,7 @@ public class Flow extends AbstractFlow implements HasUID {
AbstractRetry retry;
@Valid
@PluginProperty(beta = true)
@PluginProperty
List<SLA> sla;
public Stream<String> allTypes() {

View File

@@ -4,6 +4,7 @@ import io.kestra.core.models.executions.Execution;
import io.kestra.core.models.triggers.Trigger;
import io.kestra.core.utils.IdUtils;
import lombok.AllArgsConstructor;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import java.util.Optional;
@@ -57,6 +58,7 @@ public interface FlowId {
@Getter
@AllArgsConstructor
@EqualsAndHashCode
class Default implements FlowId {
private final String tenantId;
private final String namespace;

View File

@@ -31,6 +31,8 @@ public interface FlowInterface extends FlowId, DeletedInterface, TenantInterface
Pattern YAML_REVISION_MATCHER = Pattern.compile("(?m)^revision: \\d+\n?");
String getDescription();
boolean isDisabled();
boolean isDeleted();

View File

@@ -116,7 +116,7 @@ public class State {
}
public Instant maxDate() {
if (this.histories.size() == 0) {
if (this.histories.isEmpty()) {
return Instant.now();
}
@@ -124,7 +124,7 @@ public class State {
}
public Instant minDate() {
if (this.histories.size() == 0) {
if (this.histories.isEmpty()) {
return Instant.now();
}
@@ -173,6 +173,11 @@ public class State {
return this.current.isBreakpoint();
}
@JsonIgnore
public boolean isQueued() {
return this.current.isQueued();
}
@JsonIgnore
public boolean isRetrying() {
return this.current.isRetrying();
@@ -206,6 +211,14 @@ public class State {
return this.histories.get(this.histories.size() - 2).state.isPaused();
}
/**
* Returns true if the execution has failed and was then restarted.
* This disambiguates between a RESTARTED-after-PAUSED and a RESTARTED-after-FAILED state.
*/
public boolean failedThenRestarted() {
return this.current == Type.RESTARTED && this.histories.get(this.histories.size() - 2).state.isFailed();
}
@Introspected
public enum Type {
CREATED,
@@ -264,6 +277,10 @@ public class State {
return this == Type.KILLED;
}
public boolean isQueued(){
return this == Type.QUEUED;
}
/**
* @return states that are terminal to an execution
*/

View File

@@ -68,6 +68,19 @@ public class Property<T> {
String getExpression() {
return expression;
}
/**
* Returns a new {@link Property} with no cached rendered value,
* so that the next render will evaluate its original Pebble expression.
* <p>
* The returned property will still cache its rendered result.
* To re-evaluate on a subsequent render, call {@code skipCache()} again.
*
* @return a new {@link Property} without a pre-rendered value
*/
public Property<T> skipCache() {
return Property.ofExpression(expression);
}
/**
* Build a new Property object with a value already set.<br>

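A hedged usage sketch of the new fluent method; the expression is illustrative and assumes a property built from a Pebble expression via ofExpression:
Property<String> timestamp = Property.ofExpression("{{ now() }}");
// A rendered Property caches its value; skipCache() returns a copy without the cached
// value, so the next render re-evaluates the original Pebble expression.
Property<String> fresh = timestamp.skipCache();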
View File

@@ -222,6 +222,7 @@ public class Trigger extends TriggerContext implements HasUID {
}
// If trigger is a schedule and execution ended after the next execution date
else if (abstractTrigger instanceof Schedule schedule &&
this.getNextExecutionDate() != null &&
execution.getState().getEndDate().get().isAfter(this.getNextExecutionDate().toInstant())
) {
RecoverMissedSchedules recoverMissedSchedules = Optional.ofNullable(schedule.getRecoverMissedSchedules())

View File

@@ -5,11 +5,9 @@ import io.kestra.core.models.executions.ExecutionKilled;
import io.kestra.core.models.executions.LogEntry;
import io.kestra.core.models.executions.MetricEntry;
import io.kestra.core.models.flows.FlowInterface;
import io.kestra.core.models.flows.FlowWithSource;
import io.kestra.core.models.templates.Template;
import io.kestra.core.models.triggers.Trigger;
import io.kestra.core.runners.*;
import io.kestra.core.models.flows.Flow;
import io.kestra.core.models.templates.Template;
public interface QueueFactoryInterface {
String EXECUTION_NAMED = "executionQueue";
@@ -34,7 +32,7 @@ public interface QueueFactoryInterface {
QueueInterface<Executor> executor();
QueueInterface<WorkerJob> workerJob();
WorkerJobQueueInterface workerJob();
QueueInterface<WorkerTaskResult> workerTaskResult();

View File

@@ -27,7 +27,7 @@ public interface QueueInterface<T> extends Closeable, Pauseable {
void delete(String consumerGroup, T message) throws QueueException;
default Runnable receive(Consumer<Either<T, DeserializationException>> consumer) {
return receive((String) null, consumer);
return receive(null, consumer, false);
}
default Runnable receive(String consumerGroup, Consumer<Either<T, DeserializationException>> consumer) {

View File

@@ -0,0 +1,12 @@
package io.kestra.core.queues;
import java.io.Serial;
public class UnsupportedMessageException extends QueueException {
@Serial
private static final long serialVersionUID = 1L;
public UnsupportedMessageException(String message, Throwable cause) {
super(message, cause);
}
}

View File

@@ -0,0 +1,12 @@
package io.kestra.core.queues;
import io.kestra.core.exceptions.DeserializationException;
import io.kestra.core.runners.WorkerJob;
import io.kestra.core.utils.Either;
import java.util.function.Consumer;
public interface WorkerJobQueueInterface extends QueueInterface<WorkerJob> {
Runnable subscribe(String workerId, String workerGroup, Consumer<Either<WorkerJob, DeserializationException>> consumer);
}

View File

@@ -161,7 +161,7 @@ public interface ExecutionRepositoryInterface extends SaveRepositoryInterface<Ex
}
List<Execution> lastExecutions(
@Nullable String tenantId,
String tenantId,
@Nullable List<FlowFilter> flows
);
}

View File

@@ -81,11 +81,7 @@ public interface LogRepositoryInterface extends SaveRepositoryInterface<LogEntry
Flux<LogEntry> findAsync(
@Nullable String tenantId,
@Nullable String namespace,
@Nullable String flowId,
@Nullable String executionId,
@Nullable Level minLevel,
ZonedDateTime startDate
List<QueryFilter> filters
);
Flux<LogEntry> findAllAsync(@Nullable String tenantId);
@@ -98,5 +94,7 @@ public interface LogRepositoryInterface extends SaveRepositoryInterface<LogEntry
void deleteByQuery(String tenantId, String namespace, String flowId, String triggerId);
void deleteByFilters(String tenantId, List<QueryFilter> filters);
int deleteByQuery(String tenantId, String namespace, String flowId, String executionId, List<Level> logLevels, ZonedDateTime startDate, ZonedDateTime endDate);
}

View File

@@ -86,7 +86,7 @@ public class Executor {
public Boolean canBeProcessed() {
return !(this.getException() != null || this.getFlow() == null || this.getFlow() instanceof FlowWithException || this.getFlow().getTasks() == null ||
this.getExecution().isDeleted() || this.getExecution().getState().isPaused() || this.getExecution().getState().isBreakpoint());
this.getExecution().isDeleted() || this.getExecution().getState().isPaused() || this.getExecution().getState().isBreakpoint() || this.getExecution().getState().isQueued());
}
public Executor withFlow(FlowWithSource flow) {

View File

@@ -237,9 +237,9 @@ public class ExecutorService {
try {
state = flowableParent.resolveState(runContext, execution, parentTaskRun);
} catch (Exception e) {
// This will lead to the next task being still executed but at least Kestra will not crash.
// This will lead to the next task being still executed, but at least Kestra will not crash.
// This is the best we can do: Flowable tasks should not fail, so this is a kind of panic mode.
runContext.logger().error("Unable to resolve state from the Flowable task: " + e.getMessage(), e);
runContext.logger().error("Unable to resolve state from the Flowable task: {}", e.getMessage(), e);
state = Optional.of(State.Type.FAILED);
}
Optional<WorkerTaskResult> endedTask = childWorkerTaskTypeToWorkerTask(
@@ -589,6 +589,23 @@ public class ExecutorService {
list = list.stream().filter(workerTaskResult -> !workerTaskResult.getTaskRun().getId().equals(taskRun.getParentTaskRunId()))
.collect(Collectors.toCollection(ArrayList::new));
}
// If the task is a flowable and it's terminated, check that all children are terminated.
// This may not be the case for parallel flowable tasks like Parallel, Dag, ForEach...
// After a failed task, some child flowables may not be correctly terminated.
if (task instanceof FlowableTask<?> && taskRun.getState().isTerminated()) {
List<TaskRun> updated = executor.getExecution().findChildren(taskRun).stream()
.filter(child -> !child.getState().isTerminated())
.map(throwFunction(child -> child.withState(taskRun.getState().getCurrent())))
.toList();
if (!updated.isEmpty()) {
Execution execution = executor.getExecution();
for (TaskRun child : updated) {
execution = execution.withTaskRun(child);
}
executor = executor.withExecution(execution, "handledTerminatedFlowableTasks");
}
}
}
metricRegistry

View File

@@ -4,15 +4,11 @@ import io.kestra.core.exceptions.IllegalVariableEvaluationException;
import io.kestra.core.models.property.Property;
import io.kestra.core.models.tasks.Task;
import io.kestra.core.models.triggers.AbstractTrigger;
import jakarta.validation.ConstraintViolation;
import jakarta.validation.ConstraintViolationException;
import jakarta.validation.Validator;
import lombok.extern.slf4j.Slf4j;
import java.util.Collections;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import static io.kestra.core.utils.Rethrow.throwFunction;
@@ -27,12 +23,19 @@ public class RunContextProperty<T> {
private final RunContext runContext;
private final Task task;
private final AbstractTrigger trigger;
private final boolean skipCache;
RunContextProperty(Property<T> property, RunContext runContext) {
this(property, runContext, false);
}
RunContextProperty(Property<T> property, RunContext runContext, boolean skipCache) {
this.property = property;
this.runContext = runContext;
this.task = ((DefaultRunContext) runContext).getTask();
this.trigger = ((DefaultRunContext) runContext).getTrigger();
this.skipCache = skipCache;
}
private void validate() {
@@ -45,6 +48,19 @@ public class RunContextProperty<T> {
log.trace("Unable to do validation: no task or trigger found");
}
}
/**
* Returns a new {@link RunContextProperty} that will always be rendered by evaluating
* its original Pebble expression, without using any previously cached value.
* <p>
* This ensures that each time the property is rendered, the underlying
* expression is re-evaluated to produce a fresh result.
*
* @return a new {@link RunContextProperty} that bypasses the cache
*/
public RunContextProperty<T> skipCache() {
return new RunContextProperty<>(this.property, this.runContext, true);
}
/**
* Render a property then convert it to its target type and validate it.<br>
@@ -55,13 +71,13 @@ public class RunContextProperty<T> {
* Warning, due to the caching mechanism, this method is not thread-safe.
*/
public Optional<T> as(Class<T> clazz) throws IllegalVariableEvaluationException {
var as = Optional.ofNullable(this.property)
var as = Optional.ofNullable(getProperty())
.map(throwFunction(prop -> Property.as(prop, this.runContext, clazz)));
validate();
return as;
}
/**
* Render a property with additional variables, then convert it to its target type and validate it.<br>
*
@@ -71,7 +87,7 @@ public class RunContextProperty<T> {
* Warning, due to the caching mechanism, this method is not thread-safe.
*/
public Optional<T> as(Class<T> clazz, Map<String, Object> variables) throws IllegalVariableEvaluationException {
var as = Optional.ofNullable(this.property)
var as = Optional.ofNullable(getProperty())
.map(throwFunction(prop -> Property.as(prop, this.runContext, clazz, variables)));
validate();
@@ -89,7 +105,7 @@ public class RunContextProperty<T> {
*/
@SuppressWarnings("unchecked")
public <I> T asList(Class<I> itemClazz) throws IllegalVariableEvaluationException {
var as = Optional.ofNullable(this.property)
var as = Optional.ofNullable(getProperty())
.map(throwFunction(prop -> Property.asList(prop, this.runContext, itemClazz)))
.orElse((T) Collections.emptyList());
@@ -108,7 +124,7 @@ public class RunContextProperty<T> {
*/
@SuppressWarnings("unchecked")
public <I> T asList(Class<I> itemClazz, Map<String, Object> variables) throws IllegalVariableEvaluationException {
var as = Optional.ofNullable(this.property)
var as = Optional.ofNullable(getProperty())
.map(throwFunction(prop -> Property.asList(prop, this.runContext, itemClazz, variables)))
.orElse((T) Collections.emptyList());
@@ -127,7 +143,7 @@ public class RunContextProperty<T> {
*/
@SuppressWarnings("unchecked")
public <K,V> T asMap(Class<K> keyClass, Class<V> valueClass) throws IllegalVariableEvaluationException {
var as = Optional.ofNullable(this.property)
var as = Optional.ofNullable(getProperty())
.map(throwFunction(prop -> Property.asMap(prop, this.runContext, keyClass, valueClass)))
.orElse((T) Collections.emptyMap());
@@ -146,11 +162,15 @@ public class RunContextProperty<T> {
*/
@SuppressWarnings("unchecked")
public <K,V> T asMap(Class<K> keyClass, Class<V> valueClass, Map<String, Object> variables) throws IllegalVariableEvaluationException {
var as = Optional.ofNullable(this.property)
var as = Optional.ofNullable(getProperty())
.map(throwFunction(prop -> Property.asMap(prop, this.runContext, keyClass, valueClass, variables)))
.orElse((T) Collections.emptyMap());
validate();
return as;
}
private Property<T> getProperty() {
return skipCache ? this.property.skipCache() : this.property;
}
}

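The skipCache flag is consumed through getProperty(), and the ExecutionOutputs condition further down in this diff shows the intended call shape; a condensed fragment (assumes a RunContext, a Property expression, and a variables map in scope):
Boolean matched = runContext.render(expression)
    .skipCache()                      // bypass any previously cached rendered value
    .as(Boolean.class, variables)
    .orElseThrow();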
View File

@@ -85,7 +85,7 @@ public class Worker implements Service, Runnable, AutoCloseable {
@Inject
@Named(QueueFactoryInterface.WORKERJOB_NAMED)
private QueueInterface<WorkerJob> workerJobQueue;
private WorkerJobQueueInterface workerJobQueue;
@Inject
@Named(QueueFactoryInterface.WORKERTASKRESULT_NAMED)
@@ -274,12 +274,11 @@ public class Worker implements Service, Runnable, AutoCloseable {
}
}));
this.receiveCancellations.addFirst(this.workerJobQueue.receive(
this.receiveCancellations.addFirst(this.workerJobQueue.subscribe(
this.id,
this.workerGroup,
Worker.class,
either -> {
pendingJobCount.incrementAndGet();
executorService.execute(() -> {
pendingJobCount.decrementAndGet();
runningJobCount.incrementAndGet();
@@ -764,6 +763,7 @@ public class Worker implements Service, Runnable, AutoCloseable {
workerTask = workerTask.withTaskRun(workerTask.getTaskRun().withState(state));
WorkerTaskResult workerTaskResult = new WorkerTaskResult(workerTask.getTaskRun(), dynamicTaskRuns);
this.workerTaskResultQueue.emit(workerTaskResult);
// upload the cache file, hash may not be present if we didn't succeed in computing it
@@ -796,6 +796,10 @@ public class Worker implements Service, Runnable, AutoCloseable {
// If it's a message too big, we remove the outputs
failed = failed.withOutputs(Variables.empty());
}
if (e instanceof UnsupportedMessageException) {
// we expect the offending character to be in the output, so we remove it
failed = failed.withOutputs(Variables.empty());
}
WorkerTaskResult workerTaskResult = new WorkerTaskResult(failed);
RunContextLogger contextLogger = runContextLoggerFactory.create(workerTask);
contextLogger.logger().error("Unable to emit the worker task result to the queue: {}", e.getMessage(), e);
@@ -818,7 +822,11 @@ public class Worker implements Service, Runnable, AutoCloseable {
private Optional<String> hashTask(RunContext runContext, Task task) {
try {
var map = JacksonMapper.toMap(task);
var rMap = runContext.render(map);
// If there are task-provided variables, rendering the task may fail.
// The best we can do is to add a fake 'workingDir' as it's an often-added variable,
// and it should not be part of the task hash.
Map<String, Object> variables = Map.of("workingDir", "workingDir");
var rMap = runContext.render(map, variables);
var json = JacksonMapper.ofJson().writeValueAsBytes(rMap);
MessageDigest digest = MessageDigest.getInstance("SHA-256");
digest.update(json);

View File

@@ -8,6 +8,7 @@ import io.kestra.core.events.CrudEventType;
import io.kestra.core.exceptions.DeserializationException;
import io.kestra.core.exceptions.InternalException;
import io.kestra.core.metrics.MetricRegistry;
import io.kestra.core.models.HasUID;
import io.kestra.core.models.conditions.Condition;
import io.kestra.core.models.conditions.ConditionContext;
import io.kestra.core.models.executions.Execution;
@@ -318,7 +319,7 @@ public abstract class AbstractScheduler implements Scheduler, Service {
}
synchronized (this) { // we need a sync block as we read then update so we should not do it in multiple threads concurrently
List<Trigger> triggers = triggerState.findAllForAllTenants();
Map<String, Trigger> triggers = triggerState.findAllForAllTenants().stream().collect(Collectors.toMap(HasUID::uid, Function.identity()));
flows
.stream()
@@ -328,7 +329,8 @@ public abstract class AbstractScheduler implements Scheduler, Service {
.flatMap(flow -> flow.getTriggers().stream().filter(trigger -> trigger instanceof WorkerTriggerInterface).map(trigger -> new FlowAndTrigger(flow, trigger)))
.distinct()
.forEach(flowAndTrigger -> {
Optional<Trigger> trigger = triggers.stream().filter(t -> t.uid().equals(Trigger.uid(flowAndTrigger.flow(), flowAndTrigger.trigger()))).findFirst(); // must have one or none
String triggerUid = Trigger.uid(flowAndTrigger.flow(), flowAndTrigger.trigger());
Optional<Trigger> trigger = Optional.ofNullable(triggers.get(triggerUid));
if (trigger.isEmpty()) {
RunContext runContext = runContextFactory.of(flowAndTrigger.flow(), flowAndTrigger.trigger());
ConditionContext conditionContext = conditionService.conditionContext(runContext, flowAndTrigger.flow(), null);
@@ -467,9 +469,12 @@ public abstract class AbstractScheduler implements Scheduler, Service {
private List<FlowWithTriggers> computeSchedulable(List<FlowWithSource> flows, List<Trigger> triggerContextsToEvaluate, ScheduleContextInterface scheduleContext) {
List<String> flowToKeep = triggerContextsToEvaluate.stream().map(Trigger::getFlowId).toList();
List<String> flowIds = flows.stream().map(FlowId::uidWithoutRevision).toList();
Map<String, Trigger> triggerById = triggerContextsToEvaluate.stream().collect(Collectors.toMap(HasUID::uid, Function.identity()));
// delete trigger which flow has been deleted
triggerContextsToEvaluate.stream()
.filter(trigger -> !flows.stream().map(FlowId::uidWithoutRevision).toList().contains(FlowId.uid(trigger)))
.filter(trigger -> !flowIds.contains(FlowId.uid(trigger)))
.forEach(trigger -> {
try {
this.triggerState.delete(trigger);
@@ -491,12 +496,8 @@ public abstract class AbstractScheduler implements Scheduler, Service {
.map(abstractTrigger -> {
RunContext runContext = runContextFactory.of(flow, abstractTrigger);
ConditionContext conditionContext = conditionService.conditionContext(runContext, flow, null);
Trigger triggerContext = null;
Trigger lastTrigger = triggerContextsToEvaluate
.stream()
.filter(triggerContextToFind -> triggerContextToFind.uid().equals(Trigger.uid(flow, abstractTrigger)))
.findFirst()
.orElse(null);
Trigger triggerContext;
Trigger lastTrigger = triggerById.get(Trigger.uid(flow, abstractTrigger));
// If a trigger is not found in triggers to evaluate, then we ignore it
if (lastTrigger == null) {
return null;

View File

@@ -1,5 +1,8 @@
package io.kestra.core.server;
import com.fasterxml.jackson.annotation.JsonCreator;
import io.kestra.core.utils.Enums;
/**
* Supported Kestra's service types.
*/
@@ -9,4 +12,14 @@ public enum ServiceType {
SCHEDULER,
WEBSERVER,
WORKER,
INVALID;
@JsonCreator
public static ServiceType fromString(final String value) {
try {
return Enums.getForNameIgnoreCase(value, ServiceType.class, INVALID);
} catch (IllegalArgumentException e) {
return INVALID;
}
}
}

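The @JsonCreator makes deserialization tolerant of unknown or differently-cased values instead of failing; a small sketch of the expected behaviour:
ServiceType webserver = ServiceType.fromString("webserver");   // WEBSERVER (case-insensitive match)
ServiceType unknown = ServiceType.fromString("not-a-service"); // INVALID rather than an exception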
View File

@@ -9,6 +9,7 @@ import io.kestra.core.models.flows.FlowInterface;
import io.kestra.core.models.flows.FlowWithException;
import io.kestra.core.models.flows.FlowWithSource;
import io.kestra.core.models.flows.GenericFlow;
import io.kestra.core.models.tasks.RunnableTask;
import io.kestra.core.models.topologies.FlowTopology;
import io.kestra.core.models.triggers.AbstractTrigger;
import io.kestra.core.models.validations.ModelValidator;
@@ -51,7 +52,6 @@ import java.util.stream.StreamSupport;
@Singleton
@Slf4j
public class FlowService {
@Inject
Optional<FlowRepositoryInterface> flowRepository;
@@ -236,6 +236,7 @@ public class FlowService {
}
List<String> warnings = new ArrayList<>(checkValidSubflows(flow, tenantId));
List<io.kestra.plugin.core.trigger.Flow> flowTriggers = ListUtils.emptyOnNull(flow.getTriggers()).stream()
.filter(io.kestra.plugin.core.trigger.Flow.class::isInstance)
.map(io.kestra.plugin.core.trigger.Flow.class::cast)
@@ -246,6 +247,21 @@ public class FlowService {
}
});
// add a warning for runnable-only properties (timeout, workerGroup, taskCache) when used on a task that is not runnable
flow.allTasksWithChilds().forEach(task -> {
if (!(task instanceof RunnableTask<?>)) {
if (task.getTimeout() != null) {
warnings.add("The task '" + task.getId() + "' cannot use the 'timeout' property as it's only relevant for runnable tasks.");
}
if (task.getTaskCache() != null) {
warnings.add("The task '" + task.getId() + "' cannot use the 'taskCache' property as it's only relevant for runnable tasks.");
}
if (task.getWorkerGroup() != null) {
warnings.add("The task '" + task.getId() + "' cannot use the 'workerGroup' property as it's only relevant for runnable tasks.");
}
}
});
return warnings;
}
@@ -531,29 +547,26 @@ public class FlowService {
throw noRepositoryException();
}
List<FlowTopology> flowTopologies = flowTopologyRepository.get().findByFlow(tenant, namespace, id, destinationOnly);
return expandAll ? recursiveFlowTopology(tenant, namespace, id, destinationOnly) : flowTopologies.stream();
return expandAll ? recursiveFlowTopology(new ArrayList<>(), tenant, namespace, id, destinationOnly) : flowTopologyRepository.get().findByFlow(tenant, namespace, id, destinationOnly).stream();
}
private Stream<FlowTopology> recursiveFlowTopology(String tenantId, String namespace, String flowId, boolean destinationOnly) {
private Stream<FlowTopology> recursiveFlowTopology(List<FlowId> flowIds, String tenantId, String namespace, String id, boolean destinationOnly) {
if (flowTopologyRepository.isEmpty()) {
throw noRepositoryException();
}
List<FlowTopology> flowTopologies = flowTopologyRepository.get().findByFlow(tenantId, namespace, flowId, destinationOnly);
List<FlowTopology> subTopologies = flowTopologies.stream()
// filter on destination is not the current node to avoid an infinite loop
.filter(topology -> !(topology.getDestination().getTenantId().equals(tenantId) && topology.getDestination().getNamespace().equals(namespace) && topology.getDestination().getId().equals(flowId)))
.toList();
List<FlowTopology> flowTopologies = flowTopologyRepository.get().findByFlow(tenantId, namespace, id, destinationOnly);
if (subTopologies.isEmpty()) {
FlowId flowId = FlowId.of(tenantId, namespace, id, null);
if (flowIds.contains(flowId)) {
return flowTopologies.stream();
} else {
return Stream.concat(flowTopologies.stream(), subTopologies.stream()
.map(topology -> topology.getDestination())
// recursively fetch child nodes
.flatMap(destination -> recursiveFlowTopology(destination.getTenantId(), destination.getNamespace(), destination.getId(), destinationOnly)));
}
flowIds.add(flowId);
return flowTopologies.stream()
.flatMap(topology -> Stream.of(topology.getDestination(), topology.getSource()))
// recursively fetch child nodes
.flatMap(node -> recursiveFlowTopology(flowIds, node.getTenantId(), node.getNamespace(), node.getId(), destinationOnly));
}
private IllegalStateException noRepositoryException() {

View File

@@ -54,6 +54,18 @@ public interface StorageInterface extends AutoCloseable, Plugin {
@Retryable(includes = {IOException.class}, excludes = {FileNotFoundException.class})
InputStream get(String tenantId, @Nullable String namespace, URI uri) throws IOException;
/**
* Retrieves an input stream of an instance resource for the given storage URI.
* An instance resource is a resource stored outside any tenant storage, accessible for the whole instance
*
* @param namespace the namespace of the object (may be null)
* @param uri the URI of the object to retrieve
* @return an InputStream to read the object's contents
* @throws IOException if the object cannot be read
*/
@Retryable(includes = {IOException.class}, excludes = {FileNotFoundException.class})
InputStream getInstanceResource(@Nullable String namespace, URI uri) throws IOException;
/**
* Retrieves a storage object along with its metadata.
*
@@ -91,6 +103,18 @@ public interface StorageInterface extends AutoCloseable, Plugin {
@Retryable(includes = {IOException.class}, excludes = {FileNotFoundException.class})
List<FileAttributes> list(String tenantId, @Nullable String namespace, URI uri) throws IOException;
/**
* Lists the attributes of all instance files and instance directories under the given URI.
* An instance resource is a resource stored outside any tenant storage, accessible for the whole instance
*
* @param namespace the namespace (may be null)
* @param uri the URI to list
* @return a list of file attributes
* @throws IOException if the listing fails
*/
@Retryable(includes = {IOException.class}, excludes = {FileNotFoundException.class})
List<FileAttributes> listInstanceResource(@Nullable String namespace, URI uri) throws IOException;
/**
* Checks whether the given URI exists in the internal storage.
*
@@ -108,6 +132,23 @@ public interface StorageInterface extends AutoCloseable, Plugin {
}
}
/**
* Checks whether the given URI exists in the instance internal storage.
* An instance resource is a resource stored outside any tenant storage, accessible for the whole instance
*
* @param namespace the namespace (may be null)
* @param uri the URI to check
* @return true if the URI exists, false otherwise
*/
@SuppressWarnings("try")
default boolean existsInstanceResource(@Nullable String namespace, URI uri) {
try (InputStream ignored = getInstanceResource(namespace, uri)) {
return true;
} catch (IOException ioe) {
return false;
}
}
/**
* Retrieves the metadata attributes for the given URI.
*
@@ -120,6 +161,18 @@ public interface StorageInterface extends AutoCloseable, Plugin {
@Retryable(includes = {IOException.class}, excludes = {FileNotFoundException.class})
FileAttributes getAttributes(String tenantId, @Nullable String namespace, URI uri) throws IOException;
/**
* Retrieves the metadata attributes for the given URI.
* An instance resource is a resource stored outside any tenant storage, accessible for the whole instance
*
* @param namespace the namespace (may be null)
* @param uri the URI of the object
* @return the file attributes
* @throws IOException if the attributes cannot be retrieved
*/
@Retryable(includes = {IOException.class}, excludes = {FileNotFoundException.class})
FileAttributes getInstanceAttributes(@Nullable String namespace, URI uri) throws IOException;
/**
* Stores data at the given URI.
*
@@ -148,34 +201,86 @@ public interface StorageInterface extends AutoCloseable, Plugin {
@Retryable(includes = {IOException.class})
URI put(String tenantId, @Nullable String namespace, URI uri, StorageObject storageObject) throws IOException;
/**
* Stores instance data at the given URI.
* An instance resource is a resource stored outside any tenant storage, accessible for the whole instance
*
* @param namespace the namespace (may be null)
* @param uri the target URI
* @param data the input stream containing the data to store
* @return the URI of the stored object
* @throws IOException if storing fails
*/
@Retryable(includes = {IOException.class})
default URI putInstanceResource(@Nullable String namespace, URI uri, InputStream data) throws IOException {
return this.putInstanceResource(namespace, uri, new StorageObject(null, data));
}
/**
* Stores an instance storage object at the given URI.
* An instance resource is a resource stored outside any tenant storage, accessible for the whole instance
*
* @param namespace the namespace (may be null)
* @param uri the target URI
* @param storageObject the storage object to store
* @return the URI of the stored object
* @throws IOException if storing fails
*/
@Retryable(includes = {IOException.class})
URI putInstanceResource(@Nullable String namespace, URI uri, StorageObject storageObject) throws IOException;
/**
* Deletes the object at the given URI.
*
* @param tenantId the tenant identifier (may be null for global deletion)
* @param tenantId the tenant identifier
* @param namespace the namespace (may be null)
* @param uri the URI of the object to delete
* @return true if deletion was successful
* @throws IOException if deletion fails
*/
@Retryable(includes = {IOException.class})
boolean delete(@Nullable String tenantId, @Nullable String namespace, URI uri) throws IOException;
boolean delete(String tenantId, @Nullable String namespace, URI uri) throws IOException;
/**
* Deletes the instance object at the given URI.
* An instance resource is a resource stored outside any tenant storage, accessible for the whole instance
*
* @param namespace the namespace (may be null)
* @param uri the URI of the object to delete
* @return true if deletion was successful
* @throws IOException if deletion fails
*/
@Retryable(includes = {IOException.class})
boolean deleteInstanceResource(@Nullable String namespace, URI uri) throws IOException;
/**
* Creates a new directory at the given URI.
*
* @param tenantId the tenant identifier (optional)
* @param tenantId the tenant identifier
* @param namespace the namespace (optional)
* @param uri the URI of the directory to create
* @return the URI of the created directory
* @throws IOException if creation fails
*/
@Retryable(includes = {IOException.class})
URI createDirectory(@Nullable String tenantId, @Nullable String namespace, URI uri) throws IOException;
URI createDirectory(String tenantId, @Nullable String namespace, URI uri) throws IOException;
/**
* Creates a new instance directory at the given URI.
* An instance resource is a resource stored outside any tenant storage, accessible for the whole instance
*
* @param namespace the namespace
* @param uri the URI of the directory to create
* @return the URI of the created directory
* @throws IOException if creation fails
*/
@Retryable(includes = {IOException.class})
URI createInstanceDirectory(String namespace, URI uri) throws IOException;
/**
* Moves an object from one URI to another.
*
* @param tenantId the tenant identifier (optional)
* @param tenantId the tenant identifier
* @param namespace the namespace (optional)
* @param from the source URI
* @param to the destination URI
@@ -183,7 +288,7 @@ public interface StorageInterface extends AutoCloseable, Plugin {
* @throws IOException if moving fails
*/
@Retryable(includes = {IOException.class}, excludes = {FileNotFoundException.class})
URI move(@Nullable String tenantId, @Nullable String namespace, URI from, URI to) throws IOException;
URI move(String tenantId, @Nullable String namespace, URI from, URI to) throws IOException;
/**
* Deletes all objects that match the given URI prefix.
@@ -226,23 +331,32 @@ public interface StorageInterface extends AutoCloseable, Plugin {
}
/**
* Builds the internal storage path based on tenant ID and URI.
* Builds the internal storage path based on the URI.
*
* @param tenantId the tenant identifier (maybe null)
* @param uri the URI of the object
* @return a normalized internal path
*/
default String getPath(@Nullable String tenantId, URI uri) {
default String getPath(URI uri) {
if (uri == null) {
uri = URI.create("/");
}
parentTraversalGuard(uri);
String path = uri.getPath();
if (tenantId != null) {
path = tenantId + (path.startsWith("/") ? path : "/" + path);
}
path = path.replaceFirst("^/", "");
return path;
}
/**
* Builds the internal storage path based on tenant ID and URI.
*
* @param tenantId the tenant identifier
* @param uri the URI of the object
* @return a normalized internal path
*/
default String getPath(String tenantId, URI uri) {
String path = getPath(uri);
path = tenantId + (path.startsWith("/") ? path : "/" + path);
return path;
}

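A hedged fragment showing how the new instance-level methods and the split getPath overloads fit together (assumes a StorageInterface instance named storage in scope; the URI and tenant id are illustrative, and IOException handling plus java.io/java.net imports are omitted):
URI uri = URI.create("/shared/config/settings.json");   // hypothetical instance resource

// Instance resources live outside any tenant storage and are visible to the whole instance.
storage.putInstanceResource(null, uri, new ByteArrayInputStream("{}".getBytes(StandardCharsets.UTF_8)));
try (InputStream in = storage.getInstanceResource(null, uri)) {
    // read the shared resource
}

// Path resolution: the tenant-aware overload simply prefixes the tenant id.
String instancePath = storage.getPath(uri);          // "shared/config/settings.json"
String tenantPath   = storage.getPath("main", uri);  // "main/shared/config/settings.json"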
View File

@@ -1,37 +1,193 @@
package io.kestra.core.utils;
import java.util.NoSuchElementException;
import java.util.Objects;
import java.util.Optional;
import java.util.function.Function;
public class Either<L, R> {
private final Optional<L> left;
private final Optional<R> right;
private Either(Optional<L> left, Optional<R> right) {
this.left = left;
this.right = right;
}
/**
* Simple {@link Either} monad type.
*
* @param <L> the {@link Left} type.
* @param <R> the {@link Right} type.
*/
public abstract sealed class Either<L, R> permits Either.Left, Either.Right {
public static <L, R> Either<L, R> left(L value) {
return new Either<>(Optional.ofNullable(value), Optional.empty());
return new Left<>(value);
}
public boolean isLeft() {
return this.left.isPresent();
}
public L getLeft() {
return this.left.get();
}
public static <L, R> Either<L, R> right(R value) {
return new Either<>(Optional.empty(), Optional.ofNullable(value));
return new Right<>(value);
}
public boolean isRight() {
return this.right.isPresent();
/**
* Returns {@code true} if this is a {@link Left}, {@code false} otherwise.
*/
public abstract boolean isLeft();
/**
* Returns {@code true} if this is a {@link Right}, {@code false} otherwise.
*/
public abstract boolean isRight();
/**
* Returns the left value.
*
* @throws NoSuchElementException if is not left.
*/
public abstract L getLeft();
/**
* Returns the right value.
*
* @throws NoSuchElementException if is not right.
*/
public abstract R getRight();
public LeftProjection<L, R> left() {
return new LeftProjection<>(this);
}
public R getRight() {
return this.right.get();
public RightProjection<L, R> right() {
return new RightProjection<>(this);
}
}
public <T> T fold(final Function<L, T> fl, final Function<R, T> fr) {
return isLeft() ? fl.apply(getLeft()) : fr.apply(getRight());
}
public static final class Left<L, R> extends Either<L, R> {
private final L value;
private Left(L value) {
this.value = value;
}
/**
* @return {@code true}.
*/
@Override
public boolean isLeft() {
return true;
}
/**
* @return {@code false}.
*/
@Override
public boolean isRight() {
return false;
}
@Override
public L getLeft() {
return value;
}
@Override
public R getRight() {
throw new NoSuchElementException("This is Left");
}
}
public static final class Right<L, R> extends Either<L, R> {
private final R value;
private Right(R value) {
this.value = value;
}
/**
* @return {@code false}.
*/
@Override
public boolean isLeft() {
return false;
}
/**
* @return {@code true}.
*/
@Override
public boolean isRight() {
return true;
}
@Override
public L getLeft() {
throw new NoSuchElementException("This is Right");
}
@Override
public R getRight() {
return value;
}
}
public static class LeftProjection<L, R> {
private final Either<L, R> either;
LeftProjection(final Either<L, R> either) {
Objects.requireNonNull(either, "either can't be null");
this.either = either;
}
public boolean exists() {
return either.isLeft();
}
public L get() {
return either.getLeft();
}
public <LL> Either<LL, R> map(final Function<? super L, ? extends LL> fn) {
if (either.isLeft()) return Either.left(fn.apply(either.getLeft()));
else return Either.right(either.getRight());
}
public <LL> Either<LL, R> flatMap(final Function<? super L, Either<LL, R>> fn) {
if (either.isLeft()) return fn.apply(either.getLeft());
else return Either.right(either.getRight());
}
public Optional<L> toOptional() {
return exists() ? Optional.of(either.getLeft()) : Optional.empty();
}
}
public static class RightProjection<L, R> {
private final Either<L, R> either;
RightProjection(final Either<L, R> either) {
Objects.requireNonNull(either, "either can't be null");
this.either = either;
}
public boolean exists() {
return either.isRight();
}
public R get() {
return either.getRight();
}
public <RR> Either<L, RR> map(final Function<? super R, ? extends RR> fn) {
if (either.isRight()) return Either.right(fn.apply(either.getRight()));
else return Either.left(either.getLeft());
}
public <RR> Either<L, RR> flatMap(final Function<? super R, Either<L, RR>> fn) {
if (either.isRight()) return fn.apply(either.getRight());
else return Either.left(either.getLeft());
}
public Optional<R> toOptional() {
return exists() ? Optional.of(either.getRight()) : Optional.empty();
}
}
}

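A self-contained sketch of the reworked Either's surface (left/right factories, fold, and the new projections), independent of Kestra runtime types:
import io.kestra.core.utils.Either;
import java.util.Optional;

public class EitherSketch {
    public static void main(String[] args) {
        Either<Exception, Integer> ok = Either.right(42);
        Either<Exception, Integer> ko = Either.left(new IllegalStateException("boom"));

        // fold collapses both sides into a single value
        String summary = ok.fold(Throwable::getMessage, value -> "value=" + value);
        System.out.println(summary); // value=42

        // projections expose map/flatMap/toOptional on one side only
        Optional<Integer> doubled = ko.right().map(v -> v * 2).right().toOptional();
        System.out.println(doubled.isEmpty()); // true: ko has no right value
    }
}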
View File

@@ -4,6 +4,7 @@ import jakarta.annotation.Nullable;
import jakarta.validation.constraints.NotNull;
import java.util.Arrays;
import java.util.List;
import java.util.Locale;
import java.util.Map;
import java.util.Optional;
@@ -118,6 +119,25 @@ public final class Enums {
));
}
/**
* Convert an object to a list of a specific enum.
* @param value the object to convert to list of enum.
* @param enumClass the class of the enum to convert to.
* @return A list of the corresponding enum type
* @param <T> The type of the enum.
* @throws IllegalArgumentException If the value does not match any enum value.
*/
public static <T extends Enum<T>> List<T> fromList(Object value, Class<T> enumClass) {
return switch (value) {
case List<?> list when !list.isEmpty() && enumClass.isInstance(list.getFirst()) -> (List<T>) list;
case List<?> list when !list.isEmpty() && list.getFirst() instanceof String ->
list.stream().map(item -> Enum.valueOf(enumClass, item.toString().toUpperCase())).collect(Collectors.toList());
case Enum<?> enumValue when enumClass.isInstance(enumValue) -> List.of(enumClass.cast(enumValue));
case String stringValue -> List.of(Enum.valueOf(enumClass, stringValue.toUpperCase()));
default -> throw new IllegalArgumentException("Field requires a " + enumClass.getSimpleName() + " or List<" + enumClass.getSimpleName() + "> value");
};
}
private Enums() {
}
}

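A hedged sketch of the new list conversion helper, using a hypothetical Color enum that is not part of Kestra:
import io.kestra.core.utils.Enums;
import java.util.List;

public class EnumsFromListSketch {
    enum Color { RED, GREEN, BLUE }

    public static void main(String[] args) {
        // strings are upper-cased before matching
        List<Color> fromStrings = Enums.fromList(List.of("red", "Blue"), Color.class);
        System.out.println(fromStrings); // [RED, BLUE]

        // a single string or a single enum constant is wrapped into a one-element list
        List<Color> single = Enums.fromList("green", Color.class);
        System.out.println(single); // [GREEN]
    }
}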
View File

@@ -55,4 +55,20 @@ public class ListUtils {
return newList;
}
public static List<?> convertToList(Object object){
if (object instanceof List<?> list) {
return list;
} else {
throw new IllegalArgumentException("%s in not an instance of List".formatted(object.getClass()));
}
}
public static List<String> convertToListString(Object object){
if (object instanceof List<?> list && (list.isEmpty() || list.getFirst() instanceof String)) {
return (List<String>) list;
} else {
throw new IllegalArgumentException("%s in not an instance of List of String".formatted(object));
}
}
}

View File

@@ -169,7 +169,7 @@ public class MapUtils {
}
/**
* Utility method nested a flattened map.
* Utility method that nests a flattened map.
*
* @param flatMap the flattened map.
* @return the nested map.
@@ -203,4 +203,44 @@ public class MapUtils {
}
return result;
}
/**
* Utility method that flattens a nested map.
* <p>
* NOTE: for simplicity, this method does not allow flattening maps with conflicting keys that would end up under different flattened keys;
* flattening {k1: {k2: {k3: v1}, k4: v2}} to {k1.k2.k3: v1, k1.k4: v2} is prohibited for now. This restriction could be relaxed later if needed.
*
* @param nestedMap the nested map.
* @return the flattened map.
*
* @throws IllegalArgumentException if any entry contains a map of more than one element.
*/
public static Map<String, Object> nestedToFlattenMap(@NotNull Map<String, Object> nestedMap) {
Map<String, Object> result = new TreeMap<>();
for (Map.Entry<String, Object> entry : nestedMap.entrySet()) {
if (entry.getValue() instanceof Map<?, ?> map) {
Map.Entry<String, Object> flatten = flattenEntry(entry.getKey(), (Map<String, Object>) map);
result.put(flatten.getKey(), flatten.getValue());
} else {
result.put(entry.getKey(), entry.getValue());
}
}
return result;
}
private static Map.Entry<String, Object> flattenEntry(String key, Map<String, Object> value) {
if (value.size() > 1) {
throw new IllegalArgumentException("You cannot flatten a map with an entry that is a map of more than one element, conflicting key: " + key);
}
Map.Entry<String, Object> entry = value.entrySet().iterator().next();
String newKey = key + "." + entry.getKey();
Object newValue = entry.getValue();
if (newValue instanceof Map<?, ?> map) {
return flattenEntry(newKey, (Map<String, Object>) map);
} else {
return Map.entry(newKey, newValue);
}
}
}

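A worked example of the new flattening helper, including the conflicting case the javadoc prohibits (assumes MapUtils lives in io.kestra.core.utils like the other utilities in this diff):
import io.kestra.core.utils.MapUtils;
import java.util.Map;

public class FlattenSketch {
    public static void main(String[] args) {
        Map<String, Object> nested = Map.of("k1", Map.of("k2", Map.of("k3", "v1")), "k4", "v2");
        System.out.println(MapUtils.nestedToFlattenMap(nested)); // {k1.k2.k3=v1, k4=v2}

        // Prohibited for now: a nested value holding more than one entry cannot be
        // flattened unambiguously by this implementation and throws IllegalArgumentException.
        Map<String, Object> conflicting = Map.of("k1", Map.of("k2", "v1", "k3", "v2"));
        try {
            MapUtils.nestedToFlattenMap(conflicting);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}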
View File

@@ -0,0 +1,77 @@
package io.kestra.core.validations;
import io.micronaut.context.annotation.Context;
import io.micronaut.context.env.Environment;
import jakarta.annotation.PostConstruct;
import jakarta.inject.Inject;
import lombok.extern.slf4j.Slf4j;
import java.io.Serial;
import java.net.MalformedURLException;
import java.net.URI;
import java.net.URL;
import java.util.List;
/**
* Enforces validation rules upon the application configuration.
*/
@Slf4j
@Context
public class AppConfigValidator {
private static final String KESTRA_URL_KEY = "kestra.url";
private final Environment environment;
@Inject
public AppConfigValidator(Environment environment) {
this.environment = environment;
}
@PostConstruct
void validate() {
final List<Boolean> validationResults = List.of(
isKestraUrlValid()
);
if (validationResults.contains(false)) {
throw new AppConfigException("Invalid configuration");
}
}
private boolean isKestraUrlValid() {
if (!environment.containsProperty(KESTRA_URL_KEY)) {
return true;
}
final String rawUrl = environment.getProperty(KESTRA_URL_KEY, String.class).orElseThrow();
final URL url;
try {
url = URI.create(rawUrl).toURL();
} catch (IllegalArgumentException | MalformedURLException e) {
log.error(
"Value of the '{}' configuration property must be a valid URL - e.g. https://your.company.com",
KESTRA_URL_KEY
);
return false;
}
if (!List.of("http", "https").contains(url.getProtocol())) {
log.error(
"Value of the '{}' configuration property must contain either HTTP or HTTPS scheme - e.g. https://your.company.com",
KESTRA_URL_KEY
);
return false;
}
return true;
}
public static class AppConfigException extends RuntimeException {
@Serial
private static final long serialVersionUID = 1L;
public AppConfigException(String errorMessage) {
super(errorMessage);
}
}
}

View File

@@ -54,9 +54,10 @@ public class FlowValidator implements ConstraintValidator<FlowValidation, Flow>
violations.add("Namespace '" + value.getNamespace() + "' does not exist but is required to exist before a flow can be created in it.");
}
List<Task> allTasks = value.allTasksWithChilds();
// tasks unique id
List<String> taskIds = value.allTasksWithChilds()
.stream()
List<String> taskIds = allTasks.stream()
.map(Task::getId)
.toList();
@@ -72,8 +73,8 @@ public class FlowValidator implements ConstraintValidator<FlowValidation, Flow>
violations.add("Duplicate trigger id with name [" + String.join(", ", duplicateIds) + "]");
}
value.allTasksWithChilds()
.stream().filter(task -> task instanceof ExecutableTask<?> executableTask
allTasks.stream()
.filter(task -> task instanceof ExecutableTask<?> executableTask
&& value.getId().equals(executableTask.subflowId().flowId())
&& value.getNamespace().equals(executableTask.subflowId().namespace()))
.forEach(task -> violations.add("Recursive call to flow [" + value.getNamespace() + "." + value.getId() + "]"));
@@ -102,7 +103,7 @@ public class FlowValidator implements ConstraintValidator<FlowValidation, Flow>
.map(input -> Pattern.compile("\\{\\{\\s*inputs." + input.getId() + "\\s*\\}\\}"))
.collect(Collectors.toList());
List<String> invalidTasks = value.allTasks()
List<String> invalidTasks = allTasks.stream()
.filter(task -> checkObjectFieldsWithPatterns(task, inputsWithMinusPatterns))
.map(task -> task.getId())
.collect(Collectors.toList());
@@ -112,12 +113,12 @@ public class FlowValidator implements ConstraintValidator<FlowValidation, Flow>
" [" + String.join(", ", invalidTasks) + "]");
}
List<Pattern> outputsWithMinusPattern = value.allTasks()
List<Pattern> outputsWithMinusPattern = allTasks.stream()
.filter(output -> Optional.ofNullable(output.getId()).orElse("").contains("-"))
.map(output -> Pattern.compile("\\{\\{\\s*outputs\\." + output.getId() + "\\.[^}]+\\s*\\}\\}"))
.collect(Collectors.toList());
invalidTasks = value.allTasks()
invalidTasks = allTasks.stream()
.filter(task -> checkObjectFieldsWithPatterns(task, outputsWithMinusPattern))
.map(task -> task.getId())
.collect(Collectors.toList());

View File

@@ -90,7 +90,7 @@ public class ExecutionOutputs extends Condition implements ScheduleCondition {
private static final String OUTPUTS_VAR = "outputs";
@NotNull
private Property<String> expression;
private Property<Boolean> expression;
/** {@inheritDoc} **/
@SuppressWarnings("unchecked")
@@ -105,9 +105,8 @@ public class ExecutionOutputs extends Condition implements ScheduleCondition {
conditionContext.getVariables(),
Map.of(TRIGGER_VAR, Map.of(OUTPUTS_VAR, conditionContext.getExecution().getOutputs()))
);
String render = conditionContext.getRunContext().render(expression).as(String.class, variables).orElseThrow();
return !(render.isBlank() || render.trim().equals("false"));
return conditionContext.getRunContext().render(expression).skipCache().as(Boolean.class, variables).orElseThrow();
}
private boolean hasNoOutputs(final Execution execution) {

View File

@@ -19,7 +19,6 @@ import lombok.experimental.SuperBuilder;
@NoArgsConstructor
@JsonInclude(JsonInclude.Include.NON_DEFAULT)
@EqualsAndHashCode
//@TriggersDataFilterValidation
@Schema(
title = "Display Execution data in a dashboard chart.",
description = "Execution data can be displayed in charts broken out by Namespace and filtered by State, for example."

View File

@@ -208,48 +208,50 @@ public class Subflow extends Task implements ExecutableTask<Subflow.Output>, Chi
return Optional.empty();
}
boolean isOutputsAllowed = runContext
.<Boolean>pluginConfiguration(PLUGIN_FLOW_OUTPUTS_ENABLED)
.orElse(true);
final Output.OutputBuilder builder = Output.builder()
.executionId(execution.getId())
.state(execution.getState().getCurrent());
final Map<String, Object> subflowOutputs = Optional
.ofNullable(flow.getOutputs())
.map(outputs -> outputs
.stream()
.collect(Collectors.toMap(
io.kestra.core.models.flows.Output::getId,
io.kestra.core.models.flows.Output::getValue)
)
)
.orElseGet(() -> isOutputsAllowed ? this.getOutputs() : null);
VariablesService variablesService = ((DefaultRunContext) runContext).getApplicationContext().getBean(VariablesService.class);
if (subflowOutputs != null) {
try {
Map<String, Object> outputs = runContext.render(subflowOutputs);
FlowInputOutput flowInputOutput = ((DefaultRunContext)runContext).getApplicationContext().getBean(FlowInputOutput.class); // this is hacking
if (flow.getOutputs() != null && flowInputOutput != null) {
outputs = flowInputOutput.typedOutputs(flow, execution, outputs);
}
builder.outputs(outputs);
} catch (Exception e) {
runContext.logger().warn("Failed to extract outputs with the error: '{}'", e.getLocalizedMessage(), e);
var state = State.Type.fail(this);
Variables variables = variablesService.of(StorageContext.forTask(taskRun), builder.build());
taskRun = taskRun
.withState(state)
.withAttempts(Collections.singletonList(TaskRunAttempt.builder().state(new State().withState(state)).build()))
.withOutputs(variables);
if (this.wait) { // we only compute outputs if we wait for the subflow
boolean isOutputsAllowed = runContext
.<Boolean>pluginConfiguration(PLUGIN_FLOW_OUTPUTS_ENABLED)
.orElse(true);
return Optional.of(SubflowExecutionResult.builder()
.executionId(execution.getId())
.state(State.Type.FAILED)
.parentTaskRun(taskRun)
.build());
final Map<String, Object> subflowOutputs = Optional
.ofNullable(flow.getOutputs())
.map(outputs -> outputs
.stream()
.collect(Collectors.toMap(
io.kestra.core.models.flows.Output::getId,
io.kestra.core.models.flows.Output::getValue)
)
)
.orElseGet(() -> isOutputsAllowed ? this.getOutputs() : null);
if (subflowOutputs != null) {
try {
Map<String, Object> outputs = runContext.render(subflowOutputs);
FlowInputOutput flowInputOutput = ((DefaultRunContext)runContext).getApplicationContext().getBean(FlowInputOutput.class); // this is hacking
if (flow.getOutputs() != null && flowInputOutput != null) {
outputs = flowInputOutput.typedOutputs(flow, execution, outputs);
}
builder.outputs(outputs);
} catch (Exception e) {
runContext.logger().warn("Failed to extract outputs with the error: '{}'", e.getLocalizedMessage(), e);
var state = State.Type.fail(this);
Variables variables = variablesService.of(StorageContext.forTask(taskRun), builder.build());
taskRun = taskRun
.withState(state)
.withAttempts(Collections.singletonList(TaskRunAttempt.builder().state(new State().withState(state)).build()))
.withOutputs(variables);
return Optional.of(SubflowExecutionResult.builder()
.executionId(execution.getId())
.state(State.Type.FAILED)
.parentTaskRun(taskRun)
.build());
}
}
}

View File

@@ -1,4 +1,5 @@
@PluginSubGroup(
title = "HTTP",
description = "This sub-group of plugins contains tasks for making HTTP requests.",
categories = PluginSubGroup.PluginCategory.STORAGE
)

View File

@@ -9,11 +9,11 @@ import io.kestra.core.runners.DefaultRunContext;
import io.kestra.core.runners.RunContext;
import io.kestra.core.services.FlowService;
import io.swagger.v3.oas.annotations.media.Schema;
import jakarta.validation.constraints.NotNull;
import lombok.Builder;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.experimental.SuperBuilder;
import org.codehaus.commons.nullanalysis.NotNull;
import java.util.NoSuchElementException;

View File

@@ -1,4 +1,8 @@
@PluginSubGroup(categories = PluginSubGroup.PluginCategory.CORE)
@PluginSubGroup(
title = "KV",
description = "This sub-group of plugins contains tasks for interacting with the key-value (KV) store.\n",
categories = PluginSubGroup.PluginCategory.CORE
)
package io.kestra.plugin.core.kv;
import io.kestra.core.models.annotations.PluginSubGroup;

View File

@@ -0,0 +1,11 @@
<svg width="512" height="512" viewBox="0 0 512 512" fill="currentColor" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_1765_9330)">
<path d="M244.592 215.915C251.569 208.938 262.881 208.938 269.858 215.915L298.537 244.595C305.514 251.572 305.514 262.883 298.537 269.86L269.858 298.54C262.881 305.517 251.569 305.517 244.592 298.54L215.913 269.86C208.936 262.883 208.936 251.572 215.913 244.595L244.592 215.915Z" />
<path d="M376.685 215.687C383.537 208.835 394.646 208.835 401.498 215.687L430.63 244.818C437.482 251.67 437.482 262.78 430.63 269.632L401.498 298.763C394.646 305.615 383.537 305.615 376.685 298.763L347.553 269.632C340.701 262.78 340.701 251.67 347.553 244.818L376.685 215.687Z" />
<path d="M244.818 83.8243C251.671 76.9722 262.78 76.9722 269.632 83.8243L298.763 112.956C305.616 119.808 305.616 130.917 298.763 137.769L269.632 166.901C262.78 173.753 251.671 173.753 244.818 166.901L215.687 137.769C208.835 130.917 208.835 119.808 215.687 112.956L244.818 83.8243Z" />
<path d="M232.611 178.663C239.588 185.64 239.588 196.951 232.611 203.928L203.931 232.608C196.955 239.585 185.643 239.585 178.666 232.608L149.986 203.928C143.01 196.952 143.01 185.64 149.986 178.663L178.666 149.983C185.643 143.006 196.955 143.006 203.931 149.983L232.611 178.663Z" />
<path d="M166.901 244.818C173.753 251.67 173.753 262.78 166.901 269.632L137.77 298.763C130.918 305.615 119.808 305.615 112.956 298.763L83.8246 269.632C76.9725 262.78 76.9725 251.67 83.8246 244.818L112.956 215.687C119.808 208.835 130.918 208.835 137.77 215.687L166.901 244.818Z" />
<path d="M364.472 178.663C371.449 185.64 371.449 196.951 364.472 203.928L335.793 232.608C328.816 239.585 317.504 239.585 310.527 232.608L281.848 203.928C274.871 196.952 274.871 185.64 281.848 178.663L310.527 149.983C317.504 143.006 328.816 143.006 335.793 149.983L364.472 178.663Z" />
<path d="M285.45 367.015C301.037 382.602 301.037 407.873 285.45 423.46C269.863 439.047 244.591 439.047 229.004 423.46C213.417 407.873 213.417 382.602 229.004 367.015C244.591 351.428 269.863 351.428 285.45 367.015Z" />
</g>
</svg>


View File

@@ -0,0 +1,11 @@
<svg width="512" height="512" viewBox="0 0 512 512" fill="currentColor" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_1765_9330)">
<path d="M244.592 215.915C251.569 208.938 262.881 208.938 269.858 215.915L298.537 244.595C305.514 251.572 305.514 262.883 298.537 269.86L269.858 298.54C262.881 305.517 251.569 305.517 244.592 298.54L215.913 269.86C208.936 262.883 208.936 251.572 215.913 244.595L244.592 215.915Z" />
<path d="M376.685 215.687C383.537 208.835 394.646 208.835 401.498 215.687L430.63 244.818C437.482 251.67 437.482 262.78 430.63 269.632L401.498 298.763C394.646 305.615 383.537 305.615 376.685 298.763L347.553 269.632C340.701 262.78 340.701 251.67 347.553 244.818L376.685 215.687Z" />
<path d="M244.818 83.8243C251.671 76.9722 262.78 76.9722 269.632 83.8243L298.763 112.956C305.616 119.808 305.616 130.917 298.763 137.769L269.632 166.901C262.78 173.753 251.671 173.753 244.818 166.901L215.687 137.769C208.835 130.917 208.835 119.808 215.687 112.956L244.818 83.8243Z" />
<path d="M232.611 178.663C239.588 185.64 239.588 196.951 232.611 203.928L203.931 232.608C196.955 239.585 185.643 239.585 178.666 232.608L149.986 203.928C143.01 196.952 143.01 185.64 149.986 178.663L178.666 149.983C185.643 143.006 196.955 143.006 203.931 149.983L232.611 178.663Z" />
<path d="M166.901 244.818C173.753 251.67 173.753 262.78 166.901 269.632L137.77 298.763C130.918 305.615 119.808 305.615 112.956 298.763L83.8246 269.632C76.9725 262.78 76.9725 251.67 83.8246 244.818L112.956 215.687C119.808 208.835 130.918 208.835 137.77 215.687L166.901 244.818Z" />
<path d="M364.472 178.663C371.449 185.64 371.449 196.951 364.472 203.928L335.793 232.608C328.816 239.585 317.504 239.585 310.527 232.608L281.848 203.928C274.871 196.952 274.871 185.64 281.848 178.663L310.527 149.983C317.504 143.006 328.816 143.006 335.793 149.983L364.472 178.663Z" />
<path d="M285.45 367.015C301.037 382.602 301.037 407.873 285.45 423.46C269.863 439.047 244.591 439.047 229.004 423.46C213.417 407.873 213.417 382.602 229.004 367.015C244.591 351.428 269.863 351.428 285.45 367.015Z" />
</g>
</svg>


View File

@@ -112,7 +112,7 @@ class JsonSchemaGeneratorTest {
var requiredWithDefault = definitions.get("io.kestra.core.docs.JsonSchemaGeneratorTest-RequiredWithDefault");
assertThat(requiredWithDefault, is(notNullValue()));
assertThat((List<String>) requiredWithDefault.get("required"), not(contains("requiredWithDefault")));
assertThat((List<String>) requiredWithDefault.get("required"), not(containsInAnyOrder("requiredWithDefault", "anotherRequiredWithDefault")));
var properties = (Map<String, Map<String, Object>>) flow.get("properties");
var listeners = properties.get("listeners");
@@ -253,7 +253,7 @@ class JsonSchemaGeneratorTest {
void requiredAreRemovedIfThereIsADefault() {
Map<String, Object> generate = jsonSchemaGenerator.properties(Task.class, RequiredWithDefault.class);
assertThat(generate, is(not(nullValue())));
assertThat((List<String>) generate.get("required"), not(containsInAnyOrder("requiredWithDefault")));
assertThat((List<String>) generate.get("required"), not(containsInAnyOrder("requiredWithDefault", "anotherRequiredWithDefault")));
assertThat((List<String>) generate.get("required"), containsInAnyOrder("requiredWithNoDefault"));
}
@@ -466,6 +466,11 @@ class JsonSchemaGeneratorTest {
@Builder.Default
private Property<TaskWithEnum.TestClass> requiredWithDefault = Property.ofValue(TaskWithEnum.TestClass.builder().testProperty("test").build());
@PluginProperty
@NotNull
@Builder.Default
private Property<TaskWithEnum.TestClass> anotherRequiredWithDefault = Property.ofValue(TaskWithEnum.TestClass.builder().testProperty("test2").build());
@PluginProperty
@NotNull
private Property<TaskWithEnum.TestClass> requiredWithNoDefault;

View File

@@ -94,6 +94,14 @@ public class QueryFilterTest {
Arguments.of(QueryFilter.builder().field(Field.TRIGGER_ID).operation(Op.ENDS_WITH).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.TRIGGER_ID).operation(Op.CONTAINS).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.EQUALS).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.NOT_EQUALS).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.IN).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.NOT_IN).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.STARTS_WITH).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.ENDS_WITH).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.CONTAINS).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.CHILD_FILTER).operation(Op.EQUALS).build(), Resource.EXECUTION),
Arguments.of(QueryFilter.builder().field(Field.CHILD_FILTER).operation(Op.NOT_EQUALS).build(), Resource.EXECUTION),
@@ -204,6 +212,13 @@ public class QueryFilterTest {
Arguments.of(QueryFilter.builder().field(Field.TRIGGER_ID).operation(Op.REGEX).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.TRIGGER_ID).operation(Op.PREFIX).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.GREATER_THAN).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.LESS_THAN).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.GREATER_THAN_OR_EQUAL_TO).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.LESS_THAN_OR_EQUAL_TO).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.REGEX).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.EXECUTION_ID).operation(Op.PREFIX).build(), Resource.LOG),
Arguments.of(QueryFilter.builder().field(Field.CHILD_FILTER).operation(Op.GREATER_THAN).build(), Resource.EXECUTION),
Arguments.of(QueryFilter.builder().field(Field.CHILD_FILTER).operation(Op.LESS_THAN).build(), Resource.EXECUTION),
Arguments.of(QueryFilter.builder().field(Field.CHILD_FILTER).operation(Op.GREATER_THAN_OR_EQUAL_TO).build(), Resource.EXECUTION),

View File

@@ -10,6 +10,7 @@ import org.junit.jupiter.api.Test;
import java.net.URI;
import java.util.Map;
import static io.kestra.core.tenant.TenantService.MAIN_TENANT;
import static org.assertj.core.api.Assertions.assertThat;
@KestraTest
@@ -37,9 +38,9 @@ class VariablesTest {
@Test
@SuppressWarnings("unchecked")
void inStorage() {
var storageContext = StorageContext.forTask(null, "namespace", "flow", "execution", "task", "taskRun", null);
var storageContext = StorageContext.forTask(MAIN_TENANT, "namespace", "flow", "execution", "task", "taskRun", null);
var internalStorage = new InternalStorage(storageContext, storageInterface);
Variables.StorageContext variablesContext = new Variables.StorageContext(null, "namespace");
Variables.StorageContext variablesContext = new Variables.StorageContext(MAIN_TENANT, "namespace");
// simple
Map<String, Object> outputs = Map.of("key", "value");

View File

@@ -44,6 +44,7 @@ import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;
import java.util.*;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import static io.kestra.core.models.flows.FlowScope.USER;
@@ -198,6 +199,7 @@ public abstract class AbstractExecutionRepositoryTest {
return Stream.of(
QueryFilter.builder().field(Field.TIME_RANGE).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.TRIGGER_ID).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.EXECUTION_ID).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.WORKER_ID).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.EXISTING_ONLY).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.MIN_LEVEL).value(Level.DEBUG).operation(Op.EQUALS).build()
@@ -740,4 +742,16 @@ public abstract class AbstractExecutionRepositoryTest {
executions = executionRepository.find(Pageable.from(1, 10), MAIN_TENANT, filters);
assertThat(executions.size()).isEqualTo(0L);
}
@Test
protected void shouldReturnLastExecutionsWhenInputsAreNull() {
inject();
List<Execution> lastExecutions = executionRepository.lastExecutions(MAIN_TENANT, null);
assertThat(lastExecutions).isNotEmpty();
Set<String> flowIds = lastExecutions.stream().map(Execution::getFlowId).collect(Collectors.toSet());
assertThat(flowIds.size()).isEqualTo(lastExecutions.size());
}
}

View File

@@ -8,7 +8,6 @@ import io.kestra.core.models.flows.State;
import io.kestra.core.runners.RunContext;
import io.kestra.core.runners.RunContextFactory;
import io.kestra.core.services.ExecutionService;
import io.kestra.core.storages.StorageInterface;
import io.kestra.plugin.core.debug.Return;
import io.kestra.core.utils.IdUtils;
import io.kestra.core.junit.annotations.KestraTest;
@@ -26,6 +25,7 @@ import java.time.ZonedDateTime;
import java.util.Map;
import java.util.Objects;
import static io.kestra.core.tenant.TenantService.MAIN_TENANT;
import static org.assertj.core.api.Assertions.assertThat;
@KestraTest
@@ -39,9 +39,6 @@ public abstract class AbstractExecutionServiceTest {
@Inject
LogRepositoryInterface logRepository;
@Inject
StorageInterface storageInterface;
@Inject
RunContextFactory runContextFactory;
@@ -56,12 +53,14 @@ public abstract class AbstractExecutionServiceTest {
Flow flow = Flow.builder()
.namespace("io.kestra.test")
.id("abc")
.tenantId(MAIN_TENANT)
.revision(1)
.build();
Execution execution = Execution
.builder()
.id(IdUtils.create())
.tenantId(MAIN_TENANT)
.state(state)
.flowId(flow.getId())
.namespace(flow.getNamespace())
@@ -74,6 +73,7 @@ public abstract class AbstractExecutionServiceTest {
.builder()
.namespace(flow.getNamespace())
.id(IdUtils.create())
.tenantId(MAIN_TENANT)
.executionId(execution.getId())
.flowId(flow.getId())
.taskId(task.getId())
@@ -94,6 +94,7 @@ public abstract class AbstractExecutionServiceTest {
for (int i = 0; i < 10; i++) {
logRepository.save(LogEntry.builder()
.executionId(execution.getId())
.tenantId(MAIN_TENANT)
.timestamp(Instant.now())
.message("Message " + i)
.flowId(flow.getId())
@@ -108,7 +109,7 @@ public abstract class AbstractExecutionServiceTest {
true,
true,
true,
null,
MAIN_TENANT,
flow.getNamespace(),
flow.getId(),
null,
@@ -126,7 +127,7 @@ public abstract class AbstractExecutionServiceTest {
true,
true,
true,
null,
MAIN_TENANT,
flow.getNamespace(),
flow.getId(),
null,

View File

@@ -160,6 +160,7 @@ public abstract class AbstractFlowRepositoryTest {
QueryFilter.builder().field(Field.TIME_RANGE).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.TRIGGER_EXECUTION_ID).value("executionTriggerId").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.TRIGGER_ID).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.EXECUTION_ID).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.CHILD_FILTER).value(ChildFilter.CHILD).operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.WORKER_ID).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.EXISTING_ONLY).value("test").operation(Op.EQUALS).build(),

View File

@@ -34,6 +34,7 @@ import static io.kestra.core.models.flows.FlowScope.SYSTEM;
import static io.kestra.core.models.flows.FlowScope.USER;
import static io.kestra.core.tenant.TenantService.MAIN_TENANT;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatReflectiveOperationException;
import static org.junit.jupiter.api.Assertions.assertThrows;
@KestraTest
@@ -42,11 +43,15 @@ public abstract class AbstractLogRepositoryTest {
protected LogRepositoryInterface logRepository;
protected static LogEntry.LogEntryBuilder logEntry(Level level) {
return logEntry(level, IdUtils.create());
}
protected static LogEntry.LogEntryBuilder logEntry(Level level, String executionId) {
return LogEntry.builder()
.flowId("flowId")
.namespace("io.kestra.unittest")
.taskId("taskId")
.executionId("executionId")
.executionId(executionId)
.taskRunId(IdUtils.create())
.attemptNumber(0)
.timestamp(Instant.now())
@@ -60,13 +65,36 @@ public abstract class AbstractLogRepositoryTest {
@ParameterizedTest
@MethodSource("filterCombinations")
void should_find_all(QueryFilter filter){
logRepository.save(logEntry(Level.INFO).build());
logRepository.save(logEntry(Level.INFO, "executionId").build());
ArrayListTotal<LogEntry> entries = logRepository.find(Pageable.UNPAGED, MAIN_TENANT, List.of(filter));
assertThat(entries).hasSize(1);
}
@ParameterizedTest
@MethodSource("filterCombinations")
void should_find_async(QueryFilter filter){
logRepository.save(logEntry(Level.INFO, "executionId").build());
Flux<LogEntry> find = logRepository.findAsync(MAIN_TENANT, List.of(filter));
List<LogEntry> logEntries = find.collectList().block();
assertThat(logEntries).hasSize(1);
}
@ParameterizedTest
@MethodSource("filterCombinations")
void should_delete_with_filter(QueryFilter filter){
logRepository.save(logEntry(Level.INFO, "executionId").build());
logRepository.deleteByFilters(MAIN_TENANT, List.of(filter));
assertThat(logRepository.findAllAsync(MAIN_TENANT).collectList().block()).isEmpty();
}
static Stream<QueryFilter> filterCombinations() {
return Stream.of(
QueryFilter.builder().field(Field.QUERY).value("flowId").operation(Op.EQUALS).build(),
@@ -105,6 +133,13 @@ public abstract class AbstractLogRepositoryTest {
QueryFilter.builder().field(Field.TRIGGER_ID).value("Id").operation(Op.ENDS_WITH).build(),
QueryFilter.builder().field(Field.TRIGGER_ID).value(List.of("triggerId")).operation(Op.IN).build(),
QueryFilter.builder().field(Field.TRIGGER_ID).value(List.of("anotherId")).operation(Op.NOT_IN).build(),
QueryFilter.builder().field(Field.EXECUTION_ID).value("executionId").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.EXECUTION_ID).value("anotherId").operation(Op.NOT_EQUALS).build(),
QueryFilter.builder().field(Field.EXECUTION_ID).value("xecution").operation(Op.CONTAINS).build(),
QueryFilter.builder().field(Field.EXECUTION_ID).value("execution").operation(Op.STARTS_WITH).build(),
QueryFilter.builder().field(Field.EXECUTION_ID).value("Id").operation(Op.ENDS_WITH).build(),
QueryFilter.builder().field(Field.EXECUTION_ID).value(List.of("executionId")).operation(Op.IN).build(),
QueryFilter.builder().field(Field.EXECUTION_ID).value(List.of("anotherId")).operation(Op.NOT_IN).build(),
QueryFilter.builder().field(Field.MIN_LEVEL).value(Level.DEBUG).operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.MIN_LEVEL).value(Level.ERROR).operation(Op.NOT_EQUALS).build()
);
@@ -284,36 +319,6 @@ public abstract class AbstractLogRepositoryTest {
assertThat(find.size()).isZero();
}
@Test
void findAsync() {
logRepository.save(logEntry(Level.INFO).build());
logRepository.save(logEntry(Level.ERROR).build());
logRepository.save(logEntry(Level.WARN).build());
logRepository.save(logEntry(Level.INFO).executionKind(ExecutionKind.TEST).build()); // should not be visible here
ZonedDateTime startDate = ZonedDateTime.now().minusSeconds(1);
Flux<LogEntry> find = logRepository.findAsync(MAIN_TENANT, "io.kestra.unittest", null, null, Level.INFO, startDate);
List<LogEntry> logEntries = find.collectList().block();
assertThat(logEntries).hasSize(3);
find = logRepository.findAsync(MAIN_TENANT, null, null, null, Level.ERROR, startDate);
logEntries = find.collectList().block();
assertThat(logEntries).hasSize(1);
find = logRepository.findAsync(MAIN_TENANT, "io.kestra.unittest", "flowId", null, Level.ERROR, startDate);
logEntries = find.collectList().block();
assertThat(logEntries).hasSize(1);
find = logRepository.findAsync(MAIN_TENANT, "io.kestra.unused", "flowId", "executionId", Level.INFO, startDate);
logEntries = find.collectList().block();
assertThat(logEntries).hasSize(0);
find = logRepository.findAsync(MAIN_TENANT, null, null, null, Level.INFO, startDate.plusSeconds(2));
logEntries = find.collectList().block();
assertThat(logEntries).hasSize(0);
}
@Test
void findAllAsync() {
logRepository.save(logEntry(Level.INFO).build());

View File

@@ -101,6 +101,7 @@ public abstract class AbstractTriggerRepositoryTest {
QueryFilter.builder().field(Field.STATE).value(State.Type.RUNNING).operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.TIME_RANGE).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.TRIGGER_EXECUTION_ID).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.EXECUTION_ID).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.CHILD_FILTER).value(ChildFilter.CHILD).operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.EXISTING_ONLY).value("test").operation(Op.EQUALS).build(),
QueryFilter.builder().field(Field.MIN_LEVEL).value(Level.DEBUG).operation(Op.EQUALS).build()

View File

@@ -387,6 +387,13 @@ public abstract class AbstractRunnerTest {
forEachItemCaseTest.forEachItemInIf();
}
@Test
@LoadFlows({"flows/valids/for-each-item-subflow-after-execution.yaml",
"flows/valids/for-each-item-after-execution.yaml"})
protected void forEachItemWithAfterExecution() throws Exception {
forEachItemCaseTest.forEachItemWithAfterExecution();
}
@Test
@LoadFlows({"flows/valids/flow-concurrency-cancel.yml"})
void concurrencyCancel() throws Exception {
@@ -423,6 +430,18 @@ public abstract class AbstractRunnerTest {
flowConcurrencyCaseTest.flowConcurrencyWithForEachItem();
}
@Test
@LoadFlows({"flows/valids/flow-concurrency-queue-fail.yml"})
protected void concurrencyQueueRestarted() throws Exception {
flowConcurrencyCaseTest.flowConcurrencyQueueRestarted();
}
@Test
@LoadFlows({"flows/valids/flow-concurrency-queue-after-execution.yml"})
void concurrencyQueueAfterExecution() throws Exception {
flowConcurrencyCaseTest.flowConcurrencyQueueAfterExecution();
}
@Test
@ExecuteFlow("flows/valids/executable-fail.yml")
void badExecutable(Execution execution) {

View File

@@ -8,6 +8,7 @@ import io.kestra.core.queues.QueueException;
import io.kestra.core.queues.QueueFactoryInterface;
import io.kestra.core.queues.QueueInterface;
import io.kestra.core.repositories.FlowRepositoryInterface;
import io.kestra.core.services.ExecutionService;
import io.kestra.core.storages.StorageInterface;
import io.kestra.core.utils.TestsUtils;
import jakarta.inject.Inject;
@@ -53,6 +54,9 @@ public class FlowConcurrencyCaseTest {
@Named(QueueFactoryInterface.EXECUTION_NAMED)
protected QueueInterface<Execution> executionQueue;
@Inject
private ExecutionService executionService;
public void flowConcurrencyCancel() throws TimeoutException, QueueException, InterruptedException {
Execution execution1 = runnerUtils.runOneUntilRunning(MAIN_TENANT, "io.kestra.tests", "flow-concurrency-cancel", null, null, Duration.ofSeconds(30));
Execution execution2 = runnerUtils.runOne(MAIN_TENANT, "io.kestra.tests", "flow-concurrency-cancel");
@@ -278,6 +282,115 @@ public class FlowConcurrencyCaseTest {
assertThat(terminated.getState().getCurrent()).isEqualTo(Type.SUCCESS);
}
public void flowConcurrencyQueueRestarted() throws Exception {
Execution execution1 = runnerUtils.runOneUntilRunning(MAIN_TENANT, "io.kestra.tests", "flow-concurrency-queue-fail", null, null, Duration.ofSeconds(30));
Flow flow = flowRepository
.findById(MAIN_TENANT, "io.kestra.tests", "flow-concurrency-queue-fail", Optional.empty())
.orElseThrow();
Execution execution2 = Execution.newExecution(flow, null, null, Optional.empty());
executionQueue.emit(execution2);
assertThat(execution1.getState().isRunning()).isTrue();
assertThat(execution2.getState().getCurrent()).isEqualTo(State.Type.CREATED);
var executionResult1 = new AtomicReference<Execution>();
var executionResult2 = new AtomicReference<Execution>();
CountDownLatch latch1 = new CountDownLatch(2);
AtomicReference<Execution> failedExecution = new AtomicReference<>();
CountDownLatch latch2 = new CountDownLatch(1);
CountDownLatch latch3 = new CountDownLatch(1);
Flux<Execution> receive = TestsUtils.receive(executionQueue, e -> {
if (e.getLeft().getId().equals(execution1.getId())) {
executionResult1.set(e.getLeft());
if (e.getLeft().getState().getCurrent() == Type.FAILED) {
failedExecution.set(e.getLeft());
latch1.countDown();
}
}
if (e.getLeft().getId().equals(execution2.getId())) {
executionResult2.set(e.getLeft());
if (e.getLeft().getState().getCurrent() == State.Type.RUNNING) {
latch2.countDown();
}
if (e.getLeft().getState().getCurrent() == Type.FAILED) {
latch3.countDown();
}
}
});
assertTrue(latch2.await(1, TimeUnit.MINUTES));
assertThat(failedExecution.get()).isNotNull();
// here the first fail and the second is now running.
// we restart the first one, it should be queued then fail again.
Execution restarted = executionService.restart(failedExecution.get(), null);
executionQueue.emit(restarted);
assertTrue(latch3.await(1, TimeUnit.MINUTES));
assertTrue(latch1.await(1, TimeUnit.MINUTES));
receive.blockLast();
assertThat(executionResult1.get().getState().getCurrent()).isEqualTo(Type.FAILED);
// it should have been queued after restarted
assertThat(executionResult1.get().getState().getHistories().stream().anyMatch(history -> history.getState() == Type.RESTARTED)).isTrue();
assertThat(executionResult1.get().getState().getHistories().stream().anyMatch(history -> history.getState() == Type.QUEUED)).isTrue();
assertThat(executionResult2.get().getState().getCurrent()).isEqualTo(Type.FAILED);
assertThat(executionResult2.get().getState().getHistories().getFirst().getState()).isEqualTo(State.Type.CREATED);
assertThat(executionResult2.get().getState().getHistories().get(1).getState()).isEqualTo(State.Type.QUEUED);
assertThat(executionResult2.get().getState().getHistories().get(2).getState()).isEqualTo(State.Type.RUNNING);
}
public void flowConcurrencyQueueAfterExecution() throws TimeoutException, QueueException, InterruptedException {
Execution execution1 = runnerUtils.runOneUntilRunning(MAIN_TENANT, "io.kestra.tests", "flow-concurrency-queue-after-execution", null, null, Duration.ofSeconds(30));
Flow flow = flowRepository
.findById(MAIN_TENANT, "io.kestra.tests", "flow-concurrency-queue-after-execution", Optional.empty())
.orElseThrow();
Execution execution2 = Execution.newExecution(flow, null, null, Optional.empty());
executionQueue.emit(execution2);
assertThat(execution1.getState().isRunning()).isTrue();
assertThat(execution2.getState().getCurrent()).isEqualTo(State.Type.CREATED);
var executionResult1 = new AtomicReference<Execution>();
var executionResult2 = new AtomicReference<Execution>();
CountDownLatch latch1 = new CountDownLatch(1);
CountDownLatch latch2 = new CountDownLatch(1);
CountDownLatch latch3 = new CountDownLatch(1);
Flux<Execution> receive = TestsUtils.receive(executionQueue, e -> {
if (e.getLeft().getId().equals(execution1.getId())) {
executionResult1.set(e.getLeft());
if (e.getLeft().getState().getCurrent() == State.Type.SUCCESS) {
latch1.countDown();
}
}
if (e.getLeft().getId().equals(execution2.getId())) {
executionResult2.set(e.getLeft());
if (e.getLeft().getState().getCurrent() == State.Type.RUNNING) {
latch2.countDown();
}
if (e.getLeft().getState().getCurrent() == State.Type.SUCCESS) {
latch3.countDown();
}
}
});
assertTrue(latch1.await(1, TimeUnit.MINUTES));
assertTrue(latch2.await(1, TimeUnit.MINUTES));
assertTrue(latch3.await(1, TimeUnit.MINUTES));
receive.blockLast();
assertThat(executionResult1.get().getState().getCurrent()).isEqualTo(State.Type.SUCCESS);
assertThat(executionResult2.get().getState().getCurrent()).isEqualTo(State.Type.SUCCESS);
assertThat(executionResult2.get().getState().getHistories().getFirst().getState()).isEqualTo(State.Type.CREATED);
assertThat(executionResult2.get().getState().getHistories().get(1).getState()).isEqualTo(State.Type.QUEUED);
assertThat(executionResult2.get().getState().getHistories().get(2).getState()).isEqualTo(State.Type.RUNNING);
}
private URI storageUpload() throws URISyntaxException, IOException {
File tempFile = File.createTempFile("file", ".txt");

View File
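The two new concurrency cases above rely on fixture flows (flow-concurrency-queue-fail and flow-concurrency-queue-after-execution) whose YAML sources are not part of this diff. Below is a minimal sketch of what such a fixture could contain, written as a Java text block in the same style the FlowServiceTest diff uses further down; the concurrency values and the Fail task type are assumptions, not code taken from the repository.

class FlowConcurrencyQueueFixtureSketch {
    // Hypothetical content of flows/valids/flow-concurrency-queue-fail.yml (assumed, not from this diff):
    // a flow limited to one running execution, queuing the rest, whose single task always fails,
    // which is the shape the restart-then-queue scenario above exercises.
    static final String SOURCE = """
        id: flow-concurrency-queue-fail
        namespace: io.kestra.tests
        concurrency:
          limit: 1        # assumed: only one execution may run at a time
          behavior: QUEUE # assumed: additional executions are held in the QUEUED state
        tasks:
          - id: fail
            type: io.kestra.plugin.core.execution.Fail # assumed always-failing task
        """;
}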

@@ -1,5 +1,7 @@
package io.kestra.core.runners;
import static io.kestra.core.tenant.TenantService.MAIN_TENANT;
import io.kestra.core.junit.annotations.KestraTest;
import io.kestra.core.models.executions.Execution;
import io.kestra.core.models.flows.DependsOn;
@@ -21,7 +23,6 @@ import org.reactivestreams.Publisher;
import reactor.core.publisher.Mono;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.nio.ByteBuffer;
@@ -200,7 +201,7 @@ class FlowInputOutputTest {
}
@Test
void shouldNotUploadFileInputAfterValidation() throws IOException {
void shouldNotUploadFileInputAfterValidation() {
// Given
FileInput input = FileInput
.builder()
@@ -215,7 +216,7 @@ class FlowInputOutputTest {
// Then
Assertions.assertNull(values.getFirst().exception());
Assertions.assertFalse(storageInterface.exists(null, null, URI.create(values.getFirst().value().toString())));
Assertions.assertFalse(storageInterface.exists(MAIN_TENANT, null, URI.create(values.getFirst().value().toString())));
}
@Test

View File

@@ -9,6 +9,7 @@ import io.kestra.core.utils.TestsUtils;
import io.kestra.core.junit.annotations.KestraTest;
import jakarta.inject.Inject;
import jakarta.inject.Named;
import org.junit.jupiter.api.RepeatedTest;
import org.junit.jupiter.api.Test;
import org.slf4j.Logger;
import org.slf4j.event.Level;
@@ -109,7 +110,8 @@ class RunContextLoggerTest {
logger.info("test myawesomepassmyawesomepass myawesomepass myawesomepassmyawesomepass");
logger.warn("test {}", URI.create("http://it-s.secret"));
matchingLog = TestsUtils.awaitLogs(logs, 3);
// the 3 logs will create 4 log entries as exceptions stacktraces are logged separately at the TRACE level
matchingLog = TestsUtils.awaitLogs(logs, 4);
receive.blockLast();
assertThat(matchingLog.stream().filter(logEntry -> logEntry.getLevel().equals(Level.DEBUG)).findFirst().orElseThrow().getMessage()).isEqualTo("test john@****** test");
assertThat(matchingLog.stream().filter(logEntry -> logEntry.getLevel().equals(Level.TRACE)).findFirst().orElseThrow().getMessage()).contains("exception from doe.com");

View File

@@ -83,4 +83,24 @@ class RunContextPropertyTest {
runContextProperty = new RunContextProperty<>(Property.<Map<String, String>>builder().expression("{ \"key\": \"{{ key }}\"}").build(), runContext);
assertThat(runContextProperty.asMap(String.class, String.class, Map.of("key", "value"))).containsEntry("key", "value");
}
@Test
void asShouldReturnCachedRenderedProperty() throws IllegalVariableEvaluationException {
var runContext = runContextFactory.of();
var runContextProperty = new RunContextProperty<>(Property.<String>builder().expression("{{ variable }}").build(), runContext);
assertThat(runContextProperty.as(String.class, Map.of("variable", "value1"))).isEqualTo(Optional.of("value1"));
assertThat(runContextProperty.as(String.class, Map.of("variable", "value2"))).isEqualTo(Optional.of("value1"));
}
@Test
void asShouldNotReturnCachedRenderedPropertyWithSkipCache() throws IllegalVariableEvaluationException {
var runContext = runContextFactory.of();
var runContextProperty = new RunContextProperty<>(Property.<String>builder().expression("{{ variable }}").build(), runContext);
assertThat(runContextProperty.as(String.class, Map.of("variable", "value1"))).isEqualTo(Optional.of("value1"));
assertThat(runContextProperty.skipCache().as(String.class, Map.of("variable", "value2"))).isEqualTo(Optional.of("value2"));
}
}

View File
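The two tests above pin down the caching contract: the first rendered value of a RunContextProperty is reused until skipCache() is called. Here is a small sketch of the situation where that matters, using only the API visible in this diff (RunContextProperty, Property.builder().expression(...), as(...), skipCache()). The per-item loop is an illustrative assumption, and the snippet is meant to sit inside a test method like the ones above, declaring throws IllegalVariableEvaluationException.

// Sketch: render the same expression once per item. Without skipCache() every
// iteration would get the first cached value ("a"); skipCache() forces a fresh render.
var runContext = runContextFactory.of();
var perItem = new RunContextProperty<>(
    Property.<String>builder().expression("{{ item }}").build(),
    runContext
);
for (String item : List.of("a", "b", "c")) {
    Optional<String> rendered = perItem.skipCache().as(String.class, Map.of("item", item));
    assertThat(rendered).isEqualTo(Optional.of(item)); // each item renders to itself
}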

@@ -5,6 +5,7 @@ import io.kestra.core.junit.annotations.LoadFlows;
import io.kestra.core.models.annotations.Plugin;
import io.kestra.core.models.executions.Execution;
import io.kestra.core.models.flows.State;
import io.kestra.core.models.property.Property;
import io.kestra.core.models.tasks.RunnableTask;
import io.kestra.core.models.tasks.Task;
import io.kestra.core.queues.QueueException;
@@ -18,6 +19,7 @@ import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;
import java.util.Map;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicInteger;
@@ -77,8 +79,12 @@ public class TaskCacheTest {
@Plugin
public static class CounterTask extends Task implements RunnableTask<CounterTask.Output> {
private String workingDir;
@Override
public Output run(RunContext runContext) throws Exception {
Map<String, Object> variables = Map.of("workingDir", runContext.workingDir().path().toString());
runContext.render(this.workingDir, variables);
return Output.builder()
.counter(COUNTER.incrementAndGet())
.build();

View File

@@ -0,0 +1,39 @@
package io.kestra.core.runners.pebble.functions;
import static io.kestra.core.tenant.TenantService.MAIN_TENANT;
import io.kestra.core.utils.IdUtils;
import java.util.Map;
public class FunctionTestUtils {
public static final String NAMESPACE = "io.kestra.tests";
public static Map<String, Object> getVariables() {
return getVariables(NAMESPACE);
}
public static Map<String, Object> getVariables(String namespace) {
return Map.of(
"flow", Map.of(
"id", "kv",
"tenantId", MAIN_TENANT,
"namespace", namespace)
);
}
public static Map<String, Object> getVariablesWithExecution(String namespace) {
return getVariablesWithExecution(namespace, IdUtils.create());
}
public static Map<String, Object> getVariablesWithExecution(String namespace, String executionId) {
return Map.of(
"flow", Map.of(
"id", "flow",
"namespace", namespace,
"tenantId", MAIN_TENANT),
"execution", Map.of("id", executionId)
);
}
}

View File

@@ -1,39 +1,24 @@
package io.kestra.core.runners.pebble.functions;
import static io.kestra.core.runners.pebble.functions.FunctionTestUtils.getVariables;
import static io.kestra.core.tenant.TenantService.MAIN_TENANT;
import static org.assertj.core.api.Assertions.assertThat;
import io.kestra.core.exceptions.IllegalVariableEvaluationException;
import io.kestra.core.junit.annotations.KestraTest;
import io.kestra.core.junit.annotations.LoadFlows;
import io.kestra.core.models.executions.Execution;
import io.kestra.core.models.executions.LogEntry;
import io.kestra.core.models.executions.TaskRun;
import io.kestra.core.models.flows.Flow;
import io.kestra.core.models.flows.State;
import io.kestra.core.models.property.Property;
import io.kestra.core.queues.QueueException;
import io.kestra.core.queues.QueueInterface;
import io.kestra.core.runners.RunnerUtils;
import io.kestra.core.runners.VariableRenderer;
import io.kestra.core.storages.StorageContext;
import io.kestra.core.storages.StorageInterface;
import io.kestra.core.storages.kv.InternalKVStore;
import io.kestra.core.storages.kv.KVStore;
import io.kestra.core.storages.kv.KVValueAndMetadata;
import io.kestra.core.utils.TestsUtils;
import io.kestra.plugin.core.debug.Return;
import io.pebbletemplates.pebble.error.PebbleException;
import jakarta.inject.Inject;
import java.io.IOException;
import java.net.URI;
import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeoutException;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import reactor.core.publisher.Flux;
@KestraTest(startRunner = true)
public class KvFunctionTest {
@@ -46,20 +31,16 @@ public class KvFunctionTest {
@BeforeEach
void reset() throws IOException {
storageInterface.deleteByPrefix(null, null, URI.create(StorageContext.kvPrefix("io.kestra.tests")));
storageInterface.deleteByPrefix(MAIN_TENANT, null, URI.create(StorageContext.kvPrefix("io.kestra.tests")));
}
@Test
void shouldGetValueFromKVGivenExistingKey() throws IllegalVariableEvaluationException, IOException {
// Given
KVStore kv = new InternalKVStore(null, "io.kestra.tests", storageInterface);
KVStore kv = new InternalKVStore(MAIN_TENANT, "io.kestra.tests", storageInterface);
kv.put("my-key", new KVValueAndMetadata(null, Map.of("field", "value")));
Map<String, Object> variables = Map.of(
"flow", Map.of(
"id", "kv",
"namespace", "io.kestra.tests")
);
Map<String, Object> variables = getVariables("io.kestra.tests");
// When
String rendered = variableRenderer.render("{{ kv('my-key') }}", variables);
@@ -71,17 +52,13 @@ public class KvFunctionTest {
@Test
void shouldGetValueFromKVGivenExistingKeyWithInheritance() throws IllegalVariableEvaluationException, IOException {
// Given
KVStore kv = new InternalKVStore(null, "my.company", storageInterface);
KVStore kv = new InternalKVStore(MAIN_TENANT, "my.company", storageInterface);
kv.put("my-key", new KVValueAndMetadata(null, Map.of("field", "value")));
KVStore firstKv = new InternalKVStore(null, "my", storageInterface);
KVStore firstKv = new InternalKVStore(MAIN_TENANT, "my", storageInterface);
firstKv.put("my-key", new KVValueAndMetadata(null, Map.of("field", "firstValue")));
Map<String, Object> variables = Map.of(
"flow", Map.of(
"id", "kv",
"namespace", "my.company.team")
);
Map<String, Object> variables = getVariables("my.company.team");
// When
String rendered = variableRenderer.render("{{ kv('my-key') }}", variables);
@@ -93,14 +70,10 @@ public class KvFunctionTest {
@Test
void shouldNotGetValueFromKVWithGivenNamespaceAndInheritance() throws IOException {
// Given
KVStore kv = new InternalKVStore(null, "kv", storageInterface);
KVStore kv = new InternalKVStore(MAIN_TENANT, "kv", storageInterface);
kv.put("my-key", new KVValueAndMetadata(null, Map.of("field", "value")));
Map<String, Object> variables = Map.of(
"flow", Map.of(
"id", "kv",
"namespace", "my.company.team")
);
Map<String, Object> variables = getVariables("my.company.team");
// When
Assertions.assertThrows(IllegalVariableEvaluationException.class, () ->
@@ -110,14 +83,10 @@ public class KvFunctionTest {
@Test
void shouldGetValueFromKVGivenExistingAndNamespace() throws IllegalVariableEvaluationException, IOException {
// Given
KVStore kv = new InternalKVStore(null, "kv", storageInterface);
KVStore kv = new InternalKVStore(MAIN_TENANT, "kv", storageInterface);
kv.put("my-key", new KVValueAndMetadata(null, Map.of("field", "value")));
Map<String, Object> variables = Map.of(
"flow", Map.of(
"id", "kv",
"namespace", "io.kestra.tests")
);
Map<String, Object> variables = getVariables("io.kestra.tests");
// When
String rendered = variableRenderer.render("{{ kv('my-key', namespace='kv') }}", variables);
@@ -129,11 +98,7 @@ public class KvFunctionTest {
@Test
void shouldGetEmptyGivenNonExistingKeyAndErrorOnMissingFalse() throws IllegalVariableEvaluationException {
// Given
Map<String, Object> variables = Map.of(
"flow", Map.of(
"id", "kv",
"namespace", "io.kestra.tests")
);
Map<String, Object> variables = getVariables("io.kestra.tests");
// When
String rendered = variableRenderer.render("{{ kv('my-key', errorOnMissing=false) }}", variables);
@@ -145,11 +110,7 @@ public class KvFunctionTest {
@Test
void shouldFailGivenNonExistingKeyAndErrorOnMissingTrue() {
// Given
Map<String, Object> variables = Map.of(
"flow", Map.of(
"id", "kv",
"namespace", "io.kestra.tests")
);
Map<String, Object> variables = getVariables("io.kestra.tests");
// When
IllegalVariableEvaluationException exception = Assertions.assertThrows(IllegalVariableEvaluationException.class, () -> {
@@ -163,11 +124,7 @@ public class KvFunctionTest {
@Test
void shouldFailGivenNonExistingKeyUsingDefaults() {
// Given
Map<String, Object> variables = Map.of(
"flow", Map.of(
"id", "kv",
"namespace", "io.kestra.tests")
);
Map<String, Object> variables = getVariables("io.kestra.tests");
// When
IllegalVariableEvaluationException exception = Assertions.assertThrows(IllegalVariableEvaluationException.class, () -> {
variableRenderer.render("{{ kv('my-key') }}", variables);
@@ -176,4 +133,5 @@ public class KvFunctionTest {
// Then
assertThat(exception.getMessage()).isEqualTo("io.pebbletemplates.pebble.error.PebbleException: The key 'my-key' does not exist in the namespace 'io.kestra.tests'. ({{ kv('my-key') }}:1)");
}
}

View File

@@ -20,6 +20,9 @@ import java.net.URI;
import java.nio.file.Files;
import java.util.Map;
import static io.kestra.core.runners.pebble.functions.FunctionTestUtils.NAMESPACE;
import static io.kestra.core.runners.pebble.functions.FunctionTestUtils.getVariables;
import static io.kestra.core.runners.pebble.functions.FunctionTestUtils.getVariablesWithExecution;
import static io.kestra.core.tenant.TenantService.MAIN_TENANT;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertThrows;
@@ -35,48 +38,39 @@ class ReadFileFunctionTest {
@Test
void readNamespaceFile() throws IllegalVariableEvaluationException, IOException {
String namespace = "io.kestra.tests";
String filePath = "file.txt";
storageInterface.createDirectory(MAIN_TENANT, namespace, URI.create(StorageContext.namespaceFilePrefix(namespace)));
storageInterface.put(MAIN_TENANT, namespace, URI.create(StorageContext.namespaceFilePrefix(namespace) + "/" + filePath), new ByteArrayInputStream("Hello from {{ flow.namespace }}".getBytes()));
storageInterface.createDirectory(MAIN_TENANT, NAMESPACE, URI.create(StorageContext.namespaceFilePrefix(NAMESPACE)));
storageInterface.put(MAIN_TENANT, NAMESPACE, URI.create(StorageContext.namespaceFilePrefix(NAMESPACE) + "/" + filePath), new ByteArrayInputStream("Hello from {{ flow.namespace }}".getBytes()));
String render = variableRenderer.render("{{ render(read('" + filePath + "')) }}", Map.of("flow", Map.of("namespace", namespace, "tenantId", MAIN_TENANT)));
assertThat(render).isEqualTo("Hello from " + namespace);
String render = variableRenderer.render("{{ render(read('" + filePath + "')) }}", getVariables());
assertThat(render).isEqualTo("Hello from " + NAMESPACE);
}
@Test
void readNamespaceFileFromURI() throws IllegalVariableEvaluationException, IOException {
String namespace = "io.kestra.tests";
String filePath = "file.txt";
storageInterface.createDirectory(MAIN_TENANT, namespace, URI.create(StorageContext.namespaceFilePrefix(namespace)));
storageInterface.put(MAIN_TENANT, namespace, URI.create(StorageContext.namespaceFilePrefix(namespace) + "/" + filePath), new ByteArrayInputStream("Hello from {{ flow.namespace }}".getBytes()));
storageInterface.createDirectory(MAIN_TENANT, NAMESPACE, URI.create(StorageContext.namespaceFilePrefix(NAMESPACE)));
storageInterface.put(MAIN_TENANT, NAMESPACE, URI.create(StorageContext.namespaceFilePrefix(NAMESPACE) + "/" + filePath), new ByteArrayInputStream("Hello from {{ flow.namespace }}".getBytes()));
Map<String, Object> variables = Map.of(
"flow", Map.of(
"id", "flow",
"namespace", namespace,
"tenantId", MAIN_TENANT),
"execution", Map.of("id", IdUtils.create())
);
Map<String, Object> variables = getVariablesWithExecution(NAMESPACE);
String render = variableRenderer.render("{{ render(read(fileURI('" + filePath + "'))) }}", variables);
assertThat(render).isEqualTo("Hello from " + namespace);
assertThat(render).isEqualTo("Hello from " + NAMESPACE);
}
@Test
void readNamespaceFileWithNamespace() throws IllegalVariableEvaluationException, IOException {
String namespace = "io.kestra.tests";
String filePath = "file.txt";
storageInterface.createDirectory(MAIN_TENANT, namespace, URI.create(StorageContext.namespaceFilePrefix(namespace)));
storageInterface.put(MAIN_TENANT, namespace, URI.create(StorageContext.namespaceFilePrefix(namespace) + "/" + filePath), new ByteArrayInputStream("Hello but not from flow.namespace".getBytes()));
storageInterface.createDirectory(MAIN_TENANT, NAMESPACE, URI.create(StorageContext.namespaceFilePrefix(NAMESPACE)));
storageInterface.put(MAIN_TENANT, NAMESPACE, URI.create(StorageContext.namespaceFilePrefix(NAMESPACE) + "/" + filePath), new ByteArrayInputStream("Hello but not from flow.namespace".getBytes()));
String render = variableRenderer.render("{{ read('" + filePath + "', namespace='" + namespace + "') }}", Map.of("flow", Map.of("namespace", "flow.namespace", "tenantId", MAIN_TENANT)));
String render = variableRenderer.render("{{ read('" + filePath + "', namespace='" + NAMESPACE + "') }}", getVariables("different.namespace"));
assertThat(render).isEqualTo("Hello but not from flow.namespace");
}
@Test
void readUnknownNamespaceFile() {
IllegalVariableEvaluationException illegalVariableEvaluationException = assertThrows(IllegalVariableEvaluationException.class, () -> variableRenderer.render("{{ read('unknown.txt') }}", Map.of("flow", Map.of("namespace", "io.kestra.tests"))));
IllegalVariableEvaluationException illegalVariableEvaluationException = assertThrows(IllegalVariableEvaluationException.class, () -> variableRenderer.render("{{ read('unknown.txt') }}", getVariables()));
assertThat(illegalVariableEvaluationException.getCause().getCause().getClass()).isEqualTo(FileNotFoundException.class);
}
@@ -90,13 +84,7 @@ class ReadFileFunctionTest {
URI internalStorageFile = storageInterface.put(MAIN_TENANT, namespace, internalStorageURI, new ByteArrayInputStream("Hello from a task output".getBytes()));
// test for an authorized execution
Map<String, Object> variables = Map.of(
"flow", Map.of(
"id", flowId,
"namespace", namespace,
"tenantId", MAIN_TENANT),
"execution", Map.of("id", executionId)
);
Map<String, Object> variables = getVariablesWithExecution(namespace, executionId);
String render = variableRenderer.render("{{ read('" + internalStorageFile + "') }}", variables);
assertThat(render).isEqualTo("Hello from a task output");
@@ -169,13 +157,7 @@ class ReadFileFunctionTest {
URI internalStorageURI = URI.create("/" + namespace.replace(".", "/") + "/" + flowId + "/executions/" + executionId + "/tasks/task/" + IdUtils.create() + "/123456.ion");
URI internalStorageFile = storageInterface.put(MAIN_TENANT, namespace, internalStorageURI, new ByteArrayInputStream("Hello from a task output".getBytes()));
Map<String, Object> variables = Map.of(
"flow", Map.of(
"id", "notme",
"namespace", "notme",
"tenantId", MAIN_TENANT),
"execution", Map.of("id", "notme")
);
Map<String, Object> variables = getVariablesWithExecution("notme", "notme");
String render = variableRenderer.render("{{ read('" + internalStorageFile + "') }}", variables);
assertThat(render).isEqualTo("Hello from a task output");
@@ -191,13 +173,7 @@ class ReadFileFunctionTest {
@Test
void shouldFailProcessingUnsupportedScheme() {
Map<String, Object> variables = Map.of(
"flow", Map.of(
"id", "notme",
"namespace", "notme",
"tenantId", MAIN_TENANT),
"execution", Map.of("id", "notme")
);
Map<String, Object> variables = getVariablesWithExecution("notme", "notme");
assertThrows(IllegalArgumentException.class, () -> variableRenderer.render("{{ read('unsupported://path-to/file.txt') }}", variables));
}

View File

@@ -372,4 +372,44 @@ class FlowServiceTest {
assertThat(exceptions.size()).isZero();
}
@Test
void shouldReturnValidationForRunnablePropsOnFlowable() {
// Given
String source = """
id: dolphin_164914
namespace: company.team
tasks:
- id: for
type: io.kestra.plugin.core.flow.ForEach
values: [1, 2, 3]
workerGroup:
key: toto
timeout: PT10S
taskCache:
enabled: true
tasks:
- id: hello
type: io.kestra.plugin.core.log.Log
message: Hello World! 🚀
workerGroup:
key: toto
timeout: PT10S
taskCache:
enabled: true
""";
// When
List<ValidateConstraintViolation> results = flowService.validate("my-tenant", source);
// Then
assertThat(results).hasSize(1);
assertThat(results.getFirst().getWarnings()).hasSize(3);
assertThat(results.getFirst().getWarnings()).containsExactlyInAnyOrder(
"The task 'for' cannot use the 'timeout' property as it's only relevant for runnable tasks.",
"The task 'for' cannot use the 'taskCache' property as it's only relevant for runnable tasks.",
"The task 'for' cannot use the 'workerGroup' property as it's only relevant for runnable tasks."
);
}
}

View File
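For contrast with the source validated above, here is a sketch of the same flow rewritten so that the three warnings would not be expected: the runnable-only properties (workerGroup, timeout, taskCache) stay on the runnable Log task and are dropped from the flowable ForEach. This rewrite is derived from the warning messages asserted in the test, not from a file in the repository.

// Sketch: same flow as in shouldReturnValidationForRunnablePropsOnFlowable, with the
// runnable-only properties removed from the flowable ForEach task.
String warningFreeSource = """
    id: dolphin_164914
    namespace: company.team

    tasks:
      - id: for
        type: io.kestra.plugin.core.flow.ForEach
        values: [1, 2, 3]
        tasks:
          - id: hello
            type: io.kestra.plugin.core.log.Log
            message: Hello World! 🚀
            workerGroup:
              key: toto
            timeout: PT10S
            taskCache:
              enabled: true
    """;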

@@ -1,7 +1,8 @@
package io.kestra.core.services;
import static io.kestra.core.tenant.TenantService.MAIN_TENANT;
import io.kestra.core.junit.annotations.KestraTest;
import io.kestra.core.repositories.FlowRepositoryInterface;
import io.kestra.core.storages.StorageInterface;
import io.kestra.core.storages.kv.*;
import io.micronaut.test.annotation.MockBean;
@@ -26,25 +27,25 @@ class KVStoreServiceTest {
@Test
void shouldGetKVStoreForExistingNamespaceGivenFromNull() {
Assertions.assertNotNull(storeService.get(null, TEST_EXISTING_NAMESPACE, null));
Assertions.assertNotNull(storeService.get(MAIN_TENANT, TEST_EXISTING_NAMESPACE, null));
}
@Test
void shouldThrowExceptionWhenAccessingKVStoreForNonExistingNamespace() {
KVStoreException exception = Assertions.assertThrows(KVStoreException.class, () -> storeService.get(null, "io.kestra.unittest.unknown", null));
KVStoreException exception = Assertions.assertThrows(KVStoreException.class, () -> storeService.get(MAIN_TENANT, "io.kestra.unittest.unknown", null));
Assertions.assertTrue(exception.getMessage().contains("namespace 'io.kestra.unittest.unknown' does not exist"));
}
@Test
void shouldGetKVStoreForAnyNamespaceWhenAccessingFromChildNamespace() {
Assertions.assertNotNull(storeService.get(null, "io.kestra", TEST_EXISTING_NAMESPACE));
Assertions.assertNotNull(storeService.get(MAIN_TENANT, "io.kestra", TEST_EXISTING_NAMESPACE));
}
@Test
void shouldGetKVStoreFromNonExistingNamespaceWithAKV() throws IOException {
KVStore kvStore = new InternalKVStore(null, "system", storageInterface);
KVStore kvStore = new InternalKVStore(MAIN_TENANT, "system", storageInterface);
kvStore.put("key", new KVValueAndMetadata(new KVMetadata("myDescription", Duration.ofHours(1)), "value"));
Assertions.assertNotNull(storeService.get(null, "system", null));
Assertions.assertNotNull(storeService.get(MAIN_TENANT, "system", null));
}
@MockBean(NamespaceService.class)

View File

@@ -27,6 +27,7 @@ import java.util.Optional;
import java.util.function.Function;
import java.util.stream.Collectors;
import static io.kestra.core.tenant.TenantService.MAIN_TENANT;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.within;
@@ -91,7 +92,7 @@ class InternalKVStoreTest {
kv.put(TEST_KV_KEY, new KVValueAndMetadata(new KVMetadata(description, Duration.ofMinutes(5)), complexValue));
// Then
StorageObject withMetadata = storageInterface.getWithMetadata(null, kv.namespace(), URI.create("/" + kv.namespace().replace(".", "/") + "/_kv/my-key.ion"));
StorageObject withMetadata = storageInterface.getWithMetadata(MAIN_TENANT, kv.namespace(), URI.create("/" + kv.namespace().replace(".", "/") + "/_kv/my-key.ion"));
String valueFile = new String(withMetadata.inputStream().readAllBytes());
Instant expirationDate = Instant.parse(withMetadata.metadata().get("expirationDate"));
assertThat(expirationDate.isAfter(before.plus(Duration.ofMinutes(4))) && expirationDate.isBefore(before.plus(Duration.ofMinutes(6)))).isTrue();
@@ -102,7 +103,7 @@ class InternalKVStoreTest {
kv.put(TEST_KV_KEY, new KVValueAndMetadata(new KVMetadata(null, Duration.ofMinutes(10)), "some-value"));
// Then
withMetadata = storageInterface.getWithMetadata(null, kv.namespace(), URI.create("/" + kv.namespace().replace(".", "/") + "/_kv/my-key.ion"));
withMetadata = storageInterface.getWithMetadata(MAIN_TENANT, kv.namespace(), URI.create("/" + kv.namespace().replace(".", "/") + "/_kv/my-key.ion"));
valueFile = new String(withMetadata.inputStream().readAllBytes());
expirationDate = Instant.parse(withMetadata.metadata().get("expirationDate"));
assertThat(expirationDate.isAfter(before.plus(Duration.ofMinutes(9))) && expirationDate.isBefore(before.plus(Duration.ofMinutes(11)))).isTrue();
@@ -176,6 +177,6 @@ class InternalKVStoreTest {
private InternalKVStore kv() {
final String namespaceId = "io.kestra." + IdUtils.create();
return new InternalKVStore(null, namespaceId, storageInterface);
return new InternalKVStore(MAIN_TENANT, namespaceId, storageInterface);
}
}

View File

@@ -17,8 +17,8 @@ import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import static io.kestra.core.tenant.TenantService.MAIN_TENANT;
import static org.assertj.core.api.Assertions.assertThat;
import static org.hamcrest.Matchers.is;
class InternalNamespaceTest {
@@ -38,7 +38,7 @@ class InternalNamespaceTest {
void shouldGetAllNamespaceFiles() throws IOException, URISyntaxException {
// Given
final String namespaceId = "io.kestra." + IdUtils.create();
final InternalNamespace namespace = new InternalNamespace(logger, null, namespaceId, storageInterface);
final InternalNamespace namespace = new InternalNamespace(logger, MAIN_TENANT, namespaceId, storageInterface);
// When
namespace.putFile(Path.of("/sub/dir/file1.txt"), new ByteArrayInputStream("1".getBytes()));
@@ -56,7 +56,7 @@ class InternalNamespaceTest {
void shouldPutFileGivenNoTenant() throws IOException, URISyntaxException {
// Given
final String namespaceId = "io.kestra." + IdUtils.create();
final InternalNamespace namespace = new InternalNamespace(logger, null, namespaceId, storageInterface);
final InternalNamespace namespace = new InternalNamespace(logger, MAIN_TENANT, namespaceId, storageInterface);
// When
NamespaceFile namespaceFile = namespace.putFile(Path.of("/sub/dir/file.txt"), new ByteArrayInputStream("1".getBytes()));
@@ -73,7 +73,7 @@ class InternalNamespaceTest {
void shouldSucceedPutFileGivenExistingFileForConflictOverwrite() throws IOException, URISyntaxException {
// Given
final String namespaceId = "io.kestra." + IdUtils.create();
final InternalNamespace namespace = new InternalNamespace(logger, null, namespaceId, storageInterface);
final InternalNamespace namespace = new InternalNamespace(logger, MAIN_TENANT, namespaceId, storageInterface);
NamespaceFile namespaceFile = namespace.get(Path.of("/sub/dir/file.txt"));
@@ -92,7 +92,7 @@ class InternalNamespaceTest {
void shouldFailPutFileGivenExistingFileForError() throws IOException, URISyntaxException {
// Given
final String namespaceId = "io.kestra." + IdUtils.create();
final InternalNamespace namespace = new InternalNamespace(logger, null, namespaceId, storageInterface);
final InternalNamespace namespace = new InternalNamespace(logger, MAIN_TENANT, namespaceId, storageInterface);
NamespaceFile namespaceFile = namespace.get(Path.of("/sub/dir/file.txt"));
@@ -109,7 +109,7 @@ class InternalNamespaceTest {
void shouldIgnorePutFileGivenExistingFileForSkip() throws IOException, URISyntaxException {
// Given
final String namespaceId = "io.kestra." + IdUtils.create();
final InternalNamespace namespace = new InternalNamespace(logger, null, namespaceId, storageInterface);
final InternalNamespace namespace = new InternalNamespace(logger, MAIN_TENANT, namespaceId, storageInterface);
NamespaceFile namespaceFile = namespace.get(Path.of("/sub/dir/file.txt"));
@@ -128,7 +128,7 @@ class InternalNamespaceTest {
void shouldFindAllMatchingGivenNoTenant() throws IOException, URISyntaxException {
// Given
final String namespaceId = "io.kestra." + IdUtils.create();
final InternalNamespace namespace = new InternalNamespace(logger, null, namespaceId, storageInterface);
final InternalNamespace namespace = new InternalNamespace(logger, MAIN_TENANT, namespaceId, storageInterface);
// When
namespace.putFile(Path.of("/a/b/c/1.sql"), new ByteArrayInputStream("1".getBytes()));
@@ -171,7 +171,7 @@ class InternalNamespaceTest {
void shouldReturnNoNamespaceFileForEmptyNamespace() throws IOException {
// Given
final String namespaceId = "io.kestra." + IdUtils.create();
final InternalNamespace namespace = new InternalNamespace(logger, null, namespaceId, storageInterface);
final InternalNamespace namespace = new InternalNamespace(logger, MAIN_TENANT, namespaceId, storageInterface);
List<NamespaceFile> namespaceFiles = namespace.findAllFilesMatching((unused) -> true);
assertThat(namespaceFiles.size()).isZero();
}

View File

@@ -0,0 +1,462 @@
package io.kestra.core.utils;
import org.junit.jupiter.api.Test;
import java.util.NoSuchElementException;
import java.util.Optional;
import java.util.function.Function;
import static org.assertj.core.api.Assertions.*;
class EitherTest {
@Test
void shouldCreateLeftInstance() {
// Given
String leftValue = "error";
// When
Either<String, Integer> either = Either.left(leftValue);
// Then
assertThat(either).isInstanceOf(Either.Left.class);
assertThat(either.isLeft()).isTrue();
assertThat(either.isRight()).isFalse();
assertThat(either.getLeft()).isEqualTo(leftValue);
}
@Test
void shouldCreateRightInstance() {
// Given
Integer rightValue = 42;
// When
Either<String, Integer> either = Either.right(rightValue);
// Then
assertThat(either).isInstanceOf(Either.Right.class);
assertThat(either.isRight()).isTrue();
assertThat(either.isLeft()).isFalse();
assertThat(either.getRight()).isEqualTo(rightValue);
}
@Test
void shouldCreateLeftWithNullValue() {
// When
Either<String, Integer> either = Either.left(null);
// Then
assertThat(either.isLeft()).isTrue();
assertThat(either.getLeft()).isNull();
}
@Test
void shouldCreateRightWithNullValue() {
// When
Either<String, Integer> either = Either.right(null);
// Then
assertThat(either.isRight()).isTrue();
assertThat(either.getRight()).isNull();
}
@Test
void leftShouldReturnCorrectValues() {
// Given
String leftValue = "error message";
Either<String, Integer> either = Either.left(leftValue);
// Then
assertThat(either.isLeft()).isTrue();
assertThat(either.isRight()).isFalse();
assertThat(either.getLeft()).isEqualTo(leftValue);
}
@Test
void leftShouldThrowExceptionWhenGettingRightValue() {
// Given
Either<String, Integer> either = Either.left("error");
// When/Then
assertThatThrownBy(either::getRight)
.isInstanceOf(NoSuchElementException.class)
.hasMessage("This is Left");
}
@Test
void rightShouldReturnCorrectValues() {
// Given
Integer rightValue = 100;
Either<String, Integer> either = Either.right(rightValue);
// Then
assertThat(either.isRight()).isTrue();
assertThat(either.isLeft()).isFalse();
assertThat(either.getRight()).isEqualTo(rightValue);
}
@Test
void rightShouldThrowExceptionWhenGettingLeftValue() {
// Given
Either<String, Integer> either = Either.right(42);
// When/Then
assertThatThrownBy(either::getLeft)
.isInstanceOf(NoSuchElementException.class)
.hasMessage("This is Right");
}
@Test
void shouldApplyLeftFunctionForLeftInstanceInFold() {
// Given
Either<String, Integer> either = Either.left("error");
Function<String, String> leftFn = s -> "Left: " + s;
Function<Integer, String> rightFn = i -> "Right: " + i;
// When
String result = either.fold(leftFn, rightFn);
// Then
assertThat(result).isEqualTo("Left: error");
}
@Test
void shouldApplyRightFunctionForRightInstanceInFold() {
// Given
Either<String, Integer> either = Either.right(42);
Function<String, String> leftFn = s -> "Left: " + s;
Function<Integer, String> rightFn = i -> "Right: " + i;
// When
String result = either.fold(leftFn, rightFn);
// Then
assertThat(result).isEqualTo("Right: 42");
}
@Test
void shouldHandleNullReturnValuesInFold() {
// Given
Either<String, Integer> leftEither = Either.left("error");
Either<String, Integer> rightEither = Either.right(42);
// When
String leftResult = leftEither.fold(s -> null, i -> "not null");
String rightResult = rightEither.fold(s -> "not null", i -> null);
// Then
assertThat(leftResult).isNull();
assertThat(rightResult).isNull();
}
@Test
void leftProjectionShouldExistForLeftInstance() {
// Given
Either<String, Integer> either = Either.left("error");
// When
Either.LeftProjection<String, Integer> projection = either.left();
// Then
assertThat(projection.exists()).isTrue();
assertThat(projection.get()).isEqualTo("error");
}
@Test
void leftProjectionShouldNotExistForRightInstance() {
// Given
Either<String, Integer> either = Either.right(42);
// When
Either.LeftProjection<String, Integer> projection = either.left();
// Then
assertThat(projection.exists()).isFalse();
assertThatThrownBy(projection::get)
.isInstanceOf(NoSuchElementException.class)
.hasMessage("This is Right");
}
@Test
void leftProjectionMapShouldTransformLeftValue() {
// Given
Either<String, Integer> either = Either.left("error");
// When
Either<Integer, Integer> result = either.left().map(String::length);
// Then
assertThat(result.isLeft()).isTrue();
assertThat(result.getLeft()).isEqualTo(5);
}
@Test
void leftProjectionMapShouldPreserveRightValue() {
// Given
Either<String, Integer> either = Either.right(42);
// When
Either<Integer, Integer> result = either.left().map(String::length);
// Then
assertThat(result.isRight()).isTrue();
assertThat(result.getRight()).isEqualTo(42);
}
@Test
void leftProjectionFlatMapShouldTransformLeftValue() {
// Given
Either<String, Integer> either = Either.left("error");
// When
Either<Integer, Integer> result = either.left().flatMap(s -> Either.left(s.length()));
// Then
assertThat(result.isLeft()).isTrue();
assertThat(result.getLeft()).isEqualTo(5);
}
@Test
void leftProjectionFlatMapShouldPreserveRightValue() {
// Given
Either<String, Integer> either = Either.right(42);
// When
Either<Integer, Integer> result = either.left().flatMap(s -> Either.left(s.length()));
// Then
assertThat(result.isRight()).isTrue();
assertThat(result.getRight()).isEqualTo(42);
}
@Test
void leftProjectionFlatMapCanReturnRight() {
// Given
Either<String, Integer> either = Either.left("error");
// When
Either<String, Integer> result = either.left().flatMap(s -> Either.right(999));
// Then
assertThat(result.isRight()).isTrue();
assertThat(result.getRight()).isEqualTo(999);
}
@Test
void leftProjectionToOptionalShouldReturnPresentForLeft() {
// Given
Either<String, Integer> either = Either.left("error");
// When
Optional<String> optional = either.left().toOptional();
// Then
assertThat(optional).isPresent();
assertThat(optional.get()).isEqualTo("error");
}
@Test
void leftProjectionToOptionalShouldReturnEmptyForRight() {
// Given
Either<String, Integer> either = Either.right(42);
// When
Optional<String> optional = either.left().toOptional();
// Then
assertThat(optional).isEmpty();
}
@Test
void leftProjectionConstructorShouldThrowForNullEither() {
// When/Then
assertThatThrownBy(() -> new Either.LeftProjection<>(null))
.isInstanceOf(NullPointerException.class)
.hasMessage("either can't be null");
}
@Test
void rightProjectionShouldExistForRightInstance() {
// Given
Either<String, Integer> either = Either.right(42);
// When
Either.RightProjection<String, Integer> projection = either.right();
// Then
assertThat(projection.exists()).isTrue();
assertThat(projection.get()).isEqualTo(42);
}
@Test
void rightProjectionShouldNotExistForLeftInstance() {
// Given
Either<String, Integer> either = Either.left("error");
// When
Either.RightProjection<String, Integer> projection = either.right();
// Then
assertThat(projection.exists()).isFalse();
assertThatThrownBy(projection::get)
.isInstanceOf(NoSuchElementException.class)
.hasMessage("This is Left");
}
@Test
void rightProjectionMapShouldTransformRightValue() {
// Given
Either<String, Integer> either = Either.right(42);
// When
Either<String, String> result = either.right().map(Object::toString);
// Then
assertThat(result.isRight()).isTrue();
assertThat(result.getRight()).isEqualTo("42");
}
@Test
void rightProjectionMapShouldPreserveLeftValue() {
// Given
Either<String, Integer> either = Either.left("error");
// When
Either<String, String> result = either.right().map(Object::toString);
// Then
assertThat(result.isLeft()).isTrue();
assertThat(result.getLeft()).isEqualTo("error");
}
@Test
void rightProjectionFlatMapShouldTransformRightValue() {
// Given
Either<String, Integer> either = Either.right(42);
// When
Either<String, String> result = either.right().flatMap(i -> Either.right(i.toString()));
// Then
assertThat(result.isRight()).isTrue();
assertThat(result.getRight()).isEqualTo("42");
}
@Test
void rightProjectionFlatMapShouldPreserveLeftValue() {
// Given
Either<String, Integer> either = Either.left("error");
// When
Either<String, String> result = either.right().flatMap(i -> Either.right(i.toString()));
// Then
assertThat(result.isLeft()).isTrue();
assertThat(result.getLeft()).isEqualTo("error");
}
@Test
void rightProjectionFlatMapCanReturnLeft() {
// Given
Either<String, Integer> either = Either.right(42);
// When
Either<String, Integer> result = either.right().flatMap(i -> Either.left("converted"));
// Then
assertThat(result.isLeft()).isTrue();
assertThat(result.getLeft()).isEqualTo("converted");
}
@Test
void rightProjectionToOptionalShouldReturnPresentForRight() {
// Given
Either<String, Integer> either = Either.right(42);
// When
Optional<Integer> optional = either.right().toOptional();
// Then
assertThat(optional).isPresent();
assertThat(optional.get()).isEqualTo(42);
}
@Test
void rightProjectionToOptionalShouldReturnEmptyForLeft() {
// Given
Either<String, Integer> either = Either.left("error");
// When
Optional<Integer> optional = either.right().toOptional();
// Then
assertThat(optional).isEmpty();
}
@Test
void rightProjectionConstructorShouldThrowForNullEither() {
// When/Then
assertThatThrownBy(() -> new Either.RightProjection<>(null))
.isInstanceOf(NullPointerException.class)
.hasMessage("either can't be null");
}
@Test
void shouldHandleNullValuesInTransformations() {
// Given
Either<String, Integer> leftEither = Either.left(null);
Either<String, Integer> rightEither = Either.right(null);
// When/Then
assertThat(leftEither.left().map(s -> s == null ? "was null" : s).getLeft())
.isEqualTo("was null");
assertThat(rightEither.right().map(i -> i == null ? "was null" : i.toString()).getRight())
.isEqualTo("was null");
}
@Test
void shouldHandleComplexTypeTransformations() {
// Given
Either<Exception, String> either = Either.right("hello world");
// When
Either<String, Integer> result = either
.left().map(Exception::getMessage)
.right().map(String::length);
// Then
assertThat(result.isRight()).isTrue();
assertThat(result.getRight()).isEqualTo(11);
}
@Test
void shouldChainTransformationsCorrectly() {
// Given
Either<String, Integer> either = Either.right(10);
// When
Either<String, String> result = either
.right().flatMap(i -> i > 5 ? Either.right(i * 2) : Either.left("too small"))
.right().map(i -> "Result: " + i);
// Then
assertThat(result.isRight()).isTrue();
assertThat(result.getRight()).isEqualTo("Result: 20");
}
@Test
void shouldHandleProjectionChainingWithErrorCases() {
// Given
Either<String, Integer> either = Either.right(3);
// When
Either<String, String> result = either
.right().flatMap(i -> i > 5 ? Either.right(i * 2) : Either.left("too small"))
.right().map(i -> "Result: " + i);
// Then
assertThat(result.isLeft()).isTrue();
assertThat(result.getLeft()).isEqualTo("too small");
}
}


@@ -1,6 +1,9 @@
package io.kestra.core.utils;
import org.junit.Assert;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertThrows;
import java.util.List;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
@@ -25,7 +28,7 @@ class EnumsTest {
@Test
void shouldThrowExceptionGivenInvalidString() {
Assertions.assertThrows(IllegalArgumentException.class, () -> {
assertThrows(IllegalArgumentException.class, () -> {
Enums.getForNameIgnoreCase("invalid", TestEnum.class);
});
}
@@ -49,11 +52,22 @@ class EnumsTest {
String invalidValue = "invalidValue";
// Act & Assert
IllegalArgumentException exception = Assert.assertThrows(IllegalArgumentException.class, () ->
IllegalArgumentException exception = assertThrows(IllegalArgumentException.class, () ->
Enums.fromString(invalidValue, mapping, "TestEnumWithValue")
);
}
@Test
void should_get_from_list(){
assertThat(Enums.fromList(List.of(TestEnum.ENUM1, TestEnum.ENUM2), TestEnum.class)).isEqualTo(List.of(TestEnum.ENUM1, TestEnum.ENUM2));
assertThat(Enums.fromList(List.of("ENUM1", "ENUM2"), TestEnum.class)).isEqualTo(List.of(TestEnum.ENUM1, TestEnum.ENUM2));
assertThat(Enums.fromList(TestEnum.ENUM1, TestEnum.class)).isEqualTo(List.of(TestEnum.ENUM1));
assertThat(Enums.fromList("ENUM1", TestEnum.class)).isEqualTo(List.of(TestEnum.ENUM1));
assertThrows(IllegalArgumentException.class, () -> Enums.fromList(List.of("string1", "string2"), TestEnum.class));
assertThrows(IllegalArgumentException.class, () -> Enums.fromList("non enum value", TestEnum.class));
}
enum TestEnum {
ENUM1, ENUM2
}


@@ -6,6 +6,7 @@ import java.util.Collections;
import java.util.List;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertThrows;
class ListUtilsTest {
@@ -36,4 +37,19 @@ class ListUtilsTest {
assertThat(ListUtils.concat(list1, null)).isEqualTo(List.of("1", "2"));
assertThat(ListUtils.concat(null, list2)).isEqualTo(List.of("3", "4"));
}
@Test
void convertToList(){
assertThat(ListUtils.convertToList(List.of(1, 2, 3))).isEqualTo(List.of(1, 2, 3));
assertThrows(IllegalArgumentException.class, () -> ListUtils.convertToList("not a list"));
}
@Test
void convertToListString(){
assertThat(ListUtils.convertToListString(List.of("string1", "string2"))).isEqualTo(List.of("string1", "string2"));
assertThat(ListUtils.convertToListString(List.of())).isEqualTo(List.of());
assertThrows(IllegalArgumentException.class, () -> ListUtils.convertToListString("not a list"));
assertThrows(IllegalArgumentException.class, () -> ListUtils.convertToListString(List.of(1, 2, 3)));
}
}


@@ -9,6 +9,7 @@ import java.util.List;
import java.util.Map;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertThrows;
class MapUtilsTest {
@SuppressWarnings("unchecked")
@@ -194,4 +195,23 @@ class MapUtilsTest {
assertThat(results).hasSize(1);
// due to ordering change on each JVM restart, the result map would be different as different entries will be skipped
}
@Test
void shouldFlattenANestedMap() {
Map<String, Object> results = MapUtils.nestedToFlattenMap(Map.of("k1",Map.of("k2", Map.of("k3", "v1")), "k4", "v2"));
assertThat(results).hasSize(2);
assertThat(results).containsAllEntriesOf(Map.of(
"k1.k2.k3", "v1",
"k4", "v2"
));
}
@Test
void shouldThrowIfNestedMapContainsMultipleEntries() {
var exception = assertThrows(IllegalArgumentException.class,
() -> MapUtils.nestedToFlattenMap(Map.of("k1", Map.of("k2", Map.of("k3", "v1"), "k4", "v2")))
);
assertThat(exception.getMessage()).isEqualTo("You cannot flatten a map with an entry that is a map of more than one element, conflicting key: k1");
}
}


@@ -0,0 +1,77 @@
package io.kestra.core.validations;
import io.micronaut.context.ApplicationContext;
import io.micronaut.context.exceptions.BeanInstantiationException;
import org.junit.jupiter.api.Test;
import java.util.Map;
import static org.assertj.core.api.Assertions.assertThatCode;
import static org.assertj.core.api.Assertions.assertThatThrownBy;
class AppConfigValidatorTest {
@Test
void validateNoKestraUrl() {
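// No kestra.url property is configured: the bean must still initialize without a validation error.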
assertThatCode(() -> {
try (ApplicationContext context = ApplicationContext.run()) {
context.getBean(AppConfigValidator.class);
}
})
.as("The bean got initialized properly including the PostConstruct validation")
.doesNotThrowAnyException();
}
@Test
void validateValidKestraUrl() {
assertThatCode(() -> {
try (ApplicationContext context = ApplicationContext.builder()
.deduceEnvironment(false)
.properties(
Map.of("kestra.url", "https://postgres-oss.preview.dev.kestra.io")
)
.start()
) {
context.getBean(AppConfigValidator.class);
}
})
.as("The bean got initialized properly including the PostConstruct validation")
.doesNotThrowAnyException();
}
@Test
void validateInvalidKestraUrl() {
assertThatThrownBy(() -> {
try (ApplicationContext context = ApplicationContext.builder()
.deduceEnvironment(false)
.properties(
Map.of("kestra.url", "postgres-oss.preview.dev.kestra.io")
)
.start()
) {
context.getBean(AppConfigValidator.class);
}
})
.as("The bean initialization failed at PostConstruct")
.isInstanceOf(BeanInstantiationException.class)
.hasMessageContaining("Invalid configuration");
}
@Test
void validateNonHttpKestraUrl() {
assertThatThrownBy(() -> {
try (ApplicationContext context = ApplicationContext.builder()
.deduceEnvironment(false)
.properties(
Map.of("kestra.url", "ftp://postgres-oss.preview.dev.kestra.io")
)
.start()
) {
context.getBean(AppConfigValidator.class);
}
})
.as("The bean initialization failed at PostConstruct")
.isInstanceOf(BeanInstantiationException.class)
.hasMessageContaining("Invalid configuration");
}
}


@@ -12,8 +12,8 @@ import org.junit.jupiter.api.Test;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Map;
import static io.kestra.core.tenant.TenantService.MAIN_TENANT;
import static org.assertj.core.api.Assertions.assertThat;
@KestraTest
@@ -33,6 +33,7 @@ class PurgeExecutionsTest {
.id(IdUtils.create())
.namespace(namespace)
.flowId(flowId)
.tenantId(MAIN_TENANT)
.state(new State().withState(State.Type.SUCCESS))
.build();
executionRepository.save(execution);
@@ -42,7 +43,7 @@ class PurgeExecutionsTest {
.namespace(Property.ofValue(namespace))
.endDate(Property.ofValue(ZonedDateTime.now().plusMinutes(1).format(DateTimeFormatter.ISO_ZONED_DATE_TIME)))
.build();
var runContext = runContextFactory.of(Map.of("flow", Map.of("namespace", namespace, "id", flowId)));
var runContext = runContextFactory.of(flowId, namespace);
var output = purge.run(runContext);
assertThat(output.getExecutionsCount()).isEqualTo(1);
@@ -58,6 +59,7 @@ class PurgeExecutionsTest {
.namespace(namespace)
.flowId(flowId)
.id(IdUtils.create())
.tenantId(MAIN_TENANT)
.state(new State().withState(State.Type.SUCCESS))
.build();
executionRepository.save(execution);
@@ -68,7 +70,7 @@ class PurgeExecutionsTest {
.flowId(Property.ofValue(flowId))
.endDate(Property.ofValue(ZonedDateTime.now().plusMinutes(1).format(DateTimeFormatter.ISO_ZONED_DATE_TIME)))
.build();
var runContext = runContextFactory.of(Map.of("flow", Map.of("namespace", namespace, "id", flowId)));
var runContext = runContextFactory.of(flowId, namespace);
var output = purge.run(runContext);
assertThat(output.getExecutionsCount()).isEqualTo(1);


@@ -372,6 +372,51 @@ public class ForEachItemCaseTest {
assertThat(correlationId.get().value()).isEqualTo(execution.getId());
}
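// Runs the parent flow, waits for all 26 triggered subflow executions to terminate, then asserts on the parent outputs and on the last subflow execution.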
public void forEachItemWithAfterExecution() throws TimeoutException, InterruptedException, URISyntaxException, IOException, QueueException {
CountDownLatch countDownLatch = new CountDownLatch(26);
AtomicReference<Execution> triggered = new AtomicReference<>();
Flux<Execution> receive = TestsUtils.receive(executionQueue, either -> {
Execution execution = either.getLeft();
if (execution.getFlowId().equals("for-each-item-subflow-after-execution") && execution.getState().getCurrent().isTerminated()) {
triggered.set(execution);
countDownLatch.countDown();
}
});
URI file = storageUpload();
Map<String, Object> inputs = Map.of("file", file.toString(), "batch", 4);
Execution execution = runnerUtils.runOne(MAIN_TENANT, TEST_NAMESPACE, "for-each-item-after-execution", null,
(flow, execution1) -> flowIO.readExecutionInputs(flow, execution1, inputs),
Duration.ofSeconds(30));
// we should have triggered 26 subflows
assertThat(countDownLatch.await(1, TimeUnit.MINUTES)).isTrue();
receive.blockLast();
// assert on the main flow execution
assertThat(execution.getTaskRunList()).hasSize(5);
assertThat(execution.getTaskRunList().get(2).getAttempts()).hasSize(1);
assertThat(execution.getTaskRunList().get(2).getAttempts().getFirst().getState().getCurrent()).isEqualTo(State.Type.SUCCESS);
assertThat(execution.getState().getCurrent()).isEqualTo(State.Type.SUCCESS);
Map<String, Object> outputs = execution.getTaskRunList().get(2).getOutputs();
assertThat(outputs.get("numberOfBatches")).isEqualTo(26);
assertThat(outputs.get("iterations")).isNotNull();
Map<String, Integer> iterations = (Map<String, Integer>) outputs.get("iterations");
assertThat(iterations.get("CREATED")).isZero();
assertThat(iterations.get("RUNNING")).isZero();
assertThat(iterations.get("SUCCESS")).isEqualTo(26);
// assert on the last subflow execution
assertThat(triggered.get().getState().getCurrent()).isEqualTo(State.Type.SUCCESS);
assertThat(triggered.get().getFlowId()).isEqualTo("for-each-item-subflow-after-execution");
assertThat((String) triggered.get().getInputs().get("items")).matches("kestra:///io/kestra/tests/for-each-item-after-execution/executions/.*/tasks/each-split/.*\\.txt");
assertThat(triggered.get().getTaskRunList()).hasSize(2);
Optional<Label> correlationId = triggered.get().getLabels().stream().filter(label -> label.key().equals(Label.CORRELATION_ID)).findAny();
assertThat(correlationId.isPresent()).isTrue();
assertThat(correlationId.get().value()).isEqualTo(execution.getId());
}
private URI storageUpload() throws URISyntaxException, IOException {
File tempFile = File.createTempFile("file", ".txt");


@@ -58,4 +58,15 @@ class ParallelTest {
assertThat(execution.findTaskRunsByTaskId("a2").getFirst().getState().getStartDate().isAfter(execution.findTaskRunsByTaskId("a1").getFirst().getState().getEndDate().orElseThrow())).isTrue();
assertThat(execution.findTaskRunsByTaskId("e2").getFirst().getState().getStartDate().isAfter(execution.findTaskRunsByTaskId("e1").getFirst().getState().getEndDate().orElseThrow())).isTrue();
}
@Test
@ExecuteFlow("flows/valids/parallel-fail-with-flowable.yaml")
void parallelFailWithFlowable(Execution execution) {
assertThat(execution.getState().getCurrent()).isEqualTo(State.Type.FAILED);
assertThat(execution.getTaskRunList()).hasSize(5);
// all tasks must be terminated except the Sleep task, which will end later since everything runs concurrently
execution.getTaskRunList().stream()
.filter(taskRun -> !"sleep".equals(taskRun.getTaskId()))
.forEach(run -> assertThat(run.getState().isTerminated()).isTrue());
}
}


@@ -4,16 +4,24 @@ import io.kestra.core.junit.annotations.KestraTest;
import io.kestra.core.junit.annotations.LoadFlows;
import io.kestra.core.models.Label;
import io.kestra.core.models.executions.Execution;
import io.kestra.core.models.flows.State;
import io.kestra.core.queues.QueueException;
import io.kestra.core.queues.QueueFactoryInterface;
import io.kestra.core.queues.QueueInterface;
import io.kestra.core.repositories.ExecutionRepositoryInterface;
import io.kestra.core.runners.RunnerUtils;
import jakarta.inject.Inject;
import jakarta.inject.Named;
import org.junit.jupiter.api.Test;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import java.util.concurrent.atomic.AtomicReference;
import static io.kestra.core.tenant.TenantService.MAIN_TENANT;
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.jupiter.api.Assertions.assertTrue;
@KestraTest(startRunner = true)
class SubflowRunnerTest {
@@ -24,6 +32,10 @@ class SubflowRunnerTest {
@Inject
private ExecutionRepositoryInterface executionRepository;
@Inject
@Named(QueueFactoryInterface.EXECUTION_NAMED)
protected QueueInterface<Execution> executionQueue;
@Test
@LoadFlows({"flows/valids/subflow-inherited-labels-child.yaml", "flows/valids/subflow-inherited-labels-parent.yaml"})
void inheritedLabelsAreOverridden() throws QueueException, TimeoutException {
@@ -50,4 +62,29 @@ class SubflowRunnerTest {
new Label("parentFlowLabel2", "value2") // inherited from the parent flow
);
}
@Test
@LoadFlows({"flows/valids/subflow-parent-no-wait.yaml", "flows/valids/subflow-child-with-output.yaml"})
void subflowOutputWithoutWait() throws QueueException, TimeoutException, InterruptedException {
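// Subscribe to the execution queue to capture the child execution, since the parent flow does not wait for the subflow to finish.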
AtomicReference<Execution> childExecution = new AtomicReference<>();
CountDownLatch countDownLatch = new CountDownLatch(1);
Runnable closing = executionQueue.receive(either -> {
if (either.isLeft() && either.getLeft().getFlowId().equals("subflow-child-with-output") && either.getLeft().getState().isTerminated()) {
childExecution.set(either.getLeft());
countDownLatch.countDown();
}
});
Execution parentExecution = runnerUtils.runOne(MAIN_TENANT, "io.kestra.tests", "subflow-parent-no-wait");
String childExecutionId = (String) parentExecution.findTaskRunsByTaskId("subflow").getFirst().getOutputs().get("executionId");
assertThat(childExecutionId).isNotBlank();
assertThat(parentExecution.getState().getCurrent()).isEqualTo(State.Type.SUCCESS);
assertThat(parentExecution.getTaskRunList()).hasSize(1);
assertTrue(countDownLatch.await(10, TimeUnit.SECONDS));
assertThat(childExecution.get().getId()).isEqualTo(childExecutionId);
assertThat(childExecution.get().getState().getCurrent()).isEqualTo(State.Type.SUCCESS);
assertThat(childExecution.get().getTaskRunList()).hasSize(1);
closing.run();
}
}


@@ -1,9 +1,9 @@
package io.kestra.plugin.core.kv;
import io.kestra.core.context.TestRunContextFactory;
import io.kestra.core.junit.annotations.KestraTest;
import io.kestra.core.models.property.Property;
import io.kestra.core.runners.RunContext;
import io.kestra.core.runners.RunContextFactory;
import io.kestra.core.storages.kv.KVStore;
import io.kestra.core.storages.kv.KVValueAndMetadata;
import io.kestra.core.utils.IdUtils;
@@ -21,19 +21,16 @@ class DeleteTest {
static final String TEST_KV_KEY = "test-key";
@Inject
RunContextFactory runContextFactory;
TestRunContextFactory runContextFactory;
@Test
void shouldOutputTrueGivenExistingKey() throws Exception {
// Given
String namespaceId = "io.kestra." + IdUtils.create();
RunContext runContext = this.runContextFactory.of(Map.of(
"flow", Map.of("namespace", namespaceId),
"inputs", Map.of(
RunContext runContext = this.runContextFactory.of(namespaceId, Map.of("inputs", Map.of(
"key", TEST_KV_KEY,
"namespace", namespaceId
)
));
)));
Delete delete = Delete.builder()
.id(Delete.class.getSimpleName())
@@ -56,13 +53,10 @@ class DeleteTest {
void shouldOutputFalseGivenNonExistingKey() throws Exception {
// Given
String namespaceId = "io.kestra." + IdUtils.create();
RunContext runContext = this.runContextFactory.of(Map.of(
"flow", Map.of("namespace", namespaceId),
"inputs", Map.of(
"key", TEST_KV_KEY,
"namespace", namespaceId
)
));
RunContext runContext = this.runContextFactory.of(namespaceId, Map.of("inputs", Map.of(
"key", TEST_KV_KEY,
"namespace", namespaceId
)));
Delete delete = Delete.builder()
.id(Delete.class.getSimpleName())


@@ -1,9 +1,9 @@
package io.kestra.plugin.core.kv;
import io.kestra.core.context.TestRunContextFactory;
import io.kestra.core.junit.annotations.KestraTest;
import io.kestra.core.models.property.Property;
import io.kestra.core.runners.RunContext;
import io.kestra.core.runners.RunContextFactory;
import io.kestra.core.storages.kv.KVStore;
import io.kestra.core.storages.kv.KVValueAndMetadata;
import io.kestra.core.utils.IdUtils;
@@ -19,15 +19,13 @@ class GetKeysTest {
static final String TEST_KEY_PREFIX_TEST = "test";
@Inject
RunContextFactory runContextFactory;
TestRunContextFactory runContextFactory;
@Test
void shouldGetAllKeys() throws Exception {
// Given
String namespace = IdUtils.create();
RunContext runContext = this.runContextFactory.of(Map.of(
"flow", Map.of("namespace", namespace)
));
RunContext runContext = this.runContextFactory.of(namespace);
GetKeys getKeys = GetKeys.builder()
.id(GetKeys.class.getSimpleName())
@@ -50,12 +48,8 @@ class GetKeysTest {
void shouldGetKeysGivenMatchingPrefix() throws Exception {
// Given
String namespace = IdUtils.create();
RunContext runContext = this.runContextFactory.of(Map.of(
"flow", Map.of("namespace", namespace),
"inputs", Map.of(
"prefix", TEST_KEY_PREFIX_TEST
)
));
RunContext runContext = this.runContextFactory.of(namespace,
Map.of("inputs", Map.of("prefix", TEST_KEY_PREFIX_TEST)));
GetKeys getKeys = GetKeys.builder()
.id(GetKeys.class.getSimpleName())
@@ -79,12 +73,8 @@ class GetKeysTest {
void shouldGetNoKeysGivenEmptyKeyStore() throws Exception {
// Given
String namespace = IdUtils.create();
RunContext runContext = this.runContextFactory.of(Map.of(
"flow", Map.of("namespace", namespace),
"inputs", Map.of(
"prefix", TEST_KEY_PREFIX_TEST
)
));
RunContext runContext = this.runContextFactory.of(namespace,
Map.of("inputs", Map.of("prefix", TEST_KEY_PREFIX_TEST)));
GetKeys getKeys = GetKeys.builder()
.id(GetKeys.class.getSimpleName())


@@ -1,9 +1,9 @@
package io.kestra.plugin.core.kv;
import io.kestra.core.context.TestRunContextFactory;
import io.kestra.core.junit.annotations.KestraTest;
import io.kestra.core.models.property.Property;
import io.kestra.core.runners.RunContext;
import io.kestra.core.runners.RunContextFactory;
import io.kestra.core.storages.kv.KVStore;
import io.kestra.core.storages.kv.KVValueAndMetadata;
import io.kestra.core.utils.IdUtils;
@@ -24,15 +24,13 @@ class GetTest {
static final String TEST_KV_KEY = "test-key";
@Inject
RunContextFactory runContextFactory;
TestRunContextFactory runContextFactory;
@Test
void shouldGetGivenExistingKey() throws Exception {
// Given
String namespaceId = "io.kestra." + IdUtils.create();
RunContext runContext = this.runContextFactory.of(Map.of(
"flow", Map.of("namespace", namespaceId),
"inputs", Map.of(
RunContext runContext = this.runContextFactory.of(namespaceId, Map.of("inputs", Map.of(
"key", TEST_KV_KEY,
"namespace", namespaceId
)
@@ -62,8 +60,7 @@ class GetTest {
void shouldGetGivenExistingKeyWithInheritance() throws Exception {
// Given
String namespaceId = "io.kestra." + IdUtils.create();
RunContext runContext = this.runContextFactory.of(Map.of(
"flow", Map.of("namespace", namespaceId),
RunContext runContext = this.runContextFactory.of(namespaceId, Map.of(
"inputs", Map.of(
"key", TEST_KV_KEY
)
@@ -92,8 +89,7 @@ class GetTest {
void shouldGetGivenNonExistingKey() throws Exception {
// Given
String namespaceId = "io.kestra." + IdUtils.create();
RunContext runContext = this.runContextFactory.of(Map.of(
"flow", Map.of("namespace", namespaceId),
RunContext runContext = this.runContextFactory.of(namespaceId, Map.of(
"inputs", Map.of(
"key", TEST_KV_KEY,
"namespace", namespaceId


@@ -1,10 +1,10 @@
package io.kestra.plugin.core.kv;
import io.kestra.core.context.TestRunContextFactory;
import io.kestra.core.junit.annotations.KestraTest;
import io.kestra.core.models.kv.KVType;
import io.kestra.core.models.property.Property;
import io.kestra.core.runners.RunContext;
import io.kestra.core.runners.RunContextFactory;
import io.kestra.core.storages.StorageInterface;
import io.kestra.core.storages.kv.KVStore;
import io.kestra.core.storages.kv.KVStoreException;
@@ -31,7 +31,7 @@ class SetTest {
StorageInterface storageInterface;
@Inject
RunContextFactory runContextFactory;
TestRunContextFactory runContextFactory;
@Test
void shouldSetKVGivenNoNamespace() throws Exception {
@@ -65,8 +65,7 @@ class SetTest {
@Test
void shouldSetKVGivenSameNamespace() throws Exception {
// Given
RunContext runContext = this.runContextFactory.of(Map.of(
"flow", Map.of("namespace", "io.kestra.test"),
RunContext runContext = this.runContextFactory.of("io.kestra.test", Map.of(
"inputs", Map.of(
"key", TEST_KEY,
"value", "test-value"
@@ -93,8 +92,7 @@ class SetTest {
@Test
void shouldSetKVGivenChildNamespace() throws Exception {
// Given
RunContext runContext = this.runContextFactory.of(Map.of(
"flow", Map.of("namespace", "io.kestra.test"),
RunContext runContext = this.runContextFactory.of("io.kestra.test", Map.of(
"inputs", Map.of(
"key", TEST_KEY,
"value", "test-value"
@@ -120,8 +118,7 @@ class SetTest {
@Test
void shouldFailGivenNonExistingNamespace() {
// Given
RunContext runContext = this.runContextFactory.of(Map.of(
"flow", Map.of("namespace", "io.kestra.test"),
RunContext runContext = this.runContextFactory.of("io.kestra.test", Map.of(
"inputs", Map.of(
"key", TEST_KEY,
"value", "test-value"

Some files were not shown because too many files have changed in this diff.