Compare commits

..

61 Commits

Author SHA1 Message Date
github-actions[bot]
ccf9d9b303 chore(version): update to version '1.1.4' 2025-11-18 13:10:38 +00:00
Miloš Paunović
25dbdbd713 chore(core): improve handling of local and cdn-loaded fonts (#13020)
Related to https://github.com/kestra-io/kestra/pull/11448#issuecomment-3510236629.

Closes https://github.com/kestra-io/kestra/issues/13019.
2025-11-18 13:26:32 +01:00
Loïc Mathieu
d54477051f fix(execution): use jdbcRepository.findOne to be tolerant of multiple results
It uses findAny() under the covers, which does not throw if more than one result is returned (a minimal sketch of this pattern follows below).

Fixes #12943
2025-11-18 10:23:55 +01:00
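A minimal, self-contained sketch of the described tolerance, assuming a plain in-memory result list rather than the actual jdbcRepository API (class and record names here are hypothetical):

import java.util.List;
import java.util.Optional;

// Hypothetical illustration: a lookup that tolerates duplicate rows.
// Stream.findAny() returns some matching element (or empty) and never throws
// when more than one row matches, unlike a strict "exactly one result" fetch.
class TolerantLookup {
    record ExecutionRow(String id) {}

    static Optional<ExecutionRow> findOne(List<ExecutionRow> results) {
        return results.stream().findAny();
    }

    public static void main(String[] args) {
        List<ExecutionRow> duplicates = List.of(new ExecutionRow("exec-1"), new ExecutionRow("exec-1"));
        System.out.println(findOne(duplicates)); // prints Optional[ExecutionRow[id=exec-1]], no exception
    }
}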
Florian Hussonnois
54a63d1b04 fix(scheduler): mysql convert 'now' to UTC to avoid any offset error on next_execution_date
Fixed a previous commit so that the change only applies to MySQL (a sketch of the UTC normalization follows below).

Related-to: kestra-io/kestra-ee#5611
2025-11-18 09:59:12 +01:00
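A rough sketch of the idea rather than the actual scheduler code: compute "now" explicitly in UTC before comparing it with a UTC-stored next_execution_date, so a JVM or database-session timezone offset cannot shift the comparison:

import java.time.OffsetDateTime;
import java.time.ZoneOffset;

// Hypothetical illustration of comparing against a UTC-stored timestamp.
class UtcNowExample {
    static boolean isDue(OffsetDateTime nextExecutionDateUtc) {
        OffsetDateTime nowUtc = OffsetDateTime.now(ZoneOffset.UTC); // explicit UTC, no local offset applied
        return !nextExecutionDateUtc.isAfter(nowUtc);
    }

    public static void main(String[] args) {
        System.out.println(isDue(OffsetDateTime.now(ZoneOffset.UTC).minusMinutes(1))); // true
    }
}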
YannC
6f271e5694 feat: add annotation for multipart body on resumeExecution to have it inside SDK (#13003) 2025-11-18 09:38:28 +01:00
YannC
0a718dab30 feat: allows importFlows endpoint to be able to throw when having an invalid flow (#12995) 2025-11-18 09:38:28 +01:00
Piyush Bhaskar
ec522a6d44 fix(core): add resize observer for editor container (#12991) 2025-11-17 13:55:32 +05:30
Loïc Mathieu
ad73a46b0c fix(flow): flow trigger with both conditions and preconditions
When a flow has both a condition and a precondition, the condition was evaluated twice, which led to a double execution being triggered.

Fixes
2025-11-14 18:12:11 +01:00
Piyush Bhaskar
ca56559c49 refactor(core): remove i18n console error (#12958) 2025-11-14 16:34:13 +05:30
Piyush Bhaskar
ed739ec257 fix(core): make the pagination work for ns executions (#12965) 2025-11-14 16:33:43 +05:30
Piyush Bhaskar
9effef9fcd fix(core): show data on page when label checked from another page (#12944) 2025-11-14 14:26:43 +05:30
Miloš Paunović
ffc61b2482 chore(core): count only direct dependencies for badge number (#12818)
Closes https://github.com/kestra-io/kestra/issues/12817.
2025-11-14 08:17:15 +01:00
github-actions[bot]
fbbc0824ff chore(version): update to version '1.1.3' 2025-11-13 13:42:46 +00:00
Loïc Mathieu
842b8d604b fix(flow): don't URLEncode the fileName inside the Download task
Also provides a `fileName` property that, when set, overrides any filename derived from the Content-Disposition header, in case that filename causes issues (see the sketch after this entry).
2025-11-13 11:12:43 +01:00
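A hedged sketch of the resolution order described above; the helper and parameter names are illustrative, not the task's actual API:

// Hypothetical helper mirroring the described behaviour: an explicit fileName
// wins, otherwise the Content-Disposition filename is used verbatim (no URL encoding).
class FileNameResolver {
    static String resolve(String fileNameProperty, String contentDispositionFileName) {
        if (fileNameProperty != null && !fileNameProperty.isBlank()) {
            return fileNameProperty;
        }
        return contentDispositionFileName;
    }

    public static void main(String[] args) {
        System.out.println(resolve(null, "my report.csv"));        // my report.csv
        System.out.println(resolve("data.csv", "my report.csv"));  // data.csv
    }
}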
Loïc Mathieu
bd5ac06c5b fix(system): consume the trigger queue so it is properly cleaned
Fixes https://github.com/kestra-io/kestra/issues/11671
2025-11-13 11:12:34 +01:00
Barthélémy Ledoux
335fe1e88c fix(executions): simplify LabelInput usage in execution labels dialog (#12921)
Co-authored-by: Piyush Bhaskar <impiyush0012@gmail.com>
2025-11-13 14:38:19 +05:30
Piyush Bhaskar
5c52ab300a fix(flow): enhance error handling and validation for flow save operations (#12926) 2025-11-13 14:09:44 +05:30
Miloš Paunović
756069f1a6 fix(core): amend paths for consuming custom blueprints (#12925)
Closes https://github.com/kestra-io/kestra-ee/issues/5814.
2025-11-13 09:34:40 +01:00
Piyush Bhaskar
faba958f08 fix(core): adjust overflow behavior (#12879) 2025-11-13 13:59:01 +05:30
Piyush Bhaskar
a772a61d62 fix(core): update toast to use util (#12924) 2025-11-13 13:57:07 +05:30
Loïc Mathieu
f2cb79cb98 fix(system): access log configuration
Due to a change in the configuration file, the access log configuration ended up in the wrong sub-document.

Fixes https://github.com/kestra-io/kestra-ee/issues/5670
2025-11-12 15:09:26 +01:00
Piyush Bhaskar
9ea0b1cebb fix(filters): conditionally include namespace/ flowId key based on route (#12840) 2025-11-12 13:58:02 +05:30
Piyush Bhaskar
867dc20d47 fix(core): handle potential null values for children (#12842) 2025-11-12 12:48:38 +05:30
Piyush Bhaskar
c669759afb fix(secrets): NS update for a secret should be disabled properly with correct prop (#12834) 2025-11-12 12:28:20 +05:30
Barthélémy Ledoux
7e3cd8a2cb fix: run validation when editing a dashboard (#12827) 2025-11-10 18:35:45 +01:00
YannC
f203c5f43a fix: where prop can be null (#12828) 2025-11-10 18:35:45 +01:00
github-actions[bot]
f4e90cc540 chore(version): update to version '1.1.2' 2025-11-10 14:36:53 +00:00
YannC
ce0fd58c94 fix: make sure datafilter is validated (#12822) 2025-11-10 13:29:59 +01:00
Loïc Mathieu
f1b950941c fix(executions): allow reading from subflow even if we have a parent
This fixes an issue where you could not read a file from a Subflow if the execution had itself been triggered by another Subflow task.
It was caused by the trigger check being too aggressive: when the check did not pass, it failed instead of returning false, so the other check was never processed (an illustrative sketch follows below).

Fixes #12629
2025-11-10 13:26:44 +01:00
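An illustrative sketch of the control-flow change, with hypothetical names: each access check may pass or return false, and a check that returns false no longer prevents the remaining checks from running (whereas a check that throws would):

import java.net.URI;
import java.util.List;
import java.util.function.BiPredicate;

// Hypothetical access-check chain: access is granted if any check passes.
class FileAccessChecks {
    static boolean canRead(URI file, String executionId, List<BiPredicate<URI, String>> checks) {
        return checks.stream().anyMatch(check -> check.test(file, executionId));
    }

    public static void main(String[] args) {
        BiPredicate<URI, String> triggerCheck = (file, id) -> false; // returns false instead of failing hard
        BiPredicate<URI, String> parentCheck  = (file, id) -> true;  // the other check can now be evaluated
        boolean allowed = canRead(URI.create("kestra:///ns/flow/file.txt"), "exec-1",
            List.of(triggerCheck, parentCheck));
        System.out.println(allowed); // true
    }
}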
Piyush Bhaskar
559f3f2634 fix(core): bring the usage of restore url (#12762)
Co-authored-by: Bart Ledoux <bledoux@kestra.io>
2025-11-10 16:06:06 +05:30
YannC
9bc65b84f1 fix: when removing a queued execution, directly delete instead of fetching then delete to reduce deadlock (#12789) 2025-11-10 10:32:48 +01:00
Piyush Bhaskar
223b137381 fix(core): add defaults for component (#12814) 2025-11-10 15:02:20 +05:30
Piyush Bhaskar
80d1df6eeb fix(core): bulk deletion of executions (#12813) 2025-11-10 14:05:16 +05:30
Piyush Bhaskar
a87e7f3b8d fix(core): filter the minichart by duration from api which is 30D (#12740) 2025-11-10 13:58:33 +05:30
Loïc Mathieu
710862ef33 fix(executions): don't urlencode files as they would already be inside the storage 2025-11-10 09:28:04 +01:00
Miloš Paunović
d74f535ea1 chore(flows): amend flow export filename to include namespace and id parameters (#12800)
Closes https://github.com/kestra-io/kestra/issues/12790.
2025-11-07 13:58:16 +01:00
Piyush Bhaskar
1673f24356 fix(core): bring dashboard selector in navbar and also keep the selected dashboard route specific (#12703)
Co-authored-by: Bart Ledoux <bledoux@kestra.io>
2025-11-07 16:27:53 +05:30
brian-mulier-p
2ad90625b8 fix(tests): bump amount of threads on tests (#12777) 2025-11-07 09:44:19 +01:00
Piyush Bhaskar
e77b80a1a8 refactor(core): properly do trigger filter (#12780) 2025-11-07 11:35:59 +05:30
Ludovic DEHON
6223b1f672 feat(cli): add --flow-path on executor to preload some flows
close kestra-io/kestra-ee#5721
2025-11-06 19:26:17 +01:00
github-actions[bot]
23329f4d48 chore(version): update to version '1.1.1' 2025-11-06 17:18:11 +00:00
Loïc Mathieu
ed60cb6670 fix(core): relax assertion on ConcurrencyLimitServiceTest.findById() 2025-11-06 18:16:32 +01:00
brian-mulier-p
f6306883b4 fix(kv): all types properly handled and avoid trimming string KV values (#12765)
closes https://github.com/kestra-io/kestra-ee/issues/5718
2025-11-06 15:47:44 +01:00
Loïc Mathieu
89433dc04c fix(system): killing a paused flow should kill the Pause task attempt
Fixes #12421
2025-11-06 15:33:56 +01:00
Loïc Mathieu
4837408c59 chore(test): try to un-flaky ConcurrencyLimitServiceTest.findById().
By making sure the unqueueExecution() test waits for the unqueued execution to end, to avoid any potential races (a sketch of the waiting pattern follows below).
2025-11-06 15:33:56 +01:00
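A generic sketch of the de-flaking approach, written with Awaitility instead of whatever helper the test actually uses; the point is only to block until the unqueued execution ends before asserting:

import static org.awaitility.Awaitility.await;

import java.time.Duration;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical illustration: wait for the execution to reach a terminal state
// before the test continues, so later assertions do not race against it.
class AwaitTerminalStateExample {
    public static void main(String[] args) {
        AtomicBoolean executionEnded = new AtomicBoolean(false);
        new Thread(() -> { sleepQuietly(200); executionEnded.set(true); }).start();

        await().atMost(Duration.ofSeconds(5)).until(executionEnded::get);
        System.out.println("execution ended, safe to assert");
    }

    private static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { Thread.currentThread().interrupt(); }
    }
}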
Miloš Paunović
5a8c36caa5 fix(variables): properly send kv value when the type is json (#12759)
Closes https://github.com/kestra-io/kestra/issues/12739.
2025-11-06 15:33:56 +01:00
Piyush Bhaskar
a2335abc0c fix(core): make the interval in triggers work (#12764) 2025-11-06 19:39:10 +05:30
Piyush Bhaskar
310a7bbbe9 Revert "fix(core): apply timeRange filter in triggers (#12721)" 2025-11-06 18:56:37 +05:30
Jay Balwani
162feaf38c Fix(UI)/kv type boolean (#12643)
Co-authored-by: Miloš Paunović <paun992@hotmail.com>
Co-authored-by: Piyush Bhaskar <impiyush0012@gmail.com>
2025-11-06 16:37:01 +05:30
Piyush Bhaskar
94050be49c fix(core): apply timeRange filter in triggers (#12721) 2025-11-06 16:29:47 +05:30
brian-mulier-p
848a5ac9d7 fix(cli): metadata commands weren't working with external storages (#12743)
closes #12713
2025-11-06 11:47:59 +01:00
Barthélémy Ledoux
9ac7a9ce9a fix: responsive dashboard grid (#12608) 2025-11-06 11:03:26 +01:00
Piyush Bhaskar
c42838f3e1 feat(ui): persist scroll across No‑code, editor tabs, and docs via Pinia view-state and scroll-memory (#12358) 2025-11-06 11:53:07 +05:30
Irfan
c499d62b63 fix(core): going back from plugin doc will take to plugins home (#12621)
Co-authored-by: iitzIrFan <irfanlhawk@gmail.com>
Co-authored-by: Piyush Bhaskar <impiyush0012@gmail.com>
2025-11-06 11:50:34 +05:30
Piyush Bhaskar
8fbc62e12c fix(core): proper deletion of single and multi ns files (#12618) 2025-11-06 11:49:51 +05:30
Vipin Chandra Sao
ae143f29f4 fix(ui): prevent "Invalid date" display in Gantt view for executions … (#12605)
* fix(ui): prevent "Invalid date" display in Gantt view for executions that never started

- Added defensive checks wherever histories arrays might be empty
- Now renders blank or safe values instead of "Invalid date"
- Improved comments for maintainability and future debugging
- Addresses issue #12583

* revert the changes

* fix: remove the card when invalid date

---------

Co-authored-by: Piyush Bhaskar <impiyush0012@gmail.com>
Co-authored-by: Piyush Bhaskar <102078527+Piyush-r-bhaskar@users.noreply.github.com>
2025-11-06 11:49:02 +05:30
Piyush Bhaskar
e4a11fc9ce fix(core): remove double info icon (#12623) 2025-11-06 11:48:36 +05:30
Piyush Bhaskar
ebacfc70b9 fix(core): use proper option after P30D in misc (#12624) 2025-11-06 11:47:57 +05:30
Loïc Mathieu
5bf67180a3 fix(system): trigger an execution once per condition on flow triggers
Fixes #12560
2025-11-05 15:31:41 +01:00
Roman Acevedo
1e670b5e7e test(kv): plain text header is sent now 2025-11-04 15:17:02 +01:00
brian.mulier
0dacad5ee1 chore(version): upgrade to v1.1.0 2025-11-04 13:58:32 +01:00
480 changed files with 13931 additions and 7313 deletions

View File

@@ -7,6 +7,7 @@ import io.kestra.cli.commands.namespaces.NamespaceCommand;
import io.kestra.cli.commands.plugins.PluginCommand;
import io.kestra.cli.commands.servers.ServerCommand;
import io.kestra.cli.commands.sys.SysCommand;
import io.kestra.cli.commands.templates.TemplateCommand;
import io.micronaut.configuration.picocli.MicronautFactory;
import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.context.ApplicationContext;
@@ -38,6 +39,7 @@ import java.util.concurrent.Callable;
PluginCommand.class,
ServerCommand.class,
FlowCommand.class,
TemplateCommand.class,
SysCommand.class,
ConfigCommand.class,
NamespaceCommand.class,

View File

@@ -4,7 +4,6 @@ import io.kestra.core.runners.*;
import io.kestra.core.server.Service;
import io.kestra.core.utils.Await;
import io.kestra.core.utils.ExecutorsUtils;
import io.kestra.executor.DefaultExecutor;
import io.kestra.worker.DefaultWorker;
import io.micronaut.context.ApplicationContext;
import io.micronaut.context.annotation.Value;
@@ -50,7 +49,7 @@ public class StandAloneRunner implements Runnable, AutoCloseable {
running.set(true);
poolExecutor = executorsUtils.cachedThreadPool("standalone-runner");
poolExecutor.execute(applicationContext.getBean(DefaultExecutor.class));
poolExecutor.execute(applicationContext.getBean(ExecutorInterface.class));
if (workerEnabled) {
// FIXME: For backward-compatibility with Kestra 0.15.x and earliest we still used UUID for Worker ID instead of IdUtils

View File

@@ -0,0 +1,36 @@
package io.kestra.cli.commands.flows;
import io.kestra.cli.AbstractCommand;
import io.kestra.core.models.flows.Flow;
import io.kestra.core.models.validations.ModelValidator;
import io.kestra.core.serializers.YamlParser;
import jakarta.inject.Inject;
import picocli.CommandLine;
import java.nio.file.Files;
import java.nio.file.Path;
@CommandLine.Command(
name = "expand",
description = "Deprecated - expand a flow"
)
@Deprecated
public class FlowExpandCommand extends AbstractCommand {
@CommandLine.Parameters(index = "0", description = "The flow file to expand")
private Path file;
@Inject
private ModelValidator modelValidator;
@Override
public Integer call() throws Exception {
super.call();
stdErr("Warning, this functionality is deprecated and will be removed at some point.");
String content = IncludeHelperExpander.expand(Files.readString(file), file.getParent());
Flow flow = YamlParser.parse(content, Flow.class);
modelValidator.validate(flow);
stdOut(content);
return 0;
}
}

View File

@@ -21,8 +21,6 @@ import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import static io.kestra.core.utils.Rethrow.throwFunction;
@CommandLine.Command(
name = "updates",
description = "Create or update flows from a folder, and optionally delete the ones not present",
@@ -43,6 +41,7 @@ public class FlowUpdatesCommand extends AbstractApiCommand {
@Inject
private TenantIdSelectorService tenantIdSelectorService;
@SuppressWarnings("deprecation")
@Override
public Integer call() throws Exception {
super.call();
@@ -51,7 +50,13 @@ public class FlowUpdatesCommand extends AbstractApiCommand {
List<String> flows = files
.filter(Files::isRegularFile)
.filter(YamlParser::isValidExtension)
.map(throwFunction(path -> Files.readString(path, Charset.defaultCharset())))
.map(path -> {
try {
return IncludeHelperExpander.expand(Files.readString(path, Charset.defaultCharset()), path.getParent());
} catch (IOException e) {
throw new RuntimeException(e);
}
})
.toList();
String body = "";

View File

@@ -0,0 +1,40 @@
package io.kestra.cli.commands.flows;
import com.google.common.io.Files;
import lombok.SneakyThrows;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
@Deprecated
public abstract class IncludeHelperExpander {
public static String expand(String value, Path directory) throws IOException {
return value.lines()
.map(line -> line.contains("[[>") && line.contains("]]") ? expandLine(line, directory) : line)
.collect(Collectors.joining("\n"));
}
@SneakyThrows
private static String expandLine(String line, Path directory) {
String prefix = line.substring(0, line.indexOf("[[>"));
String suffix = line.substring(line.indexOf("]]") + 2, line.length());
String file = line.substring(line.indexOf("[[>") + 3 , line.indexOf("]]")).strip();
Path includePath = directory.resolve(file);
List<String> include = Files.readLines(includePath.toFile(), Charset.defaultCharset());
// handle single line directly with the suffix (should be between quotes or double-quotes
if(include.size() == 1) {
String singleInclude = include.getFirst();
return prefix + singleInclude + suffix;
}
// multi-line will be expanded with the prefix but no suffix
return include.stream()
.map(includeLine -> prefix + includeLine)
.collect(Collectors.joining("\n"));
}
}
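A usage sketch based on the signature above. expand() replaces each `[[> file]]` marker with the content of the referenced file, resolved relative to the given directory; the path below matches the test resource used further down in this comparison:

import java.nio.file.Files;
import java.nio.file.Path;

// Usage sketch for the expander shown above.
class ExpandUsage {
    public static void main(String[] args) throws Exception {
        Path flowFile = Path.of("src/test/resources/helper/include.yaml");
        String expanded = IncludeHelperExpander.expand(
            Files.readString(flowFile),
            flowFile.getParent()
        );
        System.out.println(expanded); // the [[> ...]] markers are now inlined
    }
}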

View File

@@ -2,6 +2,7 @@ package io.kestra.cli.commands.flows.namespaces;
import io.kestra.cli.AbstractValidateCommand;
import io.kestra.cli.commands.AbstractServiceNamespaceUpdateCommand;
import io.kestra.cli.commands.flows.IncludeHelperExpander;
import io.kestra.cli.services.TenantIdSelectorService;
import io.kestra.core.serializers.YamlParser;
import io.micronaut.core.type.Argument;
@@ -20,8 +21,6 @@ import java.nio.charset.Charset;
import java.nio.file.Files;
import java.util.List;
import static io.kestra.core.utils.Rethrow.throwFunction;
@CommandLine.Command(
name = "update",
description = "Update flows in namespace",
@@ -45,7 +44,13 @@ public class FlowNamespaceUpdateCommand extends AbstractServiceNamespaceUpdateCo
List<String> flows = files
.filter(Files::isRegularFile)
.filter(YamlParser::isValidExtension)
.map(throwFunction(path -> Files.readString(path, Charset.defaultCharset())))
.map(path -> {
try {
return IncludeHelperExpander.expand(Files.readString(path, Charset.defaultCharset()), path.getParent());
} catch (IOException e) {
throw new RuntimeException(e);
}
})
.toList();
String body = "";

View File

@@ -2,6 +2,7 @@ package io.kestra.cli.commands.migrations.metadata;
import io.kestra.cli.AbstractCommand;
import jakarta.inject.Inject;
import jakarta.inject.Provider;
import lombok.extern.slf4j.Slf4j;
import picocli.CommandLine;
@@ -12,13 +13,13 @@ import picocli.CommandLine;
@Slf4j
public class KvMetadataMigrationCommand extends AbstractCommand {
@Inject
private MetadataMigrationService metadataMigrationService;
private Provider<MetadataMigrationService> metadataMigrationServiceProvider;
@Override
public Integer call() throws Exception {
super.call();
try {
metadataMigrationService.kvMigration();
metadataMigrationServiceProvider.get().kvMigration();
} catch (Exception e) {
System.err.println("❌ KV Metadata migration failed: " + e.getMessage());
e.printStackTrace();

View File

@@ -2,6 +2,7 @@ package io.kestra.cli.commands.migrations.metadata;
import io.kestra.cli.AbstractCommand;
import jakarta.inject.Inject;
import jakarta.inject.Provider;
import lombok.extern.slf4j.Slf4j;
import picocli.CommandLine;
@@ -12,13 +13,13 @@ import picocli.CommandLine;
@Slf4j
public class SecretsMetadataMigrationCommand extends AbstractCommand {
@Inject
private MetadataMigrationService metadataMigrationService;
private Provider<MetadataMigrationService> metadataMigrationServiceProvider;
@Override
public Integer call() throws Exception {
super.call();
try {
metadataMigrationService.secretMigration();
metadataMigrationServiceProvider.get().secretMigration();
} catch (Exception e) {
System.err.println("❌ Secrets Metadata migration failed: " + e.getMessage());
e.printStackTrace();

View File

@@ -1,8 +1,10 @@
package io.kestra.cli.commands.servers;
import com.google.common.collect.ImmutableMap;
import io.kestra.cli.services.TenantIdSelectorService;
import io.kestra.core.models.ServerType;
import io.kestra.core.runners.Executor;
import io.kestra.core.repositories.LocalFlowRepositoryLoader;
import io.kestra.core.runners.ExecutorInterface;
import io.kestra.core.services.SkipExecutionService;
import io.kestra.core.services.StartExecutorService;
import io.kestra.core.utils.Await;
@@ -10,6 +12,8 @@ import io.micronaut.context.ApplicationContext;
import jakarta.inject.Inject;
import picocli.CommandLine;
import java.io.File;
import java.io.IOException;
import java.util.Collections;
import java.util.List;
import java.util.Map;
@@ -19,6 +23,9 @@ import java.util.Map;
description = "Start the Kestra executor"
)
public class ExecutorCommand extends AbstractServerCommand {
@CommandLine.Spec
CommandLine.Model.CommandSpec spec;
@Inject
private ApplicationContext applicationContext;
@@ -28,22 +35,28 @@ public class ExecutorCommand extends AbstractServerCommand {
@Inject
private StartExecutorService startExecutorService;
@CommandLine.Option(names = {"--skip-executions"}, split=",", description = "The list of execution identifiers to skip, separated by a coma; for troubleshooting purpose only")
@CommandLine.Option(names = {"-f", "--flow-path"}, description = "Tenant identifier required to load flows from the specified path")
private File flowPath;
@CommandLine.Option(names = "--tenant", description = "Tenant identifier, Required to load flows from path")
private String tenantId;
@CommandLine.Option(names = {"--skip-executions"}, split=",", description = "List of execution IDs to skip, separated by commas; for troubleshooting only")
private List<String> skipExecutions = Collections.emptyList();
@CommandLine.Option(names = {"--skip-flows"}, split=",", description = "The list of flow identifiers (tenant|namespace|flowId) to skip, separated by a coma; for troubleshooting purpose only")
@CommandLine.Option(names = {"--skip-flows"}, split=",", description = "List of flow identifiers (tenant|namespace|flowId) to skip, separated by a coma; for troubleshooting only")
private List<String> skipFlows = Collections.emptyList();
@CommandLine.Option(names = {"--skip-namespaces"}, split=",", description = "The list of namespace identifiers (tenant|namespace) to skip, separated by a coma; for troubleshooting purpose only")
@CommandLine.Option(names = {"--skip-namespaces"}, split=",", description = "List of namespace identifiers (tenant|namespace) to skip, separated by a coma; for troubleshooting only")
private List<String> skipNamespaces = Collections.emptyList();
@CommandLine.Option(names = {"--skip-tenants"}, split=",", description = "The list of tenants to skip, separated by a coma; for troubleshooting purpose only")
@CommandLine.Option(names = {"--skip-tenants"}, split=",", description = "List of tenants to skip, separated by a coma; for troubleshooting only")
private List<String> skipTenants = Collections.emptyList();
@CommandLine.Option(names = {"--start-executors"}, split=",", description = "The list of Kafka Stream executors to start, separated by a command. Use it only with the Kafka queue, for debugging purpose.")
@CommandLine.Option(names = {"--start-executors"}, split=",", description = "List of Kafka Stream executors to start, separated by a command. Use it only with the Kafka queue; for debugging only")
private List<String> startExecutors = Collections.emptyList();
@CommandLine.Option(names = {"--not-start-executors"}, split=",", description = "The list of Kafka Stream executors to not start, separated by a command. Use it only with the Kafka queue, for debugging purpose.")
@CommandLine.Option(names = {"--not-start-executors"}, split=",", description = "Lst of Kafka Stream executors to not start, separated by a command. Use it only with the Kafka queue; for debugging only")
private List<String> notStartExecutors = Collections.emptyList();
@SuppressWarnings("unused")
@@ -64,7 +77,17 @@ public class ExecutorCommand extends AbstractServerCommand {
super.call();
Executor executorService = applicationContext.getBean(Executor.class);
if (flowPath != null) {
try {
LocalFlowRepositoryLoader localFlowRepositoryLoader = applicationContext.getBean(LocalFlowRepositoryLoader.class);
TenantIdSelectorService tenantIdSelectorService = applicationContext.getBean(TenantIdSelectorService.class);
localFlowRepositoryLoader.load(tenantIdSelectorService.getTenantId(this.tenantId), this.flowPath);
} catch (IOException e) {
throw new CommandLine.ParameterException(this.spec.commandLine(), "Invalid flow path", e);
}
}
ExecutorInterface executorService = applicationContext.getBean(ExecutorInterface.class);
executorService.run();
Await.until(() -> !this.applicationContext.isRunning());

View File

@@ -23,7 +23,7 @@ public class IndexerCommand extends AbstractServerCommand {
@Inject
private SkipExecutionService skipExecutionService;
@CommandLine.Option(names = {"--skip-indexer-records"}, split=",", description = "a list of indexer record keys, separated by a coma; for troubleshooting purpose only")
@CommandLine.Option(names = {"--skip-indexer-records"}, split=",", description = "a list of indexer record keys, separated by a coma; for troubleshooting only")
private List<String> skipIndexerRecords = Collections.emptyList();
@SuppressWarnings("unused")

View File

@@ -42,7 +42,7 @@ public class StandAloneCommand extends AbstractServerCommand {
@Nullable
private FileChangedEventListener fileWatcher;
@CommandLine.Option(names = {"-f", "--flow-path"}, description = "the flow path containing flow to inject at startup (when running with a memory flow repository)")
@CommandLine.Option(names = {"-f", "--flow-path"}, description = "Tenant identifier required to load flows from the specified path")
private File flowPath;
@CommandLine.Option(names = "--tenant", description = "Tenant identifier, Required to load flows from path with the enterprise edition")
@@ -51,19 +51,19 @@ public class StandAloneCommand extends AbstractServerCommand {
@CommandLine.Option(names = {"--worker-thread"}, description = "the number of worker threads, defaults to eight times the number of available processors. Set it to 0 to avoid starting a worker.")
private int workerThread = defaultWorkerThread();
@CommandLine.Option(names = {"--skip-executions"}, split=",", description = "a list of execution identifiers to skip, separated by a coma; for troubleshooting purpose only")
@CommandLine.Option(names = {"--skip-executions"}, split=",", description = "a list of execution identifiers to skip, separated by a coma; for troubleshooting only")
private List<String> skipExecutions = Collections.emptyList();
@CommandLine.Option(names = {"--skip-flows"}, split=",", description = "a list of flow identifiers (namespace.flowId) to skip, separated by a coma; for troubleshooting purpose only")
@CommandLine.Option(names = {"--skip-flows"}, split=",", description = "a list of flow identifiers (namespace.flowId) to skip, separated by a coma; for troubleshooting only")
private List<String> skipFlows = Collections.emptyList();
@CommandLine.Option(names = {"--skip-namespaces"}, split=",", description = "a list of namespace identifiers (tenant|namespace) to skip, separated by a coma; for troubleshooting purpose only")
@CommandLine.Option(names = {"--skip-namespaces"}, split=",", description = "a list of namespace identifiers (tenant|namespace) to skip, separated by a coma; for troubleshooting only")
private List<String> skipNamespaces = Collections.emptyList();
@CommandLine.Option(names = {"--skip-tenants"}, split=",", description = "a list of tenants to skip, separated by a coma; for troubleshooting purpose only")
@CommandLine.Option(names = {"--skip-tenants"}, split=",", description = "a list of tenants to skip, separated by a coma; for troubleshooting only")
private List<String> skipTenants = Collections.emptyList();
@CommandLine.Option(names = {"--skip-indexer-records"}, split=",", description = "a list of indexer record keys, separated by a coma; for troubleshooting purpose only")
@CommandLine.Option(names = {"--skip-indexer-records"}, split=",", description = "a list of indexer record keys, separated by a coma; for troubleshooting only")
private List<String> skipIndexerRecords = Collections.emptyList();
@CommandLine.Option(names = {"--no-tutorials"}, description = "Flag to disable auto-loading of tutorial flows.")

View File

@@ -40,7 +40,7 @@ public class WebServerCommand extends AbstractServerCommand {
@Option(names = {"--no-indexer"}, description = "Flag to disable starting an embedded indexer.")
private boolean indexerDisabled = false;
@CommandLine.Option(names = {"--skip-indexer-records"}, split=",", description = "a list of indexer record keys, separated by a coma; for troubleshooting purpose only")
@CommandLine.Option(names = {"--skip-indexer-records"}, split=",", description = "a list of indexer record keys, separated by a coma; for troubleshooting only")
private List<String> skipIndexerRecords = Collections.emptyList();
@Override

View File

@@ -7,7 +7,7 @@ import io.kestra.core.queues.QueueFactoryInterface;
import io.kestra.core.queues.QueueInterface;
import io.kestra.core.runners.ExecutionQueued;
import io.kestra.core.services.ConcurrencyLimitService;
import io.kestra.jdbc.runner.AbstractJdbcExecutionQueuedStateStore;
import io.kestra.jdbc.runner.AbstractJdbcExecutionQueuedStorage;
import io.micronaut.context.ApplicationContext;
import jakarta.inject.Inject;
import jakarta.inject.Named;
@@ -47,7 +47,7 @@ public class SubmitQueuedCommand extends AbstractCommand {
return 1;
}
else if (queueType.get().equals("postgres") || queueType.get().equals("mysql") || queueType.get().equals("h2")) {
var executionQueuedStorage = applicationContext.getBean(AbstractJdbcExecutionQueuedStateStore.class);
var executionQueuedStorage = applicationContext.getBean(AbstractJdbcExecutionQueuedStorage.class);
var concurrencyLimitService = applicationContext.getBean(ConcurrencyLimitService.class);
for (ExecutionQueued queued : executionQueuedStorage.getAllForAllTenants()) {

View File

@@ -1,6 +1,7 @@
package io.kestra.cli.commands.sys;
import io.kestra.cli.commands.sys.database.DatabaseCommand;
import io.kestra.cli.commands.sys.statestore.StateStoreCommand;
import io.micronaut.configuration.picocli.PicocliRunner;
import lombok.extern.slf4j.Slf4j;
import io.kestra.cli.AbstractCommand;
@@ -15,6 +16,7 @@ import picocli.CommandLine;
ReindexCommand.class,
DatabaseCommand.class,
SubmitQueuedCommand.class,
StateStoreCommand.class
}
)
@Slf4j

View File

@@ -0,0 +1,27 @@
package io.kestra.cli.commands.sys.statestore;
import io.kestra.cli.AbstractCommand;
import io.kestra.cli.App;
import io.micronaut.configuration.picocli.PicocliRunner;
import lombok.SneakyThrows;
import picocli.CommandLine;
@CommandLine.Command(
name = "state-store",
description = "Manage Kestra State Store",
mixinStandardHelpOptions = true,
subcommands = {
StateStoreMigrateCommand.class,
}
)
public class StateStoreCommand extends AbstractCommand {
@SneakyThrows
@Override
public Integer call() throws Exception {
super.call();
PicocliRunner.call(App.class, "sys", "state-store", "--help");
return 0;
}
}

View File

@@ -0,0 +1,81 @@
package io.kestra.cli.commands.sys.statestore;
import io.kestra.cli.AbstractCommand;
import io.kestra.core.models.flows.Flow;
import io.kestra.core.repositories.FlowRepositoryInterface;
import io.kestra.core.runners.RunContext;
import io.kestra.core.runners.RunContextFactory;
import io.kestra.core.storages.StateStore;
import io.kestra.core.storages.StorageInterface;
import io.kestra.core.utils.Slugify;
import io.micronaut.context.ApplicationContext;
import jakarta.inject.Inject;
import lombok.extern.slf4j.Slf4j;
import picocli.CommandLine;
import java.io.IOException;
import java.io.InputStream;
import java.net.URI;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Stream;
@CommandLine.Command(
name = "migrate",
description = "Migrate old state store files to use the new KV Store implementation.",
mixinStandardHelpOptions = true
)
@Slf4j
public class StateStoreMigrateCommand extends AbstractCommand {
@Inject
private ApplicationContext applicationContext;
@Override
public Integer call() throws Exception {
super.call();
FlowRepositoryInterface flowRepository = this.applicationContext.getBean(FlowRepositoryInterface.class);
StorageInterface storageInterface = this.applicationContext.getBean(StorageInterface.class);
RunContextFactory runContextFactory = this.applicationContext.getBean(RunContextFactory.class);
flowRepository.findAllForAllTenants().stream().map(flow -> Map.entry(flow, List.of(
URI.create("/" + flow.getNamespace().replace(".", "/") + "/" + Slugify.of(flow.getId()) + "/states"),
URI.create("/" + flow.getNamespace().replace(".", "/") + "/states")
))).map(potentialStateStoreUrisForAFlow -> Map.entry(potentialStateStoreUrisForAFlow.getKey(), potentialStateStoreUrisForAFlow.getValue().stream().flatMap(uri -> {
try {
return storageInterface.allByPrefix(potentialStateStoreUrisForAFlow.getKey().getTenantId(), potentialStateStoreUrisForAFlow.getKey().getNamespace(), uri, false).stream();
} catch (IOException e) {
return Stream.empty();
}
}).toList())).forEach(stateStoreFileUrisForAFlow -> stateStoreFileUrisForAFlow.getValue().forEach(stateStoreFileUri -> {
Flow flow = stateStoreFileUrisForAFlow.getKey();
String[] flowQualifierWithStateQualifiers = stateStoreFileUri.getPath().split("/states/");
String[] statesUriPart = flowQualifierWithStateQualifiers[1].split("/");
String stateName = statesUriPart[0];
String taskRunValue = statesUriPart.length > 2 ? statesUriPart[1] : null;
String stateSubName = statesUriPart[statesUriPart.length - 1];
boolean flowScoped = flowQualifierWithStateQualifiers[0].endsWith("/" + flow.getId());
StateStore stateStore = new StateStore(runContext(runContextFactory, flow), false);
try (InputStream is = storageInterface.get(flow.getTenantId(), flow.getNamespace(), stateStoreFileUri)) {
stateStore.putState(flowScoped, stateName, stateSubName, taskRunValue, is.readAllBytes());
storageInterface.delete(flow.getTenantId(), flow.getNamespace(), stateStoreFileUri);
} catch (IOException e) {
throw new RuntimeException(e);
}
}));
stdOut("Successfully ran the state-store migration.");
return 0;
}
private RunContext runContext(RunContextFactory runContextFactory, Flow flow) {
Map<String, String> flowVariables = new HashMap<>();
flowVariables.put("tenantId", flow.getTenantId());
flowVariables.put("id", flow.getId());
flowVariables.put("namespace", flow.getNamespace());
return runContextFactory.of(flow, Map.of("flow", flowVariables));
}
}
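A small worked example of the "/states/" path parsing performed above, applied to an illustrative old state-store URI; it re-runs the same split logic to show which segments become the state name, task-run value hash, and sub-name:

// Worked example of the split logic used in the command above.
class StateUriParseExample {
    public static void main(String[] args) {
        String path = "/io/kestra/tests/a-flow/states/my-state/8f14e45f/sub-name"; // illustrative URI path

        String[] flowQualifierWithStateQualifiers = path.split("/states/");
        String[] statesUriPart = flowQualifierWithStateQualifiers[1].split("/");

        String stateName = statesUriPart[0];                                      // "my-state"
        String taskRunValue = statesUriPart.length > 2 ? statesUriPart[1] : null; // "8f14e45f"
        String stateSubName = statesUriPart[statesUriPart.length - 1];            // "sub-name"
        boolean flowScoped = flowQualifierWithStateQualifiers[0].endsWith("/a-flow"); // true

        System.out.println(stateName + " / " + taskRunValue + " / " + stateSubName + " / " + flowScoped);
    }
}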

View File

@@ -0,0 +1,34 @@
package io.kestra.cli.commands.templates;
import io.kestra.cli.AbstractCommand;
import io.kestra.cli.App;
import io.kestra.cli.commands.templates.namespaces.TemplateNamespaceCommand;
import io.kestra.core.models.templates.TemplateEnabled;
import io.micronaut.configuration.picocli.PicocliRunner;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import picocli.CommandLine;
@CommandLine.Command(
name = "template",
description = "Manage templates",
mixinStandardHelpOptions = true,
subcommands = {
TemplateNamespaceCommand.class,
TemplateValidateCommand.class,
TemplateExportCommand.class,
}
)
@Slf4j
@TemplateEnabled
public class TemplateCommand extends AbstractCommand {
@SneakyThrows
@Override
public Integer call() throws Exception {
super.call();
PicocliRunner.call(App.class, "template", "--help");
return 0;
}
}

View File

@@ -0,0 +1,61 @@
package io.kestra.cli.commands.templates;
import io.kestra.cli.AbstractApiCommand;
import io.kestra.cli.AbstractValidateCommand;
import io.kestra.cli.services.TenantIdSelectorService;
import io.kestra.core.models.templates.TemplateEnabled;
import io.micronaut.http.HttpRequest;
import io.micronaut.http.HttpResponse;
import io.micronaut.http.MediaType;
import io.micronaut.http.MutableHttpRequest;
import io.micronaut.http.client.exceptions.HttpClientResponseException;
import io.micronaut.http.client.netty.DefaultHttpClient;
import jakarta.inject.Inject;
import lombok.extern.slf4j.Slf4j;
import picocli.CommandLine;
import java.nio.file.Files;
import java.nio.file.Path;
@CommandLine.Command(
name = "export",
description = "Export templates to a ZIP file",
mixinStandardHelpOptions = true
)
@Slf4j
@TemplateEnabled
public class TemplateExportCommand extends AbstractApiCommand {
private static final String DEFAULT_FILE_NAME = "templates.zip";
@Inject
private TenantIdSelectorService tenantService;
@CommandLine.Option(names = {"--namespace"}, description = "The namespace of templates to export")
public String namespace;
@CommandLine.Parameters(index = "0", description = "The directory to export the file to")
public Path directory;
@Override
public Integer call() throws Exception {
super.call();
try(DefaultHttpClient client = client()) {
MutableHttpRequest<Object> request = HttpRequest
.GET(apiUri("/templates/export/by-query", tenantService.getTenantId(tenantId)) + (namespace != null ? "?namespace=" + namespace : ""))
.accept(MediaType.APPLICATION_OCTET_STREAM);
HttpResponse<byte[]> response = client.toBlocking().exchange(this.requestOptions(request), byte[].class);
Path zipFile = Path.of(directory.toString(), DEFAULT_FILE_NAME);
zipFile.toFile().createNewFile();
Files.write(zipFile, response.body());
stdOut("Exporting template(s) for namespace '" + namespace + "' successfully done !");
} catch (HttpClientResponseException e) {
AbstractValidateCommand.handleHttpException(e, "template");
return 1;
}
return 0;
}
}

View File

@@ -0,0 +1,35 @@
package io.kestra.cli.commands.templates;
import io.kestra.cli.AbstractValidateCommand;
import io.kestra.core.models.templates.Template;
import io.kestra.core.models.templates.TemplateEnabled;
import io.kestra.core.models.validations.ModelValidator;
import jakarta.inject.Inject;
import picocli.CommandLine;
import java.util.Collections;
@CommandLine.Command(
name = "validate",
description = "Validate a template"
)
@TemplateEnabled
public class TemplateValidateCommand extends AbstractValidateCommand {
@Inject
private ModelValidator modelValidator;
@Override
public Integer call() throws Exception {
return this.call(
Template.class,
modelValidator,
(Object object) -> {
Template template = (Template) object;
return template.getNamespace() + " / " + template.getId();
},
(Object object) -> Collections.emptyList(),
(Object object) -> Collections.emptyList()
);
}
}

View File

@@ -0,0 +1,31 @@
package io.kestra.cli.commands.templates.namespaces;
import io.kestra.cli.AbstractCommand;
import io.kestra.cli.App;
import io.kestra.core.models.templates.TemplateEnabled;
import io.micronaut.configuration.picocli.PicocliRunner;
import lombok.SneakyThrows;
import lombok.extern.slf4j.Slf4j;
import picocli.CommandLine;
@CommandLine.Command(
name = "namespace",
description = "Manage namespace templates",
mixinStandardHelpOptions = true,
subcommands = {
TemplateNamespaceUpdateCommand.class,
}
)
@Slf4j
@TemplateEnabled
public class TemplateNamespaceCommand extends AbstractCommand {
@SneakyThrows
@Override
public Integer call() throws Exception {
super.call();
PicocliRunner.call(App.class, "template", "namespace", "--help");
return 0;
}
}

View File

@@ -0,0 +1,74 @@
package io.kestra.cli.commands.templates.namespaces;
import io.kestra.cli.AbstractValidateCommand;
import io.kestra.cli.commands.AbstractServiceNamespaceUpdateCommand;
import io.kestra.cli.services.TenantIdSelectorService;
import io.kestra.core.models.templates.Template;
import io.kestra.core.models.templates.TemplateEnabled;
import io.kestra.core.serializers.YamlParser;
import io.micronaut.core.type.Argument;
import io.micronaut.http.HttpRequest;
import io.micronaut.http.MutableHttpRequest;
import io.micronaut.http.client.exceptions.HttpClientResponseException;
import io.micronaut.http.client.netty.DefaultHttpClient;
import jakarta.inject.Inject;
import lombok.extern.slf4j.Slf4j;
import picocli.CommandLine;
import java.nio.file.Files;
import java.util.List;
import jakarta.validation.ConstraintViolationException;
@CommandLine.Command(
name = "update",
description = "Update namespace templates",
mixinStandardHelpOptions = true
)
@Slf4j
@TemplateEnabled
public class TemplateNamespaceUpdateCommand extends AbstractServiceNamespaceUpdateCommand {
@Inject
private TenantIdSelectorService tenantService;
@Override
public Integer call() throws Exception {
super.call();
try (var files = Files.walk(directory)) {
List<Template> templates = files
.filter(Files::isRegularFile)
.filter(YamlParser::isValidExtension)
.map(path -> YamlParser.parse(path.toFile(), Template.class))
.toList();
if (templates.isEmpty()) {
stdOut("No template found on '{}'", directory.toFile().getAbsolutePath());
}
try (DefaultHttpClient client = client()) {
MutableHttpRequest<List<Template>> request = HttpRequest
.POST(apiUri("/templates/", tenantService.getTenantIdAndAllowEETenants(tenantId)) + namespace + "?delete=" + delete, templates);
List<UpdateResult> updated = client.toBlocking().retrieve(
this.requestOptions(request),
Argument.listOf(UpdateResult.class)
);
stdOut(updated.size() + " template(s) for namespace '" + namespace + "' successfully updated !");
updated.forEach(template -> stdOut("- " + template.getNamespace() + "." + template.getId()));
} catch (HttpClientResponseException e) {
AbstractValidateCommand.handleHttpException(e, "template");
return 1;
}
} catch (ConstraintViolationException e) {
AbstractValidateCommand.handleException(e, "template");
return 1;
}
return 0;
}
}

View File

@@ -30,15 +30,15 @@ micronaut:
read-idle-timeout: 60m
write-idle-timeout: 60m
idle-timeout: 60m
netty:
max-zstd-encode-size: 67108864 # increased to 64MB from the default of 32MB
max-chunk-size: 10MB
max-header-size: 32768 # increased from the default of 8k
responses:
file:
cache-seconds: 86400
cache-control:
public: true
netty:
max-zstd-encode-size: 67108864 # increased to 64MB from the default of 32MB
max-chunk-size: 10MB
max-header-size: 32768 # increased from the default of 8k
# Access log configuration, see https://docs.micronaut.io/latest/guide/index.html#accessLogger
access-logger:

View File

@@ -68,7 +68,8 @@ class NoConfigCommandTest {
assertThat(exitCode).isNotZero();
assertThat(out.toString()).isEmpty();
// check that the only log is an access log: this has the advantage to also check that access log is working!
assertThat(out.toString()).contains("POST /api/v1/main/flows HTTP/1.1 | status: 500");
assertThat(err.toString()).contains("No bean of type [io.kestra.core.repositories.FlowRepositoryInterface] exists");
}
}

View File

@@ -14,7 +14,7 @@ import static org.assertj.core.api.Assertions.assertThat;
class FlowDotCommandTest {
@Test
void run() {
URL directory = FlowDotCommandTest.class.getClassLoader().getResource("flows/same/first.yaml");
URL directory = TemplateValidateCommandTest.class.getClassLoader().getResource("flows/same/first.yaml");
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setOut(new PrintStream(out));

View File

@@ -0,0 +1,41 @@
package io.kestra.cli.commands.flows;
import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.context.ApplicationContext;
import org.junit.jupiter.api.Test;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import static org.assertj.core.api.Assertions.assertThat;
class FlowExpandCommandTest {
@SuppressWarnings("deprecation")
@Test
void run() {
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setOut(new PrintStream(out));
try (ApplicationContext ctx = ApplicationContext.builder().deduceEnvironment(false).start()) {
String[] args = {
"src/test/resources/helper/include.yaml"
};
Integer call = PicocliRunner.call(FlowExpandCommand.class, ctx, args);
assertThat(call).isZero();
assertThat(out.toString()).isEqualTo("id: include\n" +
"namespace: io.kestra.cli\n" +
"\n" +
"# The list of tasks\n" +
"tasks:\n" +
"- id: t1\n" +
" type: io.kestra.plugin.core.debug.Return\n" +
" format: \"Lorem ipsum dolor sit amet\"\n" +
"- id: t2\n" +
" type: io.kestra.plugin.core.debug.Return\n" +
" format: |\n" +
" Lorem ipsum dolor sit amet\n" +
" Lorem ipsum dolor sit amet\n");
}
}
}

View File

@@ -61,6 +61,7 @@ class FlowValidateCommandTest {
assertThat(call).isZero();
assertThat(out.toString()).contains("✓ - system / warning");
assertThat(out.toString()).contains("⚠ - tasks[0] is deprecated");
assertThat(out.toString()).contains(" - io.kestra.core.tasks.log.Log is replaced by io.kestra.plugin.core.log.Log");
}
}

View File

@@ -0,0 +1,62 @@
package io.kestra.cli.commands.flows;
import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.context.ApplicationContext;
import io.micronaut.context.env.Environment;
import io.micronaut.runtime.server.EmbeddedServer;
import org.junit.jupiter.api.Test;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.net.URL;
import static org.assertj.core.api.Assertions.assertThat;
class TemplateValidateCommandTest {
@Test
void runLocal() {
URL directory = TemplateValidateCommandTest.class.getClassLoader().getResource("invalids/empty.yaml");
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setErr(new PrintStream(out));
try (ApplicationContext ctx = ApplicationContext.run(Environment.CLI, Environment.TEST)) {
String[] args = {
"--local",
directory.getPath()
};
Integer call = PicocliRunner.call(FlowValidateCommand.class, ctx, args);
assertThat(call).isEqualTo(1);
assertThat(out.toString()).contains("Unable to parse flow");
assertThat(out.toString()).contains("must not be empty");
}
}
@Test
void runServer() {
URL directory = TemplateValidateCommandTest.class.getClassLoader().getResource("invalids/empty.yaml");
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setErr(new PrintStream(out));
try (ApplicationContext ctx = ApplicationContext.run(Environment.CLI, Environment.TEST)) {
EmbeddedServer embeddedServer = ctx.getBean(EmbeddedServer.class);
embeddedServer.start();
String[] args = {
"--plugins",
"/tmp", // pass this arg because it can cause failure
"--server",
embeddedServer.getURL().toString(),
"--user",
"myuser:pass:word",
directory.getPath()
};
Integer call = PicocliRunner.call(FlowValidateCommand.class, ctx, args);
assertThat(call).isEqualTo(1);
assertThat(out.toString()).contains("Unable to parse flow");
assertThat(out.toString()).contains("must not be empty");
}
}
}

View File

@@ -0,0 +1,27 @@
package io.kestra.cli.commands.sys.statestore;
import io.kestra.cli.commands.sys.database.DatabaseCommand;
import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.context.ApplicationContext;
import org.junit.jupiter.api.Test;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import static org.assertj.core.api.Assertions.assertThat;
class StateStoreCommandTest {
@Test
void runWithNoParam() {
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setOut(new PrintStream(out));
try (ApplicationContext ctx = ApplicationContext.builder().deduceEnvironment(false).start()) {
String[] args = {};
Integer call = PicocliRunner.call(StateStoreCommand.class, ctx, args);
assertThat(call).isZero();
assertThat(out.toString()).contains("Usage: kestra sys state-store");
}
}
}

View File

@@ -0,0 +1,75 @@
package io.kestra.cli.commands.sys.statestore;
import io.kestra.core.exceptions.MigrationRequiredException;
import io.kestra.core.exceptions.ResourceExpiredException;
import io.kestra.core.models.flows.Flow;
import io.kestra.core.models.flows.GenericFlow;
import io.kestra.core.repositories.FlowRepositoryInterface;
import io.kestra.core.runners.RunContext;
import io.kestra.core.runners.RunContextFactory;
import io.kestra.core.storages.StateStore;
import io.kestra.core.storages.StorageInterface;
import io.kestra.core.utils.Hashing;
import io.kestra.core.utils.Slugify;
import io.kestra.plugin.core.log.Log;
import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.context.ApplicationContext;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Test;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.PrintStream;
import java.net.URI;
import java.util.List;
import java.util.Map;
import static org.assertj.core.api.Assertions.assertThat;
class StateStoreMigrateCommandTest {
@Test
void runMigration() throws IOException, ResourceExpiredException {
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setOut(new PrintStream(out));
try (ApplicationContext ctx = ApplicationContext.builder().deduceEnvironment(false).environments("test").start()) {
FlowRepositoryInterface flowRepository = ctx.getBean(FlowRepositoryInterface.class);
Flow flow = Flow.builder()
.tenantId("my-tenant")
.id("a-flow")
.namespace("some.valid.namespace." + ((int) (Math.random() * 1000000)))
.tasks(List.of(Log.builder().id("log").type(Log.class.getName()).message("logging").build()))
.build();
flowRepository.create(GenericFlow.of(flow));
StorageInterface storage = ctx.getBean(StorageInterface.class);
String tenantId = flow.getTenantId();
URI oldStateStoreUri = URI.create("/" + flow.getNamespace().replace(".", "/") + "/" + Slugify.of("a-flow") + "/states/my-state/" + Hashing.hashToString("my-taskrun-value") + "/sub-name");
storage.put(
tenantId,
flow.getNamespace(),
oldStateStoreUri,
new ByteArrayInputStream("my-value".getBytes())
);
assertThat(storage.exists(tenantId, flow.getNamespace(), oldStateStoreUri)).isTrue();
RunContext runContext = ctx.getBean(RunContextFactory.class).of(flow, Map.of("flow", Map.of(
"tenantId", tenantId,
"id", flow.getId(),
"namespace", flow.getNamespace()
)));
StateStore stateStore = new StateStore(runContext, true);
Assertions.assertThrows(MigrationRequiredException.class, () -> stateStore.getState(true, "my-state", "sub-name", "my-taskrun-value"));
String[] args = {};
Integer call = PicocliRunner.call(StateStoreMigrateCommand.class, ctx, args);
assertThat(new String(stateStore.getState(true, "my-state", "sub-name", "my-taskrun-value").readAllBytes())).isEqualTo("my-value");
assertThat(storage.exists(tenantId, flow.getNamespace(), oldStateStoreUri)).isFalse();
assertThat(call).isZero();
}
}
}

View File

@@ -0,0 +1,65 @@
package io.kestra.cli.commands.templates;
import io.kestra.cli.commands.templates.namespaces.TemplateNamespaceUpdateCommand;
import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.context.ApplicationContext;
import io.micronaut.context.env.Environment;
import io.micronaut.runtime.server.EmbeddedServer;
import org.junit.jupiter.api.Test;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;
import java.io.PrintStream;
import java.net.URL;
import java.util.Map;
import java.util.zip.ZipFile;
import static org.assertj.core.api.Assertions.assertThat;
class TemplateExportCommandTest {
@Test
void run() throws IOException {
URL directory = TemplateExportCommandTest.class.getClassLoader().getResource("templates");
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setOut(new PrintStream(out));
try (ApplicationContext ctx = ApplicationContext.run(Map.of("kestra.templates.enabled", "true"), Environment.CLI, Environment.TEST)) {
EmbeddedServer embeddedServer = ctx.getBean(EmbeddedServer.class);
embeddedServer.start();
// we use the update command to add templates to extract
String[] args = {
"--server",
embeddedServer.getURL().toString(),
"--user",
"myuser:pass:word",
"io.kestra.tests",
directory.getPath(),
};
PicocliRunner.call(TemplateNamespaceUpdateCommand.class, ctx, args);
assertThat(out.toString()).contains("3 template(s)");
// then we export them
String[] exportArgs = {
"--server",
embeddedServer.getURL().toString(),
"--user",
"myuser:pass:word",
"--namespace",
"io.kestra.tests",
"/tmp",
};
PicocliRunner.call(TemplateExportCommand.class, ctx, exportArgs);
File file = new File("/tmp/templates.zip");
assertThat(file.exists()).isTrue();
ZipFile zipFile = new ZipFile(file);
assertThat(zipFile.stream().count()).isEqualTo(3L);
file.delete();
}
}
}

View File

@@ -0,0 +1,61 @@
package io.kestra.cli.commands.templates;
import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.context.ApplicationContext;
import io.micronaut.context.env.Environment;
import io.micronaut.runtime.server.EmbeddedServer;
import org.junit.jupiter.api.Test;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.net.URL;
import java.util.Map;
import static org.assertj.core.api.Assertions.assertThat;
class TemplateValidateCommandTest {
@Test
void runLocal() {
URL directory = TemplateValidateCommandTest.class.getClassLoader().getResource("invalidsTemplates/template.yml");
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setErr(new PrintStream(out));
try (ApplicationContext ctx = ApplicationContext.run(Map.of("kestra.templates.enabled", "true"), Environment.CLI, Environment.TEST)) {
String[] args = {
"--local",
directory.getPath()
};
Integer call = PicocliRunner.call(TemplateValidateCommand.class, ctx, args);
assertThat(call).isEqualTo(1);
assertThat(out.toString()).contains("Unable to parse template");
assertThat(out.toString()).contains("must not be empty");
}
}
@Test
void runServer() {
URL directory = TemplateValidateCommandTest.class.getClassLoader().getResource("invalidsTemplates/template.yml");
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setErr(new PrintStream(out));
try (ApplicationContext ctx = ApplicationContext.run(Map.of("kestra.templates.enabled", "true"), Environment.CLI, Environment.TEST)) {
EmbeddedServer embeddedServer = ctx.getBean(EmbeddedServer.class);
embeddedServer.start();
String[] args = {
"--server",
embeddedServer.getURL().toString(),
"--user",
"myuser:pass:word",
directory.getPath()
};
Integer call = PicocliRunner.call(TemplateValidateCommand.class, ctx, args);
assertThat(call).isEqualTo(1);
assertThat(out.toString()).contains("Unable to parse template");
assertThat(out.toString()).contains("must not be empty");
}
}
}

View File

@@ -0,0 +1,26 @@
package io.kestra.cli.commands.templates.namespaces;
import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.context.ApplicationContext;
import org.junit.jupiter.api.Test;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import static org.assertj.core.api.Assertions.assertThat;
class TemplateNamespaceCommandTest {
@Test
void runWithNoParam() {
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setOut(new PrintStream(out));
try (ApplicationContext ctx = ApplicationContext.builder().deduceEnvironment(false).start()) {
String[] args = {};
Integer call = PicocliRunner.call(TemplateNamespaceCommand.class, ctx, args);
assertThat(call).isZero();
assertThat(out.toString()).contains("Usage: kestra template namespace");
}
}
}

View File

@@ -0,0 +1,112 @@
package io.kestra.cli.commands.templates.namespaces;
import io.micronaut.configuration.picocli.PicocliRunner;
import io.micronaut.context.ApplicationContext;
import io.micronaut.context.env.Environment;
import io.micronaut.runtime.server.EmbeddedServer;
import org.junit.jupiter.api.Test;
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;
import java.net.URL;
import java.util.Map;
import static org.assertj.core.api.Assertions.assertThat;
class TemplateNamespaceUpdateCommandTest {
@Test
void run() {
URL directory = TemplateNamespaceUpdateCommandTest.class.getClassLoader().getResource("templates");
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setOut(new PrintStream(out));
try (ApplicationContext ctx = ApplicationContext.run(Map.of("kestra.templates.enabled", "true"), Environment.CLI, Environment.TEST)) {
EmbeddedServer embeddedServer = ctx.getBean(EmbeddedServer.class);
embeddedServer.start();
String[] args = {
"--server",
embeddedServer.getURL().toString(),
"--user",
"myuser:pass:word",
"io.kestra.tests",
directory.getPath(),
};
PicocliRunner.call(TemplateNamespaceUpdateCommand.class, ctx, args);
assertThat(out.toString()).contains("3 template(s)");
}
}
@Test
void invalid() {
URL directory = TemplateNamespaceUpdateCommandTest.class.getClassLoader().getResource("invalidsTemplates");
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setErr(new PrintStream(out));
try (ApplicationContext ctx = ApplicationContext.run(Map.of("kestra.templates.enabled", "true"), Environment.CLI, Environment.TEST)) {
EmbeddedServer embeddedServer = ctx.getBean(EmbeddedServer.class);
embeddedServer.start();
String[] args = {
"--server",
embeddedServer.getURL().toString(),
"--user",
"myuser:pass:word",
"io.kestra.tests",
directory.getPath(),
};
Integer call = PicocliRunner.call(TemplateNamespaceUpdateCommand.class, ctx, args);
// assertThat(call, is(1));
assertThat(out.toString()).contains("Unable to parse templates");
assertThat(out.toString()).contains("must not be empty");
}
}
@Test
void runNoDelete() {
URL directory = TemplateNamespaceUpdateCommandTest.class.getClassLoader().getResource("templates");
URL subDirectory = TemplateNamespaceUpdateCommandTest.class.getClassLoader().getResource("templates/templatesSubFolder");
ByteArrayOutputStream out = new ByteArrayOutputStream();
System.setOut(new PrintStream(out));
try (ApplicationContext ctx = ApplicationContext.run(Map.of("kestra.templates.enabled", "true"), Environment.CLI, Environment.TEST)) {
EmbeddedServer embeddedServer = ctx.getBean(EmbeddedServer.class);
embeddedServer.start();
String[] args = {
"--server",
embeddedServer.getURL().toString(),
"--user",
"myuser:pass:word",
"io.kestra.tests",
directory.getPath(),
};
PicocliRunner.call(TemplateNamespaceUpdateCommand.class, ctx, args);
assertThat(out.toString()).contains("3 template(s)");
String[] newArgs = {
"--server",
embeddedServer.getURL().toString(),
"--user",
"myuser:pass:word",
"io.kestra.tests",
subDirectory.getPath(),
"--no-delete"
};
PicocliRunner.call(TemplateNamespaceUpdateCommand.class, ctx, newArgs);
assertThat(out.toString()).contains("1 template(s)");
}
}
}

View File

@@ -3,8 +3,8 @@ namespace: system
tasks:
- id: deprecated
type: io.kestra.plugin.core.log.Log
message: Hello World
type: io.kestra.plugin.core.debug.Echo
format: Hello World
- id: alias
type: io.kestra.core.tasks.log.Log
message: I'm an alias

View File

@@ -4,6 +4,7 @@ import io.kestra.core.models.dashboards.Dashboard;
import io.kestra.core.models.flows.Flow;
import io.kestra.core.models.flows.PluginDefault;
import io.kestra.core.models.tasks.Task;
import io.kestra.core.models.templates.Template;
import io.kestra.core.models.triggers.AbstractTrigger;
import jakarta.inject.Singleton;
@@ -35,6 +36,7 @@ public class JsonSchemaCache {
public JsonSchemaCache(final JsonSchemaGenerator jsonSchemaGenerator) {
this.jsonSchemaGenerator = Objects.requireNonNull(jsonSchemaGenerator, "JsonSchemaGenerator cannot be null");
registerClassForType(SchemaType.FLOW, Flow.class);
registerClassForType(SchemaType.TEMPLATE, Template.class);
registerClassForType(SchemaType.TASK, Task.class);
registerClassForType(SchemaType.TRIGGER, AbstractTrigger.class);
registerClassForType(SchemaType.PLUGINDEFAULT, PluginDefault.class);

View File

@@ -23,6 +23,7 @@ import com.google.common.collect.ImmutableMap;
import io.kestra.core.models.annotations.Plugin;
import io.kestra.core.models.annotations.PluginProperty;
import io.kestra.core.models.conditions.Condition;
import io.kestra.core.models.conditions.ScheduleCondition;
import io.kestra.core.models.dashboards.DataFilter;
import io.kestra.core.models.dashboards.DataFilterKPI;
import io.kestra.core.models.dashboards.charts.Chart;
@@ -63,7 +64,7 @@ import static io.kestra.core.serializers.JacksonMapper.MAP_TYPE_REFERENCE;
@Singleton
@Slf4j
public class JsonSchemaGenerator {
private static final List<Class<?>> TYPES_RESOLVED_AS_STRING = List.of(Duration.class, LocalTime.class, LocalDate.class, LocalDateTime.class, ZonedDateTime.class, OffsetDateTime.class, OffsetTime.class);
private static final List<Class<?>> SUBTYPE_RESOLUTION_EXCLUSION_FOR_PLUGIN_SCHEMA = List.of(Task.class, AbstractTrigger.class);
@@ -276,8 +277,8 @@ public class JsonSchemaGenerator {
.with(Option.DEFINITION_FOR_MAIN_SCHEMA)
.with(Option.PLAIN_DEFINITION_KEYS)
.with(Option.ALLOF_CLEANUP_AT_THE_END);
// HACK: Registered a custom JsonUnwrappedDefinitionProvider prior to the JacksonModule
// HACK: Registered a custom JsonUnwrappedDefinitionProvider prior to the JacksonModule
// to be able to return a CustomDefinition with an empty node when the ResolvedType can't be found.
builder.forTypesInGeneral().withCustomDefinitionProvider(new JsonUnwrappedDefinitionProvider(){
@Override
@@ -319,7 +320,7 @@ public class JsonSchemaGenerator {
// inline some type
builder.forTypesInGeneral()
.withCustomDefinitionProvider(new CustomDefinitionProviderV2() {
@Override
public CustomDefinition provideCustomSchemaDefinition(ResolvedType javaType, SchemaGenerationContext context) {
if (javaType.isInstanceOf(Map.class) || javaType.isInstanceOf(Enum.class)) {
@@ -687,6 +688,15 @@ public class JsonSchemaGenerator {
.filter(Predicate.not(io.kestra.core.models.Plugin::isInternal))
.flatMap(clz -> safelyResolveSubtype(declaredType, clz, typeContext).stream())
.toList();
} else if (declaredType.getErasedType() == ScheduleCondition.class) {
return getRegisteredPlugins()
.stream()
.flatMap(registeredPlugin -> registeredPlugin.getConditions().stream())
.filter(ScheduleCondition.class::isAssignableFrom)
.filter(p -> allowedPluginTypes.isEmpty() || allowedPluginTypes.contains(p.getName()))
.filter(Predicate.not(io.kestra.core.models.Plugin::isInternal))
.flatMap(clz -> safelyResolveSubtype(declaredType, clz, typeContext).stream())
.toList();
} else if (declaredType.getErasedType() == TaskRunner.class) {
return getRegisteredPlugins()
.stream()

View File

@@ -6,6 +6,7 @@ import io.kestra.core.utils.Enums;
public enum SchemaType {
FLOW,
TEMPLATE,
TASK,
TRIGGER,
PLUGINDEFAULT,

View File

@@ -1,27 +0,0 @@
package io.kestra.core.lock;
import io.kestra.core.models.HasUID;
import io.kestra.core.utils.IdUtils;
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Getter;
import lombok.NoArgsConstructor;
import java.time.Instant;
import java.time.LocalDateTime;
@Getter
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class Lock implements HasUID {
private String category;
private String id;
private String owner;
private Instant createdAt;
@Override
public String uid() {
return IdUtils.fromParts(this.category, this.id);
}
}

View File

@@ -1,13 +0,0 @@
package io.kestra.core.lock;
import io.kestra.core.exceptions.KestraRuntimeException;
public class LockException extends KestraRuntimeException {
public LockException(String message) {
super(message);
}
public LockException(Throwable cause) {
super(cause);
}
}

View File

@@ -1,195 +0,0 @@
package io.kestra.core.lock;
import io.kestra.core.repositories.LockRepositoryInterface;
import io.kestra.core.server.ServerInstance;
import jakarta.inject.Inject;
import jakarta.inject.Singleton;
import lombok.extern.slf4j.Slf4j;
import java.time.Duration;
import java.time.Instant;
import java.time.LocalDateTime;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.Callable;
/**
* This service provides facilities for executing Runnable and Callable tasks inside a lock.
* Note: it may be handy to provide a tryLock facility that, if locked, skips executing the Runnable or Callable and exits immediately.
*
* @implNote There is no expiry for locks, so a service may hold a lock indefinitely until it is restarted, as the
* liveness mechanism releases all locks when the service is unreachable.
* This may be improved at some point by adding an expiry (for example 30s) and running a thread that periodically
* extends the expiry for all existing locks. This should allow quicker recovery of zombie locks than relying on the liveness mechanism,
* as a service wanting to acquire an expired lock would be able to take it over.
*/
@Slf4j
@Singleton
public class LockService {
private static final Duration DEFAULT_TIMEOUT = Duration.ofSeconds(300);
private static final int DEFAULT_SLEEP_MS = 1;
private final LockRepositoryInterface lockRepository;
@Inject
public LockService(LockRepositoryInterface lockRepository) {
this.lockRepository = lockRepository;
}
/**
* Executes a Runnable inside a lock.
* If the lock is already taken, it will wait for at most the default lock timeout of 5 minutes.
* @see #doInLock(String, String, Duration, Runnable)
*
* @param category lock category, e.g., 'executions'
* @param id identifier of the lock identity inside the category, e.g., an execution ID
*
* @throws LockException if the lock cannot be held before the timeout or the thread is interrupted.
*/
public void doInLock(String category, String id, Runnable runnable) {
doInLock(category, id, DEFAULT_TIMEOUT, runnable);
}
/**
* Executes a Runnable inside a lock.
* If the lock is already taken, it will wait for at most the <code>timeout</code> duration.
* @see #doInLock(String, String, Runnable)
*
* @param category lock category, e.g., 'executions'
* @param id identifier of the lock identity inside the category, e.g., an execution ID
* @param timeout how long to wait for the lock if another process already holds the same lock
*
* @throws LockException if the lock cannot be held before the timeout or the thread is interrupted.
*/
public void doInLock(String category, String id, Duration timeout, Runnable runnable) {
if (!lock(category, id, timeout)) {
throw new LockException("Unable to hold the lock inside the configured timeout of " + timeout);
}
try {
runnable.run();
} finally {
unlock(category, id);
}
}
/**
* Attempts to execute the provided {@code runnable} within a lock.
* If the lock is already held by another process, the execution is skipped.
*
* @param category the category of the lock, e.g., 'executions'
* @param id the identifier of the lock within the specified category, e.g., an execution ID
* @param runnable the task to be executed if the lock is successfully acquired
*/
public void tryLock(String category, String id, Runnable runnable) {
if (lock(category, id, Duration.ZERO)) {
try {
runnable.run();
} finally {
unlock(category, id);
}
} else {
log.debug("Lock '{}'.'{}' already hold, skipping", category, id);
}
}
/**
* Executes a Callable inside a lock.
* If the lock is already taken, it will wait for at most the default lock timeout of 5 minutes.
*
* @param category lock category, e.g., 'executions'
* @param id identifier of the lock identity inside the category, e.g., an execution ID
*
* @throws LockException if the lock cannot be held before the timeout or the thread is interrupted.
*/
public <T> T callInLock(String category, String id, Callable<T> callable) throws Exception {
return callInLock(category, id, DEFAULT_TIMEOUT, callable);
}
/**
* Executes a Callable inside a lock.
* If the lock is already taken, it will wait for at most the <code>timeout</code> duration.
*
* @param category lock category, e.g., 'executions'
* @param id identifier of the lock identity inside the category, e.g., an execution ID
* @param timeout how long to wait for the lock if another process already holds the same lock
*
* @throws LockException if the lock cannot be held before the timeout or the thread is interrupted.
*/
public <T> T callInLock(String category, String id, Duration timeout, Callable<T> callable) throws Exception {
if (!lock(category, id, timeout)) {
throw new LockException("Unable to hold the lock inside the configured timeout of " + timeout);
}
try {
return callable.call();
} finally {
unlock(category, id);
}
}
/**
* Releases all locks held by this service identifier.
*/
public List<Lock> releaseAllLocks(String serviceId) {
return lockRepository.deleteByOwner(serviceId);
}
/**
* @return true if the lock identified by this category and identifier already exists.
*/
public boolean isLocked(String category, String id) {
return lockRepository.findById(category, id).isPresent();
}
private boolean lock(String category, String id, Duration timeout) throws LockException {
log.debug("Locking '{}'.'{}'", category, id);
long deadline = System.currentTimeMillis() + timeout.toMillis();
do {
Optional<Lock> existing = lockRepository.findById(category, id);
if (existing.isEmpty()) {
// we can try to lock!
Lock newLock = new Lock(category, id, ServerInstance.INSTANCE_ID, Instant.now());
if (lockRepository.create(newLock)) {
return true;
} else {
log.debug("Cannot create the lock, it may have been created after we check for its existence and before we create it");
}
} else {
log.debug("Already locked by: {}", existing.get().getOwner());
}
// fast path for when we don't want to wait for the lock
if (timeout.isZero()) {
return false;
}
try {
Thread.sleep(DEFAULT_SLEEP_MS);
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
throw new LockException(e);
}
} while (System.currentTimeMillis() < deadline);
log.debug("Lock already hold, waiting for it to be released");
return false;
}
private void unlock(String category, String id) {
log.debug("Unlocking '{}'.'{}'", category, id);
Optional<Lock> existing = lockRepository.findById(category, id);
if (existing.isEmpty()) {
log.warn("Try to unlock unknown lock '{}'.'{}', ignoring it", category, id);
return;
}
if (!existing.get().getOwner().equals(ServerInstance.INSTANCE_ID)) {
log.warn("Try to unlock a lock we no longer own '{}'.'{}', ignoring it", category, id);
return;
}
lockRepository.deleteById(category, id);
}
}
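
A minimal usage sketch of the LockService API above (illustrative only; the bean, the lock category, and the purge() helper are assumptions, not part of this changeset):

import io.kestra.core.lock.LockService;
import jakarta.inject.Inject;
import jakarta.inject.Singleton;
import java.time.Duration;

@Singleton
public class ExecutionCleaner {
    @Inject
    private LockService lockService;

    public void cleanup(String executionId) {
        // Waits up to 10 seconds for the lock, then throws LockException on timeout.
        lockService.doInLock("executions", executionId, Duration.ofSeconds(10), () -> purge(executionId));

        // Skips silently when another instance already holds the lock (see tryLock above).
        lockService.tryLock("executions", executionId, () -> purge(executionId));
    }

    private void purge(String executionId) {
        // hypothetical cleanup work
    }
}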

View File

@@ -2,11 +2,7 @@ package io.kestra.core.models.conditions;
import io.kestra.core.exceptions.InternalException;
/**
* Conditions of type ScheduleCondition have special behavior inside the {@link io.kestra.plugin.core.trigger.Schedule} trigger.
* They are evaluated separately and are taken into account when computing the next evaluation date.
* Only conditions based on dates should be marked as ScheduleCondition.
*/
public interface ScheduleCondition {
boolean test(ConditionContext conditionContext) throws InternalException;
}

View File

@@ -5,6 +5,8 @@ import io.kestra.core.models.annotations.Plugin;
import io.kestra.core.models.dashboards.filters.AbstractFilter;
import io.kestra.core.repositories.QueryBuilderInterface;
import io.kestra.plugin.core.dashboard.data.IData;
import jakarta.annotation.Nullable;
import jakarta.validation.Valid;
import jakarta.validation.constraints.NotBlank;
import jakarta.validation.constraints.NotNull;
import jakarta.validation.constraints.Pattern;
@@ -33,9 +35,12 @@ public abstract class DataFilter<F extends Enum<F>, C extends ColumnDescriptor<F
@Pattern(regexp = JAVA_IDENTIFIER_REGEX)
private String type;
@Valid
private Map<String, C> columns;
@Setter
@Valid
@Nullable
private List<AbstractFilter<F>> where;
private List<OrderBy> orderBy;

View File

@@ -5,6 +5,7 @@ import io.kestra.core.models.annotations.Plugin;
import io.kestra.core.models.dashboards.ChartOption;
import io.kestra.core.models.dashboards.DataFilter;
import io.kestra.core.validations.DataChartValidation;
import jakarta.validation.Valid;
import jakarta.validation.constraints.NotNull;
import lombok.EqualsAndHashCode;
import lombok.Getter;
@@ -20,6 +21,7 @@ import lombok.experimental.SuperBuilder;
@DataChartValidation
public abstract class DataChart<P extends ChartOption, D extends DataFilter<?, ?>> extends Chart<P> implements io.kestra.core.models.Plugin {
@NotNull
@Valid
private D data;
public Integer minNumberOfAggregations() {

View File

@@ -1,8 +1,11 @@
package io.kestra.core.models.dashboards.filters;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import io.micronaut.core.annotation.Introspected;
import jakarta.validation.Valid;
import jakarta.validation.constraints.NotNull;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.experimental.SuperBuilder;
@@ -32,6 +35,9 @@ import lombok.experimental.SuperBuilder;
@SuperBuilder
@Introspected
public abstract class AbstractFilter<F extends Enum<F>> {
@NotNull
@JsonProperty(value = "field", required = true)
@Valid
private F field;
private String labelKey;

View File

@@ -12,6 +12,7 @@ import io.kestra.core.exceptions.InternalException;
import io.kestra.core.models.HasUID;
import io.kestra.core.models.annotations.PluginProperty;
import io.kestra.core.models.flows.sla.SLA;
import io.kestra.core.models.listeners.Listener;
import io.kestra.core.models.tasks.FlowableTask;
import io.kestra.core.models.tasks.Task;
import io.kestra.core.models.tasks.retrys.AbstractRetry;
@@ -84,6 +85,10 @@ public class Flow extends AbstractFlow implements HasUID {
return this._finally;
}
@Valid
@Deprecated
List<Listener> listeners;
@Valid
List<Task> afterExecution;
@@ -93,6 +98,20 @@ public class Flow extends AbstractFlow implements HasUID {
@Valid
List<PluginDefault> pluginDefaults;
@Valid
List<PluginDefault> taskDefaults;
@Deprecated
public void setTaskDefaults(List<PluginDefault> taskDefaults) {
this.pluginDefaults = taskDefaults;
this.taskDefaults = taskDefaults;
}
@Deprecated
public List<PluginDefault> getTaskDefaults() {
return this.taskDefaults;
}
@Valid
Concurrency concurrency;
@@ -125,7 +144,7 @@ public class Flow extends AbstractFlow implements HasUID {
this.tasks != null ? this.tasks : Collections.<Task>emptyList(),
this.errors != null ? this.errors : Collections.<Task>emptyList(),
this._finally != null ? this._finally : Collections.<Task>emptyList(),
this.afterExecution != null ? this.afterExecution : Collections.<Task>emptyList()
this.afterExecutionTasks()
)
.flatMap(Collection::stream);
}
@@ -226,6 +245,55 @@ public class Flow extends AbstractFlow implements HasUID {
.orElse(null);
}
/**
* @deprecated should not be used
*/
@Deprecated(forRemoval = true, since = "0.21.0")
public Flow updateTask(String taskId, Task newValue) throws InternalException {
Task task = this.findTaskByTaskId(taskId);
Flow flow = this instanceof FlowWithSource flowWithSource ? flowWithSource.toFlow() : this;
Map<String, Object> map = NON_DEFAULT_OBJECT_MAPPER.convertValue(flow, JacksonMapper.MAP_TYPE_REFERENCE);
return NON_DEFAULT_OBJECT_MAPPER.convertValue(
recursiveUpdate(map, task, newValue),
Flow.class
);
}
private static Object recursiveUpdate(Object object, Task previous, Task newValue) {
if (object instanceof Map<?, ?> value) {
if (value.containsKey("id") && value.get("id").equals(previous.getId()) &&
value.containsKey("type") && value.get("type").equals(previous.getType())
) {
return NON_DEFAULT_OBJECT_MAPPER.convertValue(newValue, JacksonMapper.MAP_TYPE_REFERENCE);
} else {
return value
.entrySet()
.stream()
.map(e -> new AbstractMap.SimpleEntry<>(
e.getKey(),
recursiveUpdate(e.getValue(), previous, newValue)
))
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
}
} else if (object instanceof Collection<?> value) {
return value
.stream()
.map(r -> recursiveUpdate(r, previous, newValue))
.toList();
} else {
return object;
}
}
private List<Task> afterExecutionTasks() {
return ListUtils.concat(
ListUtils.emptyOnNull(this.getListeners()).stream().flatMap(listener -> listener.getTasks().stream()).toList(),
this.getAfterExecution()
);
}
public boolean equalsWithoutRevision(FlowInterface o) {
try {
return WITHOUT_REVISION_OBJECT_MAPPER.writeValueAsString(this).equals(WITHOUT_REVISION_OBJECT_MAPPER.writeValueAsString(o));
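
A hedged sketch of the deprecated taskDefaults aliasing added above (getPluginDefaults() is assumed to be the Lombok getter for the pluginDefaults field; it is not shown in this hunk):

static void migrateDefaults(Flow flow, List<PluginDefault> defaults) {
    // The deprecated setter also populates 'pluginDefaults', so older flow sources keep working.
    flow.setTaskDefaults(defaults);
    assert flow.getPluginDefaults() == defaults; // mirrored into the new field by setTaskDefaults
}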

View File

@@ -19,6 +19,7 @@ public class FlowWithSource extends Flow {
String source;
@SuppressWarnings("deprecation")
public Flow toFlow() {
return Flow.builder()
.tenantId(this.tenantId)
@@ -33,6 +34,7 @@ public class FlowWithSource extends Flow {
.tasks(this.tasks)
.errors(this.errors)
._finally(this._finally)
.listeners(this.listeners)
.afterExecution(this.afterExecution)
.triggers(this.triggers)
.pluginDefaults(this.pluginDefaults)
@@ -58,6 +60,7 @@ public class FlowWithSource extends Flow {
.build();
}
@SuppressWarnings("deprecation")
public static FlowWithSource of(Flow flow, String source) {
return FlowWithSource.builder()
.tenantId(flow.tenantId)
@@ -73,6 +76,7 @@ public class FlowWithSource extends Flow {
.errors(flow.errors)
._finally(flow._finally)
.afterExecution(flow.afterExecution)
.listeners(flow.listeners)
.triggers(flow.triggers)
.pluginDefaults(flow.pluginDefaults)
.disabled(flow.disabled)

View File

@@ -1,5 +1,6 @@
package io.kestra.core.models.flows;
import com.fasterxml.jackson.annotation.JsonSetter;
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import io.kestra.core.models.flows.input.*;
@@ -25,6 +26,7 @@ import lombok.experimental.SuperBuilder;
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type", visible = true, include = JsonTypeInfo.As.EXISTING_PROPERTY)
@JsonSubTypes({
@JsonSubTypes.Type(value = ArrayInput.class, name = "ARRAY"),
@JsonSubTypes.Type(value = BooleanInput.class, name = "BOOLEAN"),
@JsonSubTypes.Type(value = BoolInput.class, name = "BOOL"),
@JsonSubTypes.Type(value = DateInput.class, name = "DATE"),
@JsonSubTypes.Type(value = DateTimeInput.class, name = "DATETIME"),
@@ -35,6 +37,7 @@ import lombok.experimental.SuperBuilder;
@JsonSubTypes.Type(value = JsonInput.class, name = "JSON"),
@JsonSubTypes.Type(value = SecretInput.class, name = "SECRET"),
@JsonSubTypes.Type(value = StringInput.class, name = "STRING"),
@JsonSubTypes.Type(value = EnumInput.class, name = "ENUM"),
@JsonSubTypes.Type(value = SelectInput.class, name = "SELECT"),
@JsonSubTypes.Type(value = TimeInput.class, name = "TIME"),
@JsonSubTypes.Type(value = URIInput.class, name = "URI"),
@@ -52,6 +55,9 @@ public abstract class Input<T> implements Data {
@Pattern(regexp="^[a-zA-Z0-9][.a-zA-Z0-9_-]*")
String id;
@Deprecated
String name;
@Schema(
title = "The type of the input."
)
@@ -89,4 +95,13 @@ public abstract class Input<T> implements Data {
String displayName;
public abstract void validate(T input) throws ConstraintViolationException;
@JsonSetter
public void setName(String name) {
if (this.id == null) {
this.id = name;
}
this.name = name;
}
}
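
A small sketch of the legacy-name fallback added above (StringInput.builder() is assumed to exist via Lombok's @SuperBuilder; illustrative only):

static void legacyName() {
    StringInput input = StringInput.builder().build(); // 'id' intentionally left unset
    input.setName("my_legacy_input");                  // deprecated 'name' setter shown above
    // input.getId() is now "my_legacy_input": the setter falls back to populating 'id' when it is absent.
}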

View File

@@ -9,9 +9,11 @@ import io.micronaut.core.annotation.Introspected;
@Introspected
public enum Type {
STRING(StringInput.class.getName()),
ENUM(EnumInput.class.getName()),
SELECT(SelectInput.class.getName()),
INT(IntInput.class.getName()),
FLOAT(FloatInput.class.getName()),
BOOLEAN(BooleanInput.class.getName()),
BOOL(BoolInput.class.getName()),
DATETIME(DateTimeInput.class.getName()),
DATE(DateInput.class.getName()),

View File

@@ -0,0 +1,19 @@
package io.kestra.core.models.flows.input;
import io.kestra.core.models.flows.Input;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.experimental.SuperBuilder;
import jakarta.validation.ConstraintViolationException;
@SuperBuilder
@Getter
@NoArgsConstructor
@Deprecated
public class BooleanInput extends Input<Boolean> {
@Override
public void validate(Boolean input) throws ConstraintViolationException {
// no validation yet
}
}

View File

@@ -0,0 +1,39 @@
package io.kestra.core.models.flows.input;
import io.kestra.core.models.flows.Input;
import io.kestra.core.models.validations.ManualConstraintViolation;
import io.kestra.core.validations.Regex;
import io.swagger.v3.oas.annotations.media.Schema;
import jakarta.validation.ConstraintViolationException;
import jakarta.validation.constraints.NotNull;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.experimental.SuperBuilder;
import java.util.List;
@SuperBuilder
@Getter
@NoArgsConstructor
@Deprecated
public class EnumInput extends Input<String> {
@Schema(
title = "List of values.",
description = "DEPRECATED; use 'SELECT' instead."
)
@NotNull
List<@Regex String> values;
@Override
public void validate(String input) throws ConstraintViolationException {
if (!values.contains(input) && this.getRequired()) {
throw ManualConstraintViolation.toConstraintViolationException(
"it must match the values `" + values + "`",
this,
EnumInput.class,
getId(),
input
);
}
}
}
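
A hedged illustration of the deprecated ENUM input's validation above (the builder setters for id, values and required are assumed from Lombok's @SuperBuilder):

static void validateEnvironment(String value) {
    EnumInput environment = EnumInput.builder()
        .id("environment")
        .values(List.of("dev", "prod"))
        .required(true)
        .build();
    // Accepted when the value is in the list; throws ConstraintViolationException otherwise for a required input.
    environment.validate(value);
}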

View File

@@ -4,6 +4,8 @@ import java.util.Set;
import io.kestra.core.models.flows.Input;
import io.kestra.core.validations.FileInputValidation;
import jakarta.validation.ConstraintViolationException;
import jakarta.validation.constraints.NotNull;
import lombok.Builder;
import lombok.Getter;
import lombok.NoArgsConstructor;
import lombok.experimental.SuperBuilder;
@@ -17,14 +19,17 @@ import java.util.List;
@FileInputValidation
public class FileInput extends Input<URI> {
public static final String DEFAULT_EXTENSION = ".upl";
private static final String DEFAULT_EXTENSION = ".upl";
@Deprecated(since = "0.24", forRemoval = true)
public String extension;
/**
* List of allowed file extensions (e.g., [".csv", ".txt", ".pdf"]).
* Each extension must start with a dot.
*/
private List<String> allowedFileExtensions;
/**
* Gets the file extension from the URI's path
*/
@@ -48,4 +53,15 @@ public class FileInput extends Input<URI> {
);
}
}
public static String findFileInputExtension(@NotNull final List<Input<?>> inputs, @NotNull final String fileName) {
String res = inputs.stream()
.filter(in -> in instanceof FileInput)
.filter(in -> in.getId().equals(fileName))
.filter(flowInput -> ((FileInput) flowInput).getExtension() != null)
.map(flowInput -> ((FileInput) flowInput).getExtension())
.findFirst()
.orElse(FileInput.DEFAULT_EXTENSION);
return res.startsWith(".") ? res : "." + res;
}
}
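
A short sketch of the extension lookup added above (the FileInput builder setters are assumptions; the helper normalizes the extension to start with a dot and falls back to the ".upl" default):

static void resolveExtensions() {
    List<Input<?>> inputs = List.of(FileInput.builder().id("report").extension("csv").build());
    String forReport = FileInput.findFileInputExtension(inputs, "report"); // ".csv" (dot prepended)
    String forOther  = FileInput.findFileInputExtension(inputs, "other");  // ".upl" (the default)
}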

View File

@@ -0,0 +1,12 @@
package io.kestra.core.models.flows.sla;
import java.time.Instant;
import java.util.function.Consumer;
public interface SLAMonitorStorage {
void save(SLAMonitor slaMonitor);
void purge(String executionId);
void processExpired(Instant now, Consumer<SLAMonitor> consumer);
}

View File

@@ -0,0 +1,25 @@
package io.kestra.core.models.listeners;
import io.micronaut.core.annotation.Introspected;
import lombok.Builder;
import lombok.Value;
import io.kestra.core.models.conditions.Condition;
import io.kestra.core.models.tasks.Task;
import java.util.List;
import jakarta.validation.Valid;
import jakarta.validation.constraints.NotEmpty;
@Value
@Builder
@Introspected
public class Listener {
String description;
@Valid
List<Condition> conditions;
@Valid
@NotEmpty
List<Task> tasks;
}

View File

@@ -54,7 +54,12 @@ public class Property<T> {
private String expression;
private T value;
private Property(String expression) {
/**
* @deprecated use {@link #ofExpression(String)} instead.
*/
@Deprecated
// Note: once no longer used externally, this constructor will not be deleted but made private so it can only be used by ofExpression(String) and the deserializer
public Property(String expression) {
this.expression = expression;
}
@@ -118,6 +123,14 @@ public class Property<T> {
return p;
}
/**
* @deprecated use {@link #ofValue(Object)} instead.
*/
@Deprecated
public static <V> Property<V> of(V value) {
return ofValue(value);
}
/**
* Build a new Property object with a Pebble expression.<br>
* <p>
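
A hedged migration sketch for the deprecated constructor and factory above, using the replacement factories named in the Javadoc:

static void migrate() {
    Property<String> literal    = Property.of("hello");                    // deprecated, delegates to ofValue
    Property<String> value      = Property.ofValue("hello");               // preferred for literal values
    Property<String> expression = Property.ofExpression("{{ inputs.x }}"); // preferred for Pebble expressions
}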

View File

@@ -15,10 +15,31 @@ public class TaskException extends Exception {
private transient AbstractLogConsumer logConsumer;
/**
* This constructor will most likely be removed in 0.21; we keep it only because removing it now would impact all task runners.
* @deprecated use {@link #TaskException(int, AbstractLogConsumer)} instead.
*/
@Deprecated(forRemoval = true, since = "0.20.0")
public TaskException(int exitCode, int stdOutCount, int stdErrCount) {
this("Command failed with exit code " + exitCode, exitCode, stdOutCount, stdErrCount);
}
public TaskException(int exitCode, AbstractLogConsumer logConsumer) {
this("Command failed with exit code " + exitCode, exitCode, logConsumer);
}
/**
* This constructor will most likely be removed in 0.21; we keep it only because removing it now would impact all task runners.
* @deprecated use {@link #TaskException(String, int, AbstractLogConsumer)} instead.
*/
@Deprecated(forRemoval = true, since = "0.20.0")
public TaskException(String message, int exitCode, int stdOutCount, int stdErrCount) {
super(message);
this.exitCode = exitCode;
this.stdOutCount = stdOutCount;
this.stdErrCount = stdErrCount;
}
public TaskException(String message, int exitCode, AbstractLogConsumer logConsumer) {
super(message);
this.exitCode = exitCode;

View File

@@ -0,0 +1,156 @@
package io.kestra.core.models.templates;
import com.fasterxml.jackson.annotation.JsonIgnore;
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.introspect.AnnotatedMember;
import com.fasterxml.jackson.databind.introspect.JacksonAnnotationIntrospector;
import io.kestra.core.models.DeletedInterface;
import io.kestra.core.models.HasUID;
import io.kestra.core.models.TenantInterface;
import io.kestra.core.models.tasks.Task;
import io.kestra.core.models.validations.ManualConstraintViolation;
import io.kestra.core.serializers.JacksonMapper;
import io.kestra.core.utils.IdUtils;
import io.micronaut.core.annotation.Introspected;
import io.swagger.v3.oas.annotations.Hidden;
import lombok.*;
import lombok.experimental.SuperBuilder;
import java.util.*;
import jakarta.validation.ConstraintViolation;
import jakarta.validation.ConstraintViolationException;
import jakarta.validation.Valid;
import jakarta.validation.constraints.NotBlank;
import jakarta.validation.constraints.NotEmpty;
import jakarta.validation.constraints.NotNull;
import jakarta.validation.constraints.Pattern;
@SuperBuilder(toBuilder = true)
@Getter
@AllArgsConstructor
@NoArgsConstructor
@Introspected
@ToString
@EqualsAndHashCode
public class Template implements DeletedInterface, TenantInterface, HasUID {
private static final ObjectMapper YAML_MAPPER = JacksonMapper.ofYaml().copy()
.setAnnotationIntrospector(new JacksonAnnotationIntrospector() {
@Override
public boolean hasIgnoreMarker(final AnnotatedMember m) {
List<String> exclusions = Arrays.asList("revision", "deleted", "source");
return exclusions.contains(m.getName()) || super.hasIgnoreMarker(m);
}
})
.setDefaultPropertyInclusion(JsonInclude.Include.NON_DEFAULT);
@Setter
@Hidden
@Pattern(regexp = "^[a-z0-9][a-z0-9_-]*")
private String tenantId;
@NotNull
@NotBlank
@Pattern(regexp = "^[a-zA-Z0-9][a-zA-Z0-9._-]*")
private String id;
@NotNull
@Pattern(regexp="^[a-z0-9][a-z0-9._-]*")
private String namespace;
String description;
@Valid
@NotEmpty
private List<Task> tasks;
@Valid
private List<Task> errors;
@Valid
@JsonProperty("finally")
@Getter(AccessLevel.NONE)
protected List<Task> _finally;
public List<Task> getFinally() {
return this._finally;
}
@NotNull
@Builder.Default
private final boolean deleted = false;
/** {@inheritDoc} */
@Override
@JsonIgnore
public String uid() {
return Template.uid(
this.getTenantId(),
this.getNamespace(),
this.getId()
);
}
@JsonIgnore
public static String uid(String tenantId, String namespace, String id) {
return IdUtils.fromParts(
tenantId,
namespace,
id
);
}
public Optional<ConstraintViolationException> validateUpdate(Template updated) {
Set<ConstraintViolation<?>> violations = new HashSet<>();
if (!updated.getId().equals(this.getId())) {
violations.add(ManualConstraintViolation.of(
"Illegal template id update",
updated,
Template.class,
"template.id",
updated.getId()
));
}
if (!updated.getNamespace().equals(this.getNamespace())) {
violations.add(ManualConstraintViolation.of(
"Illegal namespace update",
updated,
Template.class,
"template.namespace",
updated.getNamespace()
));
}
if (!violations.isEmpty()) {
return Optional.of(new ConstraintViolationException(violations));
} else {
return Optional.empty();
}
}
public String generateSource() {
try {
return YAML_MAPPER.writeValueAsString(this);
} catch (JsonProcessingException e) {
throw new RuntimeException(e);
}
}
public Template toDeleted() {
return new Template(
this.tenantId,
this.id,
this.namespace,
this.description,
this.tasks,
this.errors,
this._finally,
true
);
}
}
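
A minimal sketch of serializing a Template with the mapper configured above (the task list is assumed to be built elsewhere; 'revision', 'deleted' and 'source' are excluded from the output by the custom introspector):

static String templateYaml(List<Task> tasks) {
    Template template = Template.builder()
        .id("my-template")
        .namespace("io.kestra.tests")
        .tasks(tasks) // must be non-empty, see @NotEmpty above
        .build();
    return template.generateSource(); // YAML without 'revision', 'deleted' or 'source'
}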

View File

@@ -0,0 +1,15 @@
package io.kestra.core.models.templates;
import io.micronaut.context.annotation.Requires;
import io.micronaut.core.util.StringUtils;
import java.lang.annotation.*;
@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.PACKAGE, ElementType.TYPE})
@Requires(property = "kestra.templates.enabled", value = StringUtils.TRUE, defaultValue = StringUtils.FALSE)
@Inherited
public @interface TemplateEnabled {
}
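
An illustrative use of the @TemplateEnabled meta-annotation above (the bean itself is hypothetical): classes carrying it are only loaded when 'kestra.templates.enabled' is set to true.

@TemplateEnabled
@Singleton
public class TemplateMaintenanceService {
    // Not instantiated at all when templates are disabled, because @TemplateEnabled
    // carries the @Requires(property = "kestra.templates.enabled") condition.
}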

View File

@@ -0,0 +1,18 @@
package io.kestra.core.models.templates;
import io.micronaut.core.annotation.Introspected;
import lombok.*;
import lombok.experimental.SuperBuilder;
import lombok.extern.jackson.Jacksonized;
@SuperBuilder
@Getter
@AllArgsConstructor
@NoArgsConstructor
@Introspected
@ToString
@EqualsAndHashCode
public class TemplateSource extends Template {
String source;
String exception;
}

View File

@@ -5,19 +5,22 @@ import io.kestra.core.models.executions.ExecutionKilled;
import io.kestra.core.models.executions.LogEntry;
import io.kestra.core.models.executions.MetricEntry;
import io.kestra.core.models.flows.FlowInterface;
import io.kestra.core.models.templates.Template;
import io.kestra.core.models.triggers.Trigger;
import io.kestra.core.runners.*;
public interface QueueFactoryInterface {
String EXECUTION_NAMED = "executionQueue";
String EXECUTION_EVENT_NAMED = "executionEventQueue";
String EXECUTOR_NAMED = "executorQueue";
String WORKERJOB_NAMED = "workerJobQueue";
String WORKERTASKRESULT_NAMED = "workerTaskResultQueue";
String WORKERTRIGGERRESULT_NAMED = "workerTriggerResultQueue";
String FLOW_NAMED = "flowQueue";
String TEMPLATE_NAMED = "templateQueue";
String WORKERTASKLOG_NAMED = "workerTaskLogQueue";
String METRIC_QUEUE = "workerTaskMetricQueue";
String KILL_NAMED = "executionKilledQueue";
String WORKERINSTANCE_NAMED = "workerInstanceQueue";
String WORKERJOBRUNNING_NAMED = "workerJobRunningQueue";
String TRIGGER_NAMED = "triggerQueue";
String SUBFLOWEXECUTIONRESULT_NAMED = "subflowExecutionResultQueue";
@@ -27,7 +30,7 @@ public interface QueueFactoryInterface {
QueueInterface<Execution> execution();
QueueInterface<ExecutionEvent> executionEvent();
QueueInterface<Executor> executor();
WorkerJobQueueInterface workerJob();
@@ -43,6 +46,10 @@ public interface QueueFactoryInterface {
QueueInterface<ExecutionKilled> kill();
QueueInterface<Template> template();
QueueInterface<WorkerInstance> workerInstance();
QueueInterface<WorkerJobRunning> workerJobRunning();
QueueInterface<Trigger> trigger();

View File

@@ -35,24 +35,6 @@ public interface QueueInterface<T> extends Closeable, Pauseable {
void delete(String consumerGroup, T message) throws QueueException;
/**
* Delete all messages of the queue for this key.
* This is used to purge a queue for a specific key.
* A queue implementation may choose not to implement it and purge records differently.
*/
default void deleteByKey(String key) throws QueueException {
// by default do nothing
}
/**
* Delete all messages of the queue for a set of keys.
* This is used to purge a queue for specific keys.
* A queue implementation may choose not to implement it and purge records differently.
*/
default void deleteByKeys(List<String> keys) throws QueueException {
// by default do nothing
}
default Runnable receive(Consumer<Either<T, DeserializationException>> consumer) {
return receive(null, consumer, false);
}
@@ -72,20 +54,4 @@ public interface QueueInterface<T> extends Closeable, Pauseable {
}
Runnable receive(String consumerGroup, Class<?> queueType, Consumer<Either<T, DeserializationException>> consumer, boolean forUpdate);
default Runnable receiveBatch(Class<?> queueType, Consumer<List<Either<T, DeserializationException>>> consumer) {
return receiveBatch(null, queueType, consumer);
}
default Runnable receiveBatch(String consumerGroup, Class<?> queueType, Consumer<List<Either<T, DeserializationException>>> consumer) {
return receiveBatch(consumerGroup, queueType, consumer, true);
}
/**
* Consume a batch of messages.
* By default, it consumes a single message at a time; a queue implementation may override it to support batch consumption.
*/
default Runnable receiveBatch(String consumerGroup, Class<?> queueType, Consumer<List<Either<T, DeserializationException>>> consumer, boolean forUpdate) {
return receive(consumerGroup, either -> consumer.accept(List.of(either)), forUpdate);
}
}
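
A hedged consumer-side sketch of the default batch methods above: a queue that only delivers single messages still exposes receiveBatch, each message arriving wrapped in a one-element list (using Indexer as the queueType and treating the returned Runnable as a cancellation handle are assumptions):

static Runnable consumeLogs(QueueInterface<LogEntry> logQueue) {
    return logQueue.receiveBatch(
        Indexer.class,
        batch -> batch.forEach(either -> {
            // each element is an Either<LogEntry, DeserializationException>
        })
    );
}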

View File

@@ -19,8 +19,12 @@ public class QueueService {
return ((SubflowExecution<?>) object).getExecution().getId();
} else if (object.getClass() == SubflowExecutionResult.class) {
return ((SubflowExecutionResult) object).getExecutionId();
} else if (object.getClass() == ExecutorState.class) {
return ((ExecutorState) object).getExecutionId();
} else if (object.getClass() == Setting.class) {
return ((Setting) object).getKey();
} else if (object.getClass() == Executor.class) {
return ((Executor) object).getExecution().getId();
} else if (object.getClass() == MetricEntry.class) {
return null;
} else if (object.getClass() == SubflowExecutionEnd.class) {

View File

@@ -1,25 +0,0 @@
package io.kestra.core.repositories;
import io.kestra.core.runners.ConcurrencyLimit;
import jakarta.validation.constraints.NotNull;
import java.util.List;
import java.util.Optional;
public interface ConcurrencyLimitRepositoryInterface {
/**
* Update a concurrency limit
* WARNING: this is inherently unsafe and must only be used for administration
*/
ConcurrencyLimit update(ConcurrencyLimit concurrencyLimit);
/**
* Returns all concurrency limits from the database for a given tenant
*/
List<ConcurrencyLimit> find(String tenantId);
/**
* Find a concurrency limit by its id
*/
Optional<ConcurrencyLimit> findById(@NotNull String tenantId, @NotNull String namespace, @NotNull String flowId);
}

View File

@@ -2,6 +2,7 @@ package io.kestra.core.repositories;
import io.kestra.core.models.QueryFilter;
import io.kestra.core.models.executions.Execution;
import io.kestra.core.models.executions.TaskRun;
import io.kestra.core.models.executions.statistics.DailyExecutionStatistics;
import io.kestra.core.models.executions.statistics.ExecutionCount;
import io.kestra.core.models.executions.statistics.Flow;

View File

@@ -1,7 +1,6 @@
package io.kestra.core.repositories;
import io.kestra.core.models.flows.FlowInterface;
import io.kestra.core.models.topologies.FlowTopology;
import java.util.List;
@@ -14,6 +13,4 @@ public interface FlowTopologyRepositoryInterface {
List<FlowTopology> findAll(String tenantId);
FlowTopology save(FlowTopology flowTopology);
void save(FlowInterface flow, List<FlowTopology> flowTopologies);
}

View File

@@ -1,24 +0,0 @@
package io.kestra.core.repositories;
import io.kestra.core.lock.Lock;
import java.util.List;
import java.util.Optional;
/**
* Low-level repository for locks.
* It should never be used directly but only via the {@link io.kestra.core.lock.LockService}.
*/
public interface LockRepositoryInterface {
Optional<Lock> findById(String category, String id);
boolean create(Lock newLock);
default void delete(Lock existing) {
deleteById(existing.getCategory(), existing.getId());
}
void deleteById(String category, String id);
List<Lock> deleteByOwner(String owner);
}

View File

@@ -5,5 +5,7 @@ import java.util.List;
public interface SaveRepositoryInterface<T> {
T save(T item);
int saveBatch(List<T> items);
default int saveBatch(List<T> items) {
throw new UnsupportedOperationException();
}
}

View File

@@ -1,9 +1,7 @@
package io.kestra.core.repositories;
import io.kestra.core.runners.TransactionContext;
import io.kestra.core.server.Service;
import io.kestra.core.server.ServiceInstance;
import io.kestra.core.server.ServiceStateTransition;
import io.kestra.core.server.ServiceType;
import io.micronaut.data.model.Pageable;
@@ -11,7 +9,6 @@ import java.time.Instant;
import java.util.List;
import java.util.Optional;
import java.util.Set;
import java.util.function.BiConsumer;
import java.util.function.Function;
/**
@@ -61,6 +58,20 @@ public interface ServiceInstanceRepositoryInterface {
*/
ServiceInstance save(ServiceInstance service);
/**
* Finds all service instances which are in the given state.
*
* @return the list of {@link ServiceInstance}.
*/
List<ServiceInstance> findAllInstancesInState(final Service.ServiceState state);
/**
* Finds all service instances which are in any of the given states.
*
* @return the list of {@link ServiceInstance}.
*/
List<ServiceInstance> findAllInstancesInStates(final Set<Service.ServiceState> states);
/**
* Finds all service active instances between the given dates.
*
@@ -73,28 +84,6 @@ public interface ServiceInstanceRepositoryInterface {
final Instant from,
final Instant to);
/**
* Finds all service instances which are NOT {@link Service.ServiceState#RUNNING}, then processes them using the consumer.
*/
void processAllNonRunningInstances(BiConsumer<TransactionContext, ServiceInstance> consumer);
/**
* Attempt to transition the state of a given service to a given new state.
* This method may not update the service if the transition is not valid.
*
* @param instance the service instance.
* @param newState the new state of the service.
* @return an optional of the {@link ServiceInstance} or {@link Optional#empty()} if the service is not running.
*/
ServiceStateTransition.Response mayTransitServiceTo(final TransactionContext txContext,
final ServiceInstance instance,
final Service.ServiceState newState,
final String reason);
/**
* Finds all service instances that are in the given states, then processes them using the consumer.
*/
void processInstanceInStates(Set<Service.ServiceState> states, BiConsumer<TransactionContext, ServiceInstance> consumer);
/**
* Purge all instances in the EMPTY state older than the until date.
*

View File

@@ -0,0 +1,42 @@
package io.kestra.core.repositories;
import io.micronaut.data.model.Pageable;
import io.kestra.core.models.templates.Template;
import java.util.List;
import java.util.Optional;
import jakarta.annotation.Nullable;
public interface TemplateRepositoryInterface {
Optional<Template> findById(String tenantId, String namespace, String id);
List<Template> findAll(String tenantId);
List<Template> findAllWithNoAcl(String tenantId);
List<Template> findAllForAllTenants();
ArrayListTotal<Template> find(
Pageable pageable,
@Nullable String query,
@Nullable String tenantId,
@Nullable String namespace
);
// Should normally return TemplateWithSource, but it doesn't exist yet
List<Template> find(
@Nullable String query,
@Nullable String tenantId,
@Nullable String namespace
);
List<Template> findByNamespace(String tenantId, String namespace);
Template create(Template template);
Template update(Template template, Template previous);
void delete(Template template);
List<String> findDistinctNamespace(String tenantId);
}

View File

@@ -0,0 +1,13 @@
package io.kestra.core.repositories;
import io.kestra.core.runners.WorkerJobRunning;
import java.util.Optional;
public interface WorkerJobRunningRepositoryInterface {
Optional<WorkerJobRunning> findByKey(String uid);
void deleteByKey(String uid);
}

View File

@@ -1,11 +1,9 @@
package io.kestra.core.runners;
import io.kestra.core.models.executions.Execution;
import io.kestra.core.models.flows.FlowInterface;
import io.kestra.core.models.flows.FlowWithSource;
import io.kestra.core.repositories.FlowRepositoryInterface;
import io.kestra.core.services.FlowListenersInterface;
import io.kestra.core.services.PluginDefaultService;
import jakarta.inject.Singleton;
import java.util.Objects;
import lombok.Setter;
@@ -17,14 +15,12 @@ import java.util.Optional;
@Singleton
public class DefaultFlowMetaStore implements FlowMetaStoreInterface {
private final FlowRepositoryInterface flowRepository;
private final PluginDefaultService pluginDefaultService;
@Setter
private List<FlowWithSource> allFlows;
public DefaultFlowMetaStore(FlowListenersInterface flowListeners, FlowRepositoryInterface flowRepository, PluginDefaultService pluginDefaultService) {
public DefaultFlowMetaStore(FlowListenersInterface flowListeners, FlowRepositoryInterface flowRepository) {
this.flowRepository = flowRepository;
this.pluginDefaultService = pluginDefaultService;
flowListeners.listen(flows -> allFlows = flows);
}
@@ -57,9 +53,4 @@ public class DefaultFlowMetaStore implements FlowMetaStoreInterface {
public Boolean isReady() {
return true;
}
@Override
public Optional<FlowWithSource> findByExecutionThenInjectDefaults(Execution execution) {
return findByExecution(execution).map(it -> pluginDefaultService.injectDefaults(it, execution));
}
}

View File

@@ -1,89 +0,0 @@
package io.kestra.core.runners;
import io.kestra.core.metrics.MetricRegistry;
import io.kestra.core.repositories.FlowTopologyRepositoryInterface;
import io.micronaut.context.ApplicationContext;
import jakarta.annotation.PostConstruct;
import jakarta.inject.Inject;
import jakarta.inject.Singleton;
import lombok.extern.slf4j.Slf4j;
import java.util.HashMap;
import java.util.Map;
/**
* This class is responsible for indexing the queue synchronously at message production time. It is used by the Queue itself.<p>
* Some queue messages are batch-indexed asynchronously via the regular {@link io.kestra.core.runners.Indexer},
* which listens to (receives) those queue messages.
*/
@Slf4j
@Singleton
public class DefaultQueueIndexer implements QueueIndexer {
private volatile Map<Class<?>, QueueIndexerRepository<?>> repositories;
private final MetricRegistry metricRegistry;
private final ApplicationContext applicationContext;
@Inject
public DefaultQueueIndexer(ApplicationContext applicationContext) {
this.applicationContext = applicationContext;
this.metricRegistry = applicationContext.getBean(MetricRegistry.class);
}
private Map<Class<?>, QueueIndexerRepository<?>> getRepositories() {
if (repositories == null) {
synchronized (this) {
if (repositories == null) {
repositories = new HashMap<>();
applicationContext.getBeansOfType(QueueIndexerRepository.class)
.forEach(saveRepositoryInterface -> {
repositories.put(saveRepositoryInterface.getItemClass(), saveRepositoryInterface);
});
}
}
}
return repositories;
}
// FIXME: today this limits this indexer to the JDBC queue and repository only.
// To be able to use the JDBC queue with another repository, we would need to check in each QueueIndexerRepository whether it's a JDBC transaction before casting.
@Override
public void accept(TransactionContext txContext, Object item) {
Map<Class<?>, QueueIndexerRepository<?>> repositories = getRepositories();
if (repositories.containsKey(item.getClass())) {
this.metricRegistry.counter(MetricRegistry.METRIC_INDEXER_REQUEST_COUNT, MetricRegistry.METRIC_INDEXER_REQUEST_COUNT_DESCRIPTION, "type", item.getClass().getName()).increment();
this.metricRegistry.counter(MetricRegistry.METRIC_INDEXER_MESSAGE_IN_COUNT, MetricRegistry.METRIC_INDEXER_MESSAGE_IN_COUNT_DESCRIPTION, "type", item.getClass().getName()).increment();
this.metricRegistry.timer(MetricRegistry.METRIC_INDEXER_REQUEST_DURATION, MetricRegistry.METRIC_INDEXER_REQUEST_DURATION_DESCRIPTION, "type", item.getClass().getName()).record(() -> {
QueueIndexerRepository<?> indexerRepository = repositories.get(item.getClass());
if (indexerRepository instanceof FlowTopologyRepositoryInterface) {
// we allow flow topology to fail indexation
try {
save(indexerRepository, txContext, item);
} catch (Exception e) {
log.error("Unable to index a flow topology, skipping it", e);
}
} else {
save(indexerRepository, txContext, cast(item));
}
this.metricRegistry.counter(MetricRegistry.METRIC_INDEXER_MESSAGE_OUT_COUNT, MetricRegistry.METRIC_INDEXER_MESSAGE_OUT_COUNT_DESCRIPTION, "type", item.getClass().getName()).increment();
});
}
}
private void save(QueueIndexerRepository<?> indexerRepository, TransactionContext txContext, Object item) {
if (indexerRepository.supports(txContext.getClass())) {
indexerRepository.save(txContext, cast(item));
} else {
indexerRepository.save(cast(item));
}
}
@SuppressWarnings("unchecked")
private static <T> T cast(Object message) {
return (T) message;
}
}

View File

@@ -511,6 +511,17 @@ public class DefaultRunContext extends RunContext {
}
}
/**
* {@inheritDoc}
*/
@Override
@SuppressWarnings("unchecked")
public String tenantId() {
Map<String, String> flow = (Map<String, String>) this.getVariables().get("flow");
// normally, only tests may lack the flow variable
return flow != null ? flow.get("tenantId") : null;
}
/**
* {@inheritDoc}
*/

View File

@@ -1,17 +0,0 @@
package io.kestra.core.runners;
import io.kestra.core.models.HasUID;
import io.kestra.core.models.executions.Execution;
import java.time.Instant;
public record ExecutionEvent(String tenantId, String namespace, String flowId, String executionId, Instant eventDate, ExecutionEventType eventType) implements HasUID {
public ExecutionEvent(Execution execution, ExecutionEventType eventType) {
this(execution.getTenantId(), execution.getNamespace(), execution.getFlowId(), execution.getId(), Instant.now(), eventType);
}
@Override
public String uid() {
return executionId;
}
}

View File

@@ -1,7 +0,0 @@
package io.kestra.core.runners;
public enum ExecutionEventType {
CREATED,
UPDATED,
TERMINATED,
}

View File

@@ -1,29 +0,0 @@
package io.kestra.core.runners;
import io.kestra.core.models.executions.Execution;
import io.kestra.core.runners.ExecutionQueued;
import io.kestra.core.runners.TransactionContext;
import java.util.function.BiConsumer;
/**
* This state store is used by the {@link Executor} to handle executions queued by the flow concurrency limit.
*/
public interface ExecutionQueuedStateStore {
/**
* remove a queued execution.
*/
void remove(Execution execution);
/**
* Save a queued execution.
*
* @implNote Implementors that support transaction must use the provided {@link TransactionContext} to attach to the current transaction.
*/
void save(TransactionContext txContext, ExecutionQueued executionQueued);
/**
* Pop a queued execution: remove the oldest one and process it with the provided consumer.
*/
void pop(String tenantId, String namespace, String flowId, BiConsumer<TransactionContext, Execution> consumer);
}

View File

@@ -1,7 +1,197 @@
package io.kestra.core.runners;
import io.kestra.core.server.Service;
import com.fasterxml.jackson.annotation.JsonIgnore;
import io.kestra.core.models.executions.*;
import io.kestra.core.models.flows.FlowWithException;
import io.kestra.core.models.flows.FlowWithSource;
import io.kestra.core.models.flows.State;
import lombok.AllArgsConstructor;
import lombok.Getter;
public interface Executor extends Service, Runnable {
import java.util.ArrayList;
import java.util.List;
// TODO for 2.0: this class is used as a queue consumer (which should have been the ExecutorInterface instead),
// a queue message (only in Kafka) and an execution context.
// At some point, we should rename it to ExecutorContext and move it to the executor module,
// then rename the ExecutorInterface to just Executor (to be used as a queue consumer)
@Getter
@AllArgsConstructor
public class Executor {
private Execution execution;
private Exception exception;
private final List<String> from = new ArrayList<>();
private Long offset;
@JsonIgnore
private boolean executionUpdated = false;
private FlowWithSource flow;
private final List<TaskRun> nexts = new ArrayList<>();
private final List<WorkerTask> workerTasks = new ArrayList<>();
private final List<ExecutionDelay> executionDelays = new ArrayList<>();
private WorkerTaskResult joinedWorkerTaskResult;
private final List<SubflowExecution<?>> subflowExecutions = new ArrayList<>();
private final List<SubflowExecutionResult> subflowExecutionResults = new ArrayList<>();
private SubflowExecutionResult joinedSubflowExecutionResult;
private ExecutionRunning executionRunning;
private ExecutionResumed executionResumed;
private ExecutionResumed joinedExecutionResumed;
private final List<WorkerTrigger> workerTriggers = new ArrayList<>();
private WorkerJob workerJobToResubmit;
private State.Type originalState;
private SubflowExecutionEnd subflowExecutionEnd;
private SubflowExecutionEnd joinedSubflowExecutionEnd;
/**
* The sequence id should be incremented each time the execution is persisted after mutation.
*/
private long seqId = 0L;
/**
* List of {@link ExecutionKilled} to be propagated as part of the execution.
*/
private List<ExecutionKilledExecution> executionKilled;
public Executor(Execution execution, Long offset) {
this.execution = execution;
this.offset = offset;
this.originalState = execution.getState().getCurrent();
}
public Executor(Execution execution, Long offset, long seqId) {
this.execution = execution;
this.offset = offset;
this.seqId = seqId;
this.originalState = execution.getState().getCurrent();
}
public Executor(WorkerTaskResult workerTaskResult) {
this.joinedWorkerTaskResult = workerTaskResult;
}
public Executor(SubflowExecutionResult subflowExecutionResult) {
this.joinedSubflowExecutionResult = subflowExecutionResult;
}
public Executor(SubflowExecutionEnd subflowExecutionEnd) {
this.joinedSubflowExecutionEnd = subflowExecutionEnd;
}
public Executor(WorkerJob workerJob) {
this.workerJobToResubmit = workerJob;
}
public Executor(ExecutionResumed executionResumed) {
this.joinedExecutionResumed = executionResumed;
}
public Executor(List<ExecutionKilledExecution> executionKilled) {
this.executionKilled = executionKilled;
}
public Boolean canBeProcessed() {
return !(this.getException() != null || this.getFlow() == null || this.getFlow() instanceof FlowWithException || this.getFlow().getTasks() == null ||
this.getExecution().isDeleted() || this.getExecution().getState().isPaused() || this.getExecution().getState().isBreakpoint() || this.getExecution().getState().isQueued());
}
public Executor withFlow(FlowWithSource flow) {
this.flow = flow;
return this;
}
public Executor withExecution(Execution execution, String from) {
this.execution = execution;
this.from.add(from);
this.executionUpdated = true;
return this;
}
public Executor withException(Exception exception, String from) {
this.exception = exception;
this.from.add(from);
return this;
}
public Executor withTaskRun(List<TaskRun> taskRuns, String from) {
this.nexts.addAll(taskRuns);
this.from.add(from);
return this;
}
public Executor withWorkerTasks(List<WorkerTask> workerTasks, String from) {
this.workerTasks.addAll(workerTasks);
this.from.add(from);
return this;
}
public Executor withWorkerTriggers(List<WorkerTrigger> workerTriggers, String from) {
this.workerTriggers.addAll(workerTriggers);
this.from.add(from);
return this;
}
public Executor withWorkerTaskDelays(List<ExecutionDelay> executionDelays, String from) {
this.executionDelays.addAll(executionDelays);
this.from.add(from);
return this;
}
public Executor withSubflowExecutions(List<SubflowExecution<?>> subflowExecutions, String from) {
this.subflowExecutions.addAll(subflowExecutions);
this.from.add(from);
return this;
}
public Executor withSubflowExecutionResults(List<SubflowExecutionResult> subflowExecutionResults, String from) {
this.subflowExecutionResults.addAll(subflowExecutionResults);
this.from.add(from);
return this;
}
public Executor withExecutionRunning(ExecutionRunning executionRunning) {
this.executionRunning = executionRunning;
return this;
}
public Executor withExecutionResumed(ExecutionResumed executionResumed) {
this.executionResumed = executionResumed;
return this;
}
public Executor withExecutionKilled(final List<ExecutionKilledExecution> executionKilled) {
this.executionKilled = executionKilled;
return this;
}
public Executor withSubflowExecutionEnd(SubflowExecutionEnd subflowExecutionEnd) {
this.subflowExecutionEnd = subflowExecutionEnd;
return this;
}
public Executor serialize() {
return new Executor(
this.execution,
this.offset,
this.seqId
);
}
/**
* Increments and returns the execution sequence id.
*
* @return the sequence id.
*/
public long incrementAndGetSeqId() {
this.seqId++;
return seqId;
}
}
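
A hedged sketch of how the mutable executor context above is typically threaded through processing steps (the 'from' step names are illustrative):

static void process(Execution execution, FlowWithSource flow, List<TaskRun> nexts) {
    Executor executor = new Executor(execution, 0L)
        .withFlow(flow)
        .withTaskRun(nexts, "handleNext"); // records what was produced and by which step
    if (executor.canBeProcessed()) {
        long seqId = executor.incrementAndGetSeqId(); // bump before persisting the mutated execution
    }
}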

View File

@@ -0,0 +1,7 @@
package io.kestra.core.runners;
import io.kestra.core.server.Service;
public interface ExecutorInterface extends Service, Runnable {
}

View File

@@ -0,0 +1,21 @@
package io.kestra.core.runners;
import io.kestra.core.models.flows.State;
import lombok.Data;
import lombok.NoArgsConstructor;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
@Data
@NoArgsConstructor
public class ExecutorState {
private String executionId;
private Map<String, State.Type> workerTaskDeduplication = new ConcurrentHashMap<>();
private Map<String, String> childDeduplication = new ConcurrentHashMap<>();
private Map<String, State.Type> subflowExecutionDeduplication = new ConcurrentHashMap<>();
public ExecutorState(String executionId) {
this.executionId = executionId;
}
}

View File

@@ -82,8 +82,7 @@ public abstract class FilesService {
}
private static String resolveUniqueNameForFile(final Path path) {
String filename = path.getFileName().toString();
String encodedFilename = java.net.URLEncoder.encode(filename, java.nio.charset.StandardCharsets.UTF_8);
return IdUtils.from(path.toString()) + "-" + encodedFilename;
String filename = path.getFileName().toString().replace(' ', '+');
return IdUtils.from(path.toString()) + "-" + filename;
}
}

View File

@@ -64,11 +64,11 @@ import static io.kestra.core.utils.Rethrow.throwFunction;
public class FlowInputOutput {
private static final Pattern URI_PATTERN = Pattern.compile("^[a-z]+:\\/\\/(?:www\\.)?[-a-zA-Z0-9@:%._\\+~#=]{1,256}\\.[a-zA-Z0-9()]{1,6}\\b(?:[-a-zA-Z0-9()@:%_\\+.~#?&\\/=]*)$");
private static final ObjectMapper YAML_MAPPER = JacksonMapper.ofYaml();
private final StorageInterface storageInterface;
private final Optional<String> secretKey;
private final RunContextFactory runContextFactory;
@Inject
public FlowInputOutput(
StorageInterface storageInterface,
@@ -79,7 +79,7 @@ public class FlowInputOutput {
this.runContextFactory = runContextFactory;
this.secretKey = Optional.ofNullable(secretKey);
}
/**
* Validate all the inputs of a given execution of a flow.
*
@@ -93,11 +93,11 @@ public class FlowInputOutput {
final Execution execution,
final Publisher<CompletedPart> data) {
if (ListUtils.isEmpty(inputs)) return Mono.just(Collections.emptyList());
return readData(inputs, execution, data, false)
.map(inputData -> resolveInputs(inputs, flow, execution, inputData, false));
}
/**
* Reads all the inputs of a given execution of a flow.
*
@@ -111,7 +111,7 @@ public class FlowInputOutput {
final Publisher<CompletedPart> data) {
return this.readExecutionInputs(flow.getInputs(), flow, execution, data);
}
/**
* Reads all the inputs of a given execution of a flow.
*
@@ -126,7 +126,7 @@ public class FlowInputOutput {
final Publisher<CompletedPart> data) {
return readData(inputs, execution, data, true).map(inputData -> this.readExecutionInputs(inputs, flow, execution, inputData));
}
private Mono<Map<String, Object>> readData(List<Input<?>> inputs, Execution execution, Publisher<CompletedPart> data, boolean uploadFiles) {
return Flux.from(data)
.publishOn(Schedulers.boundedElastic())
@@ -142,7 +142,7 @@ public class FlowInputOutput {
runContext.logger().warn("Using a deprecated way to upload a FILE input. You must set the input 'id' as part name and set the name of the file using the regular 'filename' part attribute.");
}
String inputId = oldStyleInput ? fileUpload.getFilename() : fileUpload.getName();
String fileName = oldStyleInput ? FileInput.DEFAULT_EXTENSION : fileUpload.getFilename();
String fileName = oldStyleInput ? FileInput.findFileInputExtension(inputs, fileUpload.getFilename()) : fileUpload.getFilename();
if (!uploadFiles) {
URI from = URI.create("kestra://" + StorageContext
@@ -153,7 +153,7 @@ public class FlowInputOutput {
sink.next(Map.entry(inputId, from.toString()));
} else {
try {
final String fileExtension = FileInput.DEFAULT_EXTENSION;
final String fileExtension = FileInput.findFileInputExtension(inputs, fileName);
String prefix = StringUtils.leftPad(fileName + "_", 3, "_");
File tempFile = File.createTempFile(prefix, fileExtension);
@@ -235,7 +235,7 @@ public class FlowInputOutput {
}
return MapUtils.flattenToNestedMap(resolved);
}
/**
* Utility method for retrieving types inputs.
*
@@ -252,7 +252,7 @@ public class FlowInputOutput {
) {
return resolveInputs(inputs, flow, execution, data, true);
}
public List<InputAndValue> resolveInputs(
final List<Input<?>> inputs,
final FlowInterface flow,
@@ -325,7 +325,7 @@ public class FlowInputOutput {
}
});
resolvable.setInput(input);
Object value = resolvable.get().value();
// resolve default if needed
@@ -366,10 +366,10 @@ public class FlowInputOutput {
public static Object resolveDefaultValue(Input<?> input, PropertyContext renderer) throws IllegalVariableEvaluationException {
return switch (input.getType()) {
case STRING, SELECT, SECRET, EMAIL -> resolveDefaultPropertyAs(input, renderer, String.class);
case STRING, ENUM, SELECT, SECRET, EMAIL -> resolveDefaultPropertyAs(input, renderer, String.class);
case INT -> resolveDefaultPropertyAs(input, renderer, Integer.class);
case FLOAT -> resolveDefaultPropertyAs(input, renderer, Float.class);
case BOOL -> resolveDefaultPropertyAs(input, renderer, Boolean.class);
case BOOLEAN, BOOL -> resolveDefaultPropertyAs(input, renderer, Boolean.class);
case DATETIME -> resolveDefaultPropertyAs(input, renderer, Instant.class);
case DATE -> resolveDefaultPropertyAs(input, renderer, LocalDate.class);
case TIME -> resolveDefaultPropertyAs(input, renderer, LocalTime.class);
@@ -478,7 +478,7 @@ public class FlowInputOutput {
private Object parseType(Execution execution, Type type, String id, Type elementType, Object current) throws Exception {
try {
return switch (type) {
case SELECT, STRING, EMAIL -> current.toString();
case SELECT, ENUM, STRING, EMAIL -> current.toString();
case SECRET -> {
if (secretKey.isEmpty()) {
throw new Exception("Unable to use a `SECRET` input/output as encryption is not configured");
@@ -488,6 +488,7 @@ public class FlowInputOutput {
case INT -> current instanceof Integer ? current : Integer.valueOf(current.toString());
// Assuming that after the render we must have a double/int, so we can safely use its toString representation
case FLOAT -> current instanceof Float ? current : Float.valueOf(current.toString());
case BOOLEAN -> current instanceof Boolean ? current : Boolean.valueOf(current.toString());
case BOOL -> current instanceof Boolean ? current : Boolean.valueOf(current.toString());
case DATETIME -> current instanceof Instant ? current : Instant.parse(current.toString());
case DATE -> current instanceof LocalDate ? current : LocalDate.parse(current.toString());
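The deprecation warning above describes the multipart contract for FILE inputs: the part name must be the input id, and the file's own name travels in the part's regular filename attribute. A minimal, hypothetical sketch using Micronaut's HTTP client multipart builder (the input id "myFile", the file path, and the media type are made up for illustration; the exact builder overload is assumed):

    import io.micronaut.http.MediaType;
    import io.micronaut.http.client.multipart.MultipartBody;
    import java.io.File;

    // New-style upload: the part name is the input id, the file name is carried
    // by the regular "filename" attribute of the part.
    MultipartBody body = MultipartBody.builder()
        .addPart("myFile", "data.csv", MediaType.TEXT_PLAIN_TYPE, new File("/tmp/data.csv"))
        .build();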

View File

@@ -49,6 +49,4 @@ public interface FlowMetaStoreInterface {
Optional.of(execution.getFlowRevision())
);
}
Optional<FlowWithSource> findByExecutionThenInjectDefaults(Execution execution);
}

View File

@@ -1,14 +0,0 @@
package io.kestra.core.runners;
public final class NoTransactionContext implements TransactionContext {
public static final NoTransactionContext INSTANCE = new NoTransactionContext();
private NoTransactionContext() {
// should only have one instance
}
@Override
public <T extends TransactionContext> boolean supports(Class<T> clazz) {
return NoTransactionContext.class.isAssignableFrom(clazz);
}
}

View File

@@ -1,5 +0,0 @@
package io.kestra.core.runners;
public interface QueueIndexer {
void accept(TransactionContext txContext, Object item);
}

View File

@@ -1,11 +0,0 @@
package io.kestra.core.runners;
public interface QueueIndexerRepository<T> {
T save(TransactionContext txContext, T message);
T save(T item);
Class<T> getItemClass();
<TX extends TransactionContext> boolean supports(Class<TX> clazz);
}

View File

@@ -7,6 +7,7 @@ import io.kestra.core.exceptions.IllegalVariableEvaluationException;
import io.kestra.core.models.executions.AbstractMetricEntry;
import io.kestra.core.models.property.Property;
import io.kestra.core.models.property.PropertyContext;
import io.kestra.core.storages.StateStore;
import io.kestra.core.storages.Storage;
import io.kestra.core.storages.kv.KVStore;
import org.slf4j.Logger;
@@ -135,6 +136,12 @@ public abstract class RunContext implements PropertyContext {
*/
public abstract void cleanup();
/**
* @deprecated use flowInfo().tenantId() instead
*/
@Deprecated(forRemoval = true)
public abstract String tenantId();
public abstract FlowInfo flowInfo();
/**
@@ -169,6 +176,14 @@ public abstract class RunContext implements PropertyContext {
*/
public abstract KVStore namespaceKv(String namespace);
/**
* @deprecated use #namespaceKv(String) instead
*/
@Deprecated(since = "1.1.0", forRemoval = true)
public StateStore stateStore() {
return new StateStore(this, true);
}
/**
* Get access to local paths of the host machine.
*/
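Both deprecations above point at replacement accessors on RunContext. A hedged migration sketch follows (only flowInfo().tenantId() is named by the Javadoc; FlowInfo#namespace() and the variable names are assumptions for illustration):

    // Instead of the deprecated runContext.tenantId():
    String tenantId = runContext.flowInfo().tenantId();

    // Instead of the deprecated runContext.stateStore(), use the namespace KV store:
    KVStore kv = runContext.namespaceKv(runContext.flowInfo().namespace());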

View File

@@ -8,7 +8,6 @@ import io.kestra.core.models.flows.FlowInterface;
import io.kestra.core.queues.QueueException;
import io.kestra.core.queues.QueueFactoryInterface;
import io.kestra.core.queues.QueueInterface;
import io.kestra.core.repositories.ExecutionRepositoryInterface;
import io.kestra.core.repositories.FlowRepositoryInterface;
import io.kestra.core.services.ExecutionService;
import io.kestra.core.utils.Await;
@@ -35,16 +34,9 @@ public class RunnerUtils {
@Named(QueueFactoryInterface.EXECUTION_NAMED)
protected QueueInterface<Execution> executionQueue;
@Inject
@Named(QueueFactoryInterface.EXECUTION_EVENT_NAMED)
protected QueueInterface<ExecutionEvent> executionEventQueue;
@Inject
private FlowRepositoryInterface flowRepository;
@Inject
private ExecutionRepositoryInterface executionRepository;
@Inject
private ExecutionService executionService;
@@ -158,10 +150,9 @@ public class RunnerUtils {
public Execution awaitExecution(Predicate<Execution> predicate, Runnable executionEmitter, Duration duration) throws TimeoutException {
AtomicReference<Execution> receive = new AtomicReference<>();
Runnable cancel = this.executionEventQueue.receive(null, current -> {
var execution = executionRepository.findById(current.getLeft().tenantId(), current.getLeft().executionId()).orElseThrow();
if (predicate.test(execution)) {
receive.set(execution);
Runnable cancel = this.executionQueue.receive(null, current -> {
if (predicate.test(current.getLeft())) {
receive.set(current.getLeft());
}
}, false);

View File

@@ -1,13 +0,0 @@
package io.kestra.core.runners;
public interface TransactionContext {
default <T extends TransactionContext> T unwrap(Class<T> clazz) {
if (clazz.isInstance(this)) {
return clazz.cast(this);
}
throw new IllegalArgumentException("Cannot unwrap " + this.getClass().getName() + " to " + clazz.getName());
}
<T extends TransactionContext> boolean supports(Class<T> clazz);
}

View File

@@ -1,6 +1,5 @@
package io.kestra.core.runners;
import io.micronaut.context.annotation.Requires;
import io.micronaut.context.annotation.Secondary;
import jakarta.inject.Singleton;
@@ -11,7 +10,7 @@ import java.util.Set;
*
* @see io.kestra.core.models.tasks.WorkerGroup
*/
public interface WorkerGroupMetaStore {
public interface WorkerGroupExecutorInterface {
/**
* Checks whether a Worker Group exists for the given key and tenant.
@@ -40,12 +39,12 @@ public interface WorkerGroupMetaStore {
Set<String> listAllWorkerGroupKeys();
/**
* Default {@link WorkerGroupMetaStore} implementation.
* Default {@link WorkerGroupExecutorInterface} implementation.
* This class is only used if no other implementation exists.
*/
@Singleton
@Requires(missingBeans = WorkerGroupMetaStore.class)
class DefaultWorkerGroupMetaStore implements WorkerGroupMetaStore {
@Secondary
class DefaultWorkerGroupExecutorInterface implements WorkerGroupExecutorInterface {
@Override
public boolean isWorkerGroupExistForKey(String key, String tenant) {
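Swapping @Requires(missingBeans = ...) for @Secondary relies on Micronaut's bean-selection rules: the default bean always exists, but any other candidate for the interface is preferred at injection time. A minimal sketch of the pattern, with illustrative class names and the interface methods elided:

    @Singleton
    @Secondary // chosen only when no other WorkerGroupExecutorInterface candidate is present
    class DefaultWorkerGroups implements WorkerGroupExecutorInterface { /* ... */ }

    @Singleton // a plain singleton implementation takes precedence over the @Secondary default
    class PluginWorkerGroups implements WorkerGroupExecutorInterface { /* ... */ }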

View File

@@ -0,0 +1,4 @@
package io.kestra.core.runners;
public class WorkerJobResubmit {
}

View File

@@ -97,6 +97,7 @@ public class Extension extends AbstractExtension {
filters.put("timestampNano", new TimestampNanoFilter());
filters.put("jq", new JqFilter());
filters.put("escapeChar", new EscapeCharFilter());
filters.put("json", new JsonFilter());
filters.put("toJson", new ToJsonFilter());
filters.put("distinct", new DistinctFilter());
filters.put("keys", new KeysFilter());
@@ -137,6 +138,7 @@ public class Extension extends AbstractExtension {
Map<String, Function> functions = new HashMap<>();
functions.put("now", new NowFunction());
functions.put("json", new JsonFunction());
functions.put("fromJson", new FromJsonFunction());
functions.put("currentEachOutput", new CurrentEachOutputFunction());
functions.put(SecretFunction.NAME, secretFunction);
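Together with the deprecated JsonFilter and JsonFunction shims shown further below, this keeps both spellings working while expressions migrate to the new names; for example (illustrative expressions, assuming Kestra's usual templating syntax), {{ someValue | json }} can be rewritten as {{ someValue | toJson }}, and {{ json(someString) }} as {{ fromJson(someString) }}.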

View File

@@ -0,0 +1,17 @@
package io.kestra.core.runners.pebble.filters;
import io.pebbletemplates.pebble.error.PebbleException;
import io.pebbletemplates.pebble.template.EvaluationContext;
import io.pebbletemplates.pebble.template.PebbleTemplate;
import lombok.extern.slf4j.Slf4j;
import java.util.Map;
@Slf4j
@Deprecated
public class JsonFilter extends ToJsonFilter {
@Override
public Object apply(Object input, Map<String, Object> args, PebbleTemplate self, EvaluationContext context, int lineNumber) throws PebbleException {
return super.apply(input, args, self, context, lineNumber);
}
}

View File

@@ -151,10 +151,7 @@ abstract class AbstractFileFunction implements Function {
// if there is a trigger of type execution, we also allow accessing a file from the parent execution
Map<String, String> trigger = (Map<String, String>) context.getVariable(TRIGGER);
if (!isFileUriValid(trigger.get(NAMESPACE), trigger.get("flowId"), trigger.get("executionId"), path)) {
throw new IllegalArgumentException("Unable to read the file '" + path + "' as it didn't belong to the parent execution");
}
return true;
return isFileUriValid(trigger.get(NAMESPACE), trigger.get("flowId"), trigger.get("executionId"), path);
}
return false;
}

View File

@@ -0,0 +1,16 @@
package io.kestra.core.runners.pebble.functions;
import io.pebbletemplates.pebble.template.EvaluationContext;
import io.pebbletemplates.pebble.template.PebbleTemplate;
import lombok.extern.slf4j.Slf4j;
import java.util.Map;
@Slf4j
@Deprecated
public class JsonFunction extends FromJsonFunction {
@Override
public Object execute(Map<String, Object> args, PebbleTemplate self, EvaluationContext context, int lineNumber) {
return super.execute(args, self, context, lineNumber);
}
}

View File

@@ -36,6 +36,44 @@ public final class FileSerde {
}
}
/**
* @deprecated use the {@link #readAll(Reader)} method instead.
*/
@Deprecated(since = "0.19", forRemoval = true)
public static Consumer<FluxSink<Object>> reader(BufferedReader input) {
return s -> {
String row;
try {
while ((row = input.readLine()) != null) {
s.next(convert(row));
}
s.complete();
} catch (IOException e) {
throw new UncheckedIOException(e);
}
};
}
/**
* @deprecated use the {@link #readAll(Reader, Class)} method instead.
*/
@Deprecated(since = "0.19", forRemoval = true)
public static <T> Consumer<FluxSink<T>> reader(BufferedReader input, Class<T> cls) {
return s -> {
String row;
try {
while ((row = input.readLine()) != null) {
s.next(convert(row, cls));
}
s.complete();
} catch (IOException e) {
throw new UncheckedIOException(e);
}
};
}
public static void reader(BufferedReader input, Consumer<Object> consumer) throws IOException {
String row;
while ((row = input.readLine()) != null) {
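The two deprecation notes above point at readAll as the replacement for the FluxSink-based readers. A hedged migration sketch (readAll is named by the Javadoc, but its Reactor Flux return type, the MyRecord class, and the logger are assumptions; imports elided):

    // Before (deprecated): Flux.create(FileSerde.reader(bufferedReader, MyRecord.class))
    // After: let FileSerde build the reactive stream directly from a Reader.
    try (Reader reader = Files.newBufferedReader(path)) {
        Flux<MyRecord> records = FileSerde.readAll(reader, MyRecord.class);
        records.doOnNext(record -> log.debug("row: {}", record)).blockLast();
    }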

View File

@@ -0,0 +1,233 @@
package io.kestra.core.server;
import io.kestra.core.repositories.ServiceInstanceRepositoryInterface;
import io.micronaut.core.annotation.Introspected;
import jakarta.inject.Inject;
import lombok.extern.slf4j.Slf4j;
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.Optional;
import java.util.Random;
import java.util.Set;
import static io.kestra.core.server.Service.ServiceState.*;
/**
* Base class for coordinating service liveness.
*/
@Introspected
@Slf4j
public abstract class AbstractServiceLivenessCoordinator extends AbstractServiceLivenessTask {
private final static int DEFAULT_SCHEDULE_JITTER_MAX_MS = 500;
protected static String DEFAULT_REASON_FOR_DISCONNECTED =
"The service was detected as non-responsive after the session timeout. " +
"Service transitioned to the 'DISCONNECTED' state.";
protected static String DEFAULT_REASON_FOR_NOT_RUNNING =
"The service was detected as non-responsive or terminated after termination grace period. " +
"Service transitioned to the 'NOT_RUNNING' state.";
private static final String TASK_NAME = "service-liveness-coordinator-task";
protected final ServiceLivenessStore store;
protected final ServiceRegistry serviceRegistry;
// mutable for testing purposes
protected String serverId = ServerInstance.INSTANCE_ID;
/**
* Creates a new {@link AbstractServiceLivenessCoordinator} instance.
*
* @param store The {@link ServiceLivenessStore}.
* @param serviceRegistry The {@link ServiceRegistry}.
* @param serverConfig The server configuration.
*/
@Inject
public AbstractServiceLivenessCoordinator(final ServiceLivenessStore store,
final ServiceRegistry serviceRegistry,
final ServerConfig serverConfig) {
super(TASK_NAME, serverConfig);
this.serviceRegistry = serviceRegistry;
this.store = store;
}
/**
* {@inheritDoc}
**/
@Override
protected void onSchedule(Instant now) throws Exception {
if (Optional.ofNullable(serviceRegistry.get(ServiceType.EXECUTOR))
.filter(service -> service.instance().is(RUNNING))
.isEmpty()) {
log.debug(
"The liveness coordinator task was temporarily disabled. Executor is not yet in the RUNNING state."
);
return;
}
// Update all RUNNING but non-responding services to DISCONNECTED.
handleAllNonRespondingServices(now);
// Handle all workers which are not in a RUNNING state.
handleAllWorkersForUncleanShutdown(now);
// Update all services in one of the TERMINATED states to NOT_RUNNING.
handleAllServicesForTerminatedStates(now);
// Update all services in NOT_RUNNING to EMPTY (a.k.a soft delete).
handleAllServiceInNotRunningState();
maybeDetectAndLogNewConnectedServices();
}
/**
* Handles all unresponsive services and updates their status to disconnected.
* <p>
* This method may re-submit tasks if necessary.
*
* @param now the time of the execution.
*/
protected abstract void handleAllNonRespondingServices(final Instant now);
/**
* Handles all worker services which are shut down or considered to be terminated.
* <p>
* This method may re-submit tasks if necessary.
*
* @param now the time of the execution.
*/
protected abstract void handleAllWorkersForUncleanShutdown(final Instant now);
protected abstract void update(final ServiceInstance instance,
final Service.ServiceState state,
final String reason);
/**
* {@inheritDoc}
**/
@Override
protected Duration getScheduleInterval() {
// Multiple Executors can be running in parallel. We add jitter to
// help distribute the load more evenly among the ServiceLivenessCoordinators.
// This also prevents all ServiceLivenessCoordinators from attempting to query the repository simultaneously.
Random r = new Random(); //SONAR
int jitter = r.nextInt(DEFAULT_SCHEDULE_JITTER_MAX_MS);
return serverConfig.liveness().interval().plus(Duration.ofMillis(jitter));
}
protected List<ServiceInstance> filterAllUncleanShutdownServices(final List<ServiceInstance> instances,
final Instant now) {
// List of services for which we don't know the actual state
final List<ServiceInstance> uncleanShutdownServices = new ArrayList<>();
// ...all services that have transitioned to DISCONNECTED or TERMINATING for more than terminationGracePeriod.
uncleanShutdownServices.addAll(instances.stream()
.filter(nonRunning -> nonRunning.state().isDisconnectedOrTerminating())
.filter(disconnectedOrTerminating -> disconnectedOrTerminating.isTerminationGracePeriodElapsed(now))
.peek(instance -> maybeLogNonRespondingAfterTerminationGracePeriod(instance, now))
.toList()
);
// ...all services that have transitioned to TERMINATED_FORCED.
uncleanShutdownServices.addAll(instances.stream()
.filter(nonRunning -> nonRunning.is(Service.ServiceState.TERMINATED_FORCED))
// Only select workers that have been terminated for at least the grace period, to ensure that all in-flight
// task runs had enough time to be fully handled by the executors.
.filter(terminated -> terminated.isTerminationGracePeriodElapsed(now))
.toList()
);
return uncleanShutdownServices;
}
protected List<ServiceInstance> filterAllNonRespondingServices(final List<ServiceInstance> instances,
final Instant now) {
return instances.stream()
.filter(instance -> Objects.nonNull(instance.config())) // protect against non-complete instance
.filter(instance -> instance.config().liveness().enabled())
.filter(instance -> instance.isSessionTimeoutElapsed(now))
// exclude any service running on the same server as the executor, to prevent the latter from shutting down.
.filter(instance -> !instance.server().id().equals(serverId))
// only keep services eligible for liveness probe
.filter(instance -> {
final Instant minInstantForLivenessProbe = now.minus(instance.config().liveness().initialDelay());
return instance.createdAt().isBefore(minInstantForLivenessProbe);
})
// warn
.peek(instance -> log.warn("Detected non-responding service [id={}, type={}, hostname={}] after timeout ({}ms).",
instance.uid(),
instance.type(),
instance.server().hostname(),
now.toEpochMilli() - instance.updatedAt().toEpochMilli()
))
.toList();
}
protected void handleAllServiceInNotRunningState() {
// Soft-delete all services that are now in the NOT_RUNNING state.
store.findAllInstancesInStates(Set.of(Service.ServiceState.NOT_RUNNING))
.forEach(instance -> safelyUpdate(instance, Service.ServiceState.INACTIVE, null));
}
protected void handleAllServicesForTerminatedStates(final Instant now) {
store
.findAllInstancesInStates(Set.of(DISCONNECTED, TERMINATING, TERMINATED_GRACEFULLY, TERMINATED_FORCED))
.stream()
.filter(instance -> !instance.is(ServiceType.WORKER)) // WORKERS are handled above.
.filter(instance -> instance.isTerminationGracePeriodElapsed(now))
.peek(instance -> maybeLogNonRespondingAfterTerminationGracePeriod(instance, now))
.forEach(instance -> safelyUpdate(instance, NOT_RUNNING, DEFAULT_REASON_FOR_NOT_RUNNING));
}
protected void maybeDetectAndLogNewConnectedServices() {
if (log.isDebugEnabled()) {
// Log the newly-connected services (useful for troubleshooting).
store.findAllInstancesInStates(Set.of(CREATED, RUNNING))
.stream()
.filter(instance -> instance.createdAt().isAfter(lastScheduledExecution()))
.forEach(instance -> {
log.debug("Detected new service [id={}, type={}, hostname={}] (started at: {}).",
instance.uid(),
instance.type(),
instance.server().hostname(),
instance.createdAt()
);
});
}
}
protected void safelyUpdate(final ServiceInstance instance,
final Service.ServiceState state,
final String reason) {
try {
update(instance, state, reason);
} catch (Exception e) {
// Log and ignore the exception - it's safe to ignore the error because the run() method is scheduled at a fixed rate.
log.error("Unexpected error while service [id={}, type={}, hostname={}] transitioned from {} to {}.",
instance.uid(),
instance.type(),
instance.server().hostname(),
instance.state(),
state,
e
);
}
}
protected static void maybeLogNonRespondingAfterTerminationGracePeriod(final ServiceInstance instance,
final Instant now) {
if (instance.state().isDisconnectedOrTerminating()) {
log.warn("Detected non-responding service [id={}, type={}, hostname={}] after termination grace period ({}ms).",
instance.uid(),
instance.type(),
instance.server().hostname(),
now.toEpochMilli() - instance.updatedAt().toEpochMilli()
);
}
}
}

View File

@@ -1,86 +0,0 @@
package io.kestra.core.server;
import io.kestra.core.repositories.ServiceInstanceRepositoryInterface;
import jakarta.inject.Inject;
import jakarta.inject.Singleton;
import org.apache.commons.lang3.tuple.ImmutablePair;
import java.time.Instant;
import java.util.Optional;
@Singleton
public class DefaultServiceLivenessUpdater implements ServiceLivenessUpdater {
private final ServiceInstanceRepositoryInterface serviceInstanceRepository;
@Inject
public DefaultServiceLivenessUpdater(ServiceInstanceRepositoryInterface serviceInstanceRepository) {
this.serviceInstanceRepository = serviceInstanceRepository;
}
@Override
public void update(ServiceInstance service) {
this.serviceInstanceRepository.save(service);
}
@Override
public ServiceStateTransition.Response update(final ServiceInstance instance,
final Service.ServiceState newState,
final String reason) {
// FIXME this is a quick & dirty refactoring to make it work cross-queue, but it needs to be carefully reworked to allow transactions if supported
return mayTransitServiceTo(instance, newState, reason);
}
/**
* Attempt to transition the state of a given service to given new state.
* This method may not update the service if the transition is not valid.
*
* @param instance the service instance.
* @param newState the new state of the service.
* @return an optional of the {@link ServiceInstance} or {@link Optional#empty()} if the service is not running.
*/
private ServiceStateTransition.Response mayTransitServiceTo(final ServiceInstance instance,
final Service.ServiceState newState,
final String reason) {
ImmutablePair<ServiceInstance, ServiceInstance> result = mayUpdateStatusById(
instance,
newState,
reason
);
return ServiceStateTransition.logTransitionAndGetResponse(instance, newState, result);
}
/**
* Attempt to transition the state of a given service to given new state.
* This method may not update the service if the transition is not valid.
*
* @param instance the new service instance.
* @param newState the new state of the service.
* @return an {@link Optional} of {@link ImmutablePair} holding the old (left), and new {@link ServiceInstance} or {@code null} if transition failed (right).
* Otherwise, an {@link Optional#empty()} if no service can be found.
*/
private ImmutablePair<ServiceInstance, ServiceInstance> mayUpdateStatusById(final ServiceInstance instance,
final Service.ServiceState newState,
final String reason) {
// Find the ServiceInstance to be updated
Optional<ServiceInstance> optional = serviceInstanceRepository.findById(instance.uid());
// Check whether service was found.
if (optional.isEmpty()) {
return null;
}
// Check whether the status transition is valid before saving.
final ServiceInstance before = optional.get();
if (before.state().isValidTransition(newState)) {
ServiceInstance updated = before
.state(newState, Instant.now(), reason)
.server(instance.server())
.metrics(instance.metrics());
// Synchronize
update(updated);
return new ImmutablePair<>(before, updated);
}
return new ImmutablePair<>(before, null);
}
}

View File

@@ -3,7 +3,6 @@ package io.kestra.core.server;
import com.google.common.annotations.VisibleForTesting;
import io.kestra.core.contexts.KestraContext;
import io.kestra.core.models.ServerType;
import io.kestra.core.repositories.ServiceInstanceRepositoryInterface;
import io.kestra.core.server.ServiceStateTransition.Result;
import io.micronaut.context.annotation.Context;
import io.micronaut.context.annotation.Requires;
@@ -28,7 +27,6 @@ import static io.kestra.core.server.ServiceLivenessManager.OnStateTransitionFail
*/
@Context
@Requires(beans = ServiceLivenessUpdater.class)
@Requires(beans = ServiceInstanceRepositoryInterface.class)
public class ServiceLivenessManager extends AbstractServiceLivenessTask {
private static final Logger log = LoggerFactory.getLogger(ServiceLivenessManager.class);
@@ -252,10 +250,10 @@ public class ServiceLivenessManager extends AbstractServiceLivenessTask {
stateLock.lock();
// Optional callback to be executed at the end.
Runnable returnCallback = null;
localServiceState = localServiceState(service);
try {
if (localServiceState == null) {
return null; // service has been unregistered.
}

View File

@@ -10,25 +10,16 @@ import java.util.Set;
*
* @see ServiceInstance
* @see ServiceLivenessUpdater
* @see DefaultServiceLivenessCoordinator
* @see AbstractServiceLivenessCoordinator
*/
public interface ServiceLivenessStore {
/**
* Finds all service instances that are in one of the given states.
* Finds all service instances which are in one of the given states.
*
* @param states the state of services.
*
* @return the list of {@link ServiceInstance}.
*/
List<ServiceInstance> findAllInstancesInStates(Set<ServiceState> states);
/**
* Finds all service instances that are in the given state.
*
* @param state the state of services.
*
* @return the list of {@link ServiceInstance}.
*/
List<ServiceInstance> findAllInstancesInState(ServiceState state);
}

View File

@@ -6,7 +6,7 @@ import java.util.Optional;
* Service interface for updating the state of a service instance.
*
* @see ServiceLivenessManager
* @see DefaultServiceLivenessCoordinator
* @see AbstractServiceLivenessCoordinator
*/
public interface ServiceLivenessUpdater {
@@ -51,5 +51,4 @@ public interface ServiceLivenessUpdater {
ServiceStateTransition.Response update(final ServiceInstance instance,
final Service.ServiceState newState,
final String reason);
}

View File

@@ -2,41 +2,43 @@ package io.kestra.core.services;
import io.kestra.core.models.executions.Execution;
import io.kestra.core.models.flows.State;
import io.kestra.core.runners.ExecutionQueuedStateStore;
import jakarta.inject.Inject;
import io.kestra.core.queues.QueueException;
import io.kestra.core.runners.ConcurrencyLimit;
import jakarta.inject.Singleton;
import java.util.EnumSet;
import java.util.List;
import java.util.Optional;
import java.util.Set;
@Singleton
public class ConcurrencyLimitService {
/**
* Contains methods to manage concurrency limit.
* This is designed to be used by the API; the executor uses lower-level primitives.
*/
public interface ConcurrencyLimitService {
private static final Set<State.Type> VALID_TARGET_STATES =
Set<State.Type> VALID_TARGET_STATES =
EnumSet.of(State.Type.RUNNING, State.Type.CANCELLED, State.Type.FAILED);
@Inject
private ExecutionQueuedStateStore executionQueuedStateStore;
/**
* Unqueue a queued execution.
*
* @throws IllegalArgumentException in case the execution is not queued or is transitioned to an unsupported state.
* @throws IllegalArgumentException in case the execution is not queued.
*/
public Execution unqueue(Execution execution, State.Type state) {
if (execution.getState().getCurrent() != State.Type.QUEUED) {
throw new IllegalArgumentException("Only QUEUED execution can be unqueued");
}
Execution unqueue(Execution execution, State.Type state) throws QueueException;
state = (state == null) ? State.Type.RUNNING : state;
/**
* Find concurrency limits.
*/
List<ConcurrencyLimit> find(String tenantId);
// Validate the target state, throwing an exception if the state is invalid
if (!VALID_TARGET_STATES.contains(state)) {
throw new IllegalArgumentException("Invalid target state: " + state + ". Valid states are: " + VALID_TARGET_STATES);
}
/**
* Update a concurrency limit.
*/
ConcurrencyLimit update(ConcurrencyLimit concurrencyLimit);
executionQueuedStateStore.remove(execution);
return execution.withState(state);
}
/**
* Find a concurrency limit by its identifier.
*/
Optional<ConcurrencyLimit> findById(String tenantId, String namespace, String flowId);
}
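A short usage sketch based only on the interface methods above (the injected bean, the tenant value, and the namespace/flow identifiers are illustrative assumptions):

    // Move a queued execution straight to RUNNING; the method is declared to throw QueueException.
    Execution running = concurrencyLimitService.unqueue(queuedExecution, State.Type.RUNNING);

    // Inspect concurrency limits for a tenant.
    List<ConcurrencyLimit> limits = concurrencyLimitService.find(tenantId);
    Optional<ConcurrencyLimit> limit = concurrencyLimitService.findById(tenantId, "company.team", "my-flow");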

Some files were not shown because too many files have changed in this diff.