Jeremy
09/19/2022, 1:13 PM
Averell
09/19/2022, 1:28 PM
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    max_concurrent_runs: 20
    tag_concurrency_limits:
      - key: "dagster/partition_set"
        value:
          applyLimitPerUniqueValue: true
        limit: 1
I tried something like this, and it seemed to work as expected (it prevented multiple concurrent runs of the same job).
However, if your next run also takes more than 1 hour, then your runs will queue up, I guess.
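The "dagster/partition_set" key above matches a tag that Dagster attaches to partitioned runs; for a job without partitions, the same pattern can key the limit on a tag the job sets on itself. A minimal sketch, with a made-up tag key and job name, where the dagster.yaml entry would use key "unique_run_key" with applyLimitPerUniqueValue: true and limit: 1:

from dagster import job, op

@op
def do_work():
    pass

# Hypothetical non-partitioned job: the "unique_run_key" tag is invented here
# so that a QueuedRunCoordinator tag_concurrency_limits entry keyed on it can
# cap this job at one concurrent run.
@job(tags={"unique_run_key": "hourly_sync"})
def hourly_sync():
    do_work()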
daniel
09/19/2022, 3:27 PM
from dagster import schedule, DagsterRunStatus, RunsFilter, RunRequest, SkipReason

@schedule(
    ...
)
def my_schedule(context):
    # Skip this tick if a run of the job is still in progress.
    runs = context.instance.get_runs(
        filters=RunsFilter(
            job_name="your_job_name",
            statuses=[DagsterRunStatus.STARTED]
        )
    )
    if runs:
        return SkipReason("Run for this job still happening")
    return [
        RunRequest(
            run_key=None,
            run_config={}
        )
    ]
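A minimal variation on the snippet above (same placeholder names, plus a made-up cron string and job) that also treats queued and starting runs as in flight, so ticks skip rather than piling requests up behind the QueuedRunCoordinator:

from dagster import (
    DagsterRunStatus,
    RunRequest,
    RunsFilter,
    SkipReason,
    job,
    op,
    schedule,
)

@op
def noop():
    pass

# Hypothetical job and hourly cron, used only to make the sketch self-contained.
@job
def your_job():
    noop()

@schedule(cron_schedule="0 * * * *", job=your_job)
def my_hourly_schedule(context):
    # Count queued, starting, and started runs as in flight.
    in_flight = context.instance.get_runs(
        filters=RunsFilter(
            job_name="your_job",
            statuses=[
                DagsterRunStatus.QUEUED,
                DagsterRunStatus.STARTING,
                DagsterRunStatus.STARTED,
            ],
        )
    )
    if in_flight:
        return SkipReason("A run of this job is already queued or in progress")
    return RunRequest(run_key=None, run_config={})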
Averell
09/20/2022, 10:10 PM