is there a way to not start a job if the same job ...
# ask-community
j
Is there a way to not start a job if the same job is still running from an earlier scheduled run? In other words: I start a job every hour, but occasionally one run is a little flaky and takes 1:20 to finish. I want to skip the next one (no simultaneous runs for the same job).
a
# in dagster.yaml (instance config)
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    max_concurrent_runs: 20
    tag_concurrency_limits:
      - key: "dagster/partition_set"
        value:
          applyLimitPerUniqueValue: true
        limit: 1
I tried something like this, and it seemed to work as expected (it prevented multiple simultaneous runs of the same job). However, if your next run also takes more than an hour, then your runs will queue up, I guess.
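For a plain (non-partitioned) job, a sketch of a per-job variant would be to attach your own tag to the job and key the tag_concurrency_limits entry on that tag instead; the tag key "job_concurrency_key" below is made up for illustration:

from dagster import job, op

@op
def my_op():
    ...

# Runs launched for this job inherit the job's tags, so the run coordinator
# can apply a concurrency limit per job via a custom tag.
# "job_concurrency_key" is an illustrative key, not a built-in Dagster tag.
@job(tags={"job_concurrency_key": "my_hourly_job"})
def my_hourly_job():
    my_op()

The dagster.yaml entry would then mirror the one above, with key: "job_concurrency_key", applyLimitPerUniqueValue: true, and limit: 1.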
d
One way to avoid the runs queueing up would be to get a bit fancier in your schedule function. Here's some pseudocode:
from dagster import schedule, DagsterRunStatus, RunsFilter, RunRequest, SkipReason

@schedule(
    ...  # cron_schedule, job, etc.
)
def my_schedule(context):
    # Check whether any run of this job is currently executing.
    runs = context.instance.get_runs(
        filters=RunsFilter(
            job_name="your_job_name",
            statuses=[DagsterRunStatus.STARTED],
        )
    )
    if runs:
        # An earlier run is still going, so don't launch another one.
        return SkipReason("Run for this job still happening")

    return [
        RunRequest(
            run_key=None,
            run_config={},
        )
    ]
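Note that DagsterRunStatus.STARTED only matches runs that are actively executing; if runs can also sit in the queue (e.g. with the QueuedRunCoordinator above), you may want to count queued and starting runs as in flight as well. A sketch of that variant, where the cron string (hourly, as in the question) and the schedule name are illustrative:

from dagster import schedule, DagsterRunStatus, RunsFilter, RunRequest, SkipReason

# Statuses that count as "in flight": queued and starting runs as well as
# runs that have already begun executing.
IN_FLIGHT_STATUSES = [
    DagsterRunStatus.QUEUED,
    DagsterRunStatus.NOT_STARTED,
    DagsterRunStatus.STARTING,
    DagsterRunStatus.STARTED,
]

@schedule(cron_schedule="0 * * * *", job_name="your_job_name")
def my_non_overlapping_schedule(context):
    in_flight = context.instance.get_runs(
        filters=RunsFilter(job_name="your_job_name", statuses=IN_FLIGHT_STATUSES)
    )
    if in_flight:
        return SkipReason("A run of your_job_name is still in flight")
    return RunRequest(run_key=None, run_config={})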
a
Hi @daniel, what would be the best option for applying such a setting to only some of the jobs? I.e., for some jobs I want concurrency = 1, for others = 10. I don't want to skip the runs, just queue them up. Thanks!