# ask-community
We have jobs that run on a schedule every couple of hours, and when they fail they are set up to send a notification to on-call people. The issue is that fixing the problem might take more than a couple of hours, especially if a failure happens in the middle of the night. This can result in a pile-up of notifications that are redundant and noisy. Airflow has `depends_on_past`, so you can prevent a job schedule from executing if the last run of the same job failed. Is there an analog to this for Dagster? I'm seeing `execution_fn` and `should_execute`, which leverage the schedule context, but that context seems to only have information about the current run. Any best-practice suggestions for this situation?
hi @John Cenzano-Fong! I think you're on the right track with the `should_execute` bit. The schedule context has an `instance` property, which gives you access to the `DagsterInstance` (which holds information on past runs and all that good stuff). So within the body of your `should_execute` function, you can query for the last run of your job with something like
```python
runs = context.instance.get_runs(filters=RunsFilter(job_name="my_job"), limit=1)
if runs and runs[0].status == DagsterRunStatus.FAILURE:
    return False  # skip this tick; the previous run is still failing
return True
```
(you can do `from dagster import RunsFilter, DagsterRunStatus` to get those imports)
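For completeness, here is a minimal sketch of the skip-if-last-run-failed check end to end. `FakeInstance` and `FakeRun` are hypothetical stand-ins for `DagsterInstance` and its run records (and plain strings stand in for `DagsterRunStatus` values), so the logic can be exercised without a running Dagster instance:

```python
# Hedged sketch: FakeInstance/FakeRun are stand-ins for Dagster's
# DagsterInstance and run records; in real code, `instance` would come
# from the schedule context and statuses would be DagsterRunStatus values.
from dataclasses import dataclass
from typing import List, Optional

FAILURE = "FAILURE"  # stands in for DagsterRunStatus.FAILURE
SUCCESS = "SUCCESS"  # stands in for DagsterRunStatus.SUCCESS

@dataclass
class FakeRun:
    status: str

class FakeInstance:
    """Stand-in for DagsterInstance; get_runs returns runs newest-first."""
    def __init__(self, runs: List[FakeRun]):
        self._runs = runs

    def get_runs(self, limit: Optional[int] = None) -> List[FakeRun]:
        return self._runs[:limit]

def should_execute(instance: FakeInstance) -> bool:
    """Return False (skip this tick) when the most recent run failed."""
    runs = instance.get_runs(limit=1)
    return not (runs and runs[0].status == FAILURE)

# Usage: a failed latest run suppresses the next scheduled tick.
print(should_execute(FakeInstance([FakeRun(FAILURE), FakeRun(SUCCESS)])))  # False
print(should_execute(FakeInstance([FakeRun(SUCCESS)])))                    # True
print(should_execute(FakeInstance([])))                                    # True (no history yet)
```

Once the on-call person fixes the underlying issue and a run succeeds (or is launched manually), the next tick executes normally, so the noisy repeat notifications stop piling up in the meantime.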
Awesome, thanks @owen!