I have a job that changes a bunch of tables and is...
# ask-community
b
I have a job that changes a bunch of tables and is triggered by a schedule. What is the "Dagster" way of avoiding redundant runs? I.e., to avoid the job running twice in one day if there is some problem (for example, long queues causing such a delay that one run spills into the next day)? Thanks!
j
Hmm, in most cases the runs are able to use the partition key to only work on the subset of data intended. It sounds like this job always runs on all data?
I don’t think we have first-class support for this, but it could be achieved by querying GraphQL for runs and aborting the run if there’s another one earlier the same day. Or doing something similar inside a sensor, so you can avoid launching the run at all
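Not from the thread, but a minimal sketch of the sensor approach johann describes: before requesting a run, the sensor checks the instance for any run of the same job created since midnight UTC and skips if one exists. The job name `my_table_update_job` is a placeholder, and the exact `RunsFilter` / `get_run_records` parameters should be checked against your Dagster version.

```python
from datetime import datetime, timezone

from dagster import DagsterRunStatus, RunRequest, RunsFilter, SkipReason, sensor

from my_project.jobs import my_table_update_job  # placeholder job import


@sensor(job=my_table_update_job)
def once_per_day_sensor(context):
    # Midnight UTC today; any run created after this counts as "today's run".
    today_start = datetime.now(timezone.utc).replace(
        hour=0, minute=0, second=0, microsecond=0
    )

    # Fetch recent runs of this job that are queued, in progress, or succeeded.
    records = context.instance.get_run_records(
        filters=RunsFilter(
            job_name=my_table_update_job.name,
            statuses=[
                DagsterRunStatus.QUEUED,
                DagsterRunStatus.STARTED,
                DagsterRunStatus.SUCCESS,
            ],
        ),
        limit=5,
    )

    def _as_utc(dt):
        # create_timestamp is assumed to be UTC; attach tzinfo if it comes back naive.
        return dt if dt.tzinfo else dt.replace(tzinfo=timezone.utc)

    # Skip launching if any of those runs was created today.
    if any(_as_utc(r.create_timestamp) >= today_start for r in records):
        return SkipReason("The job already has a run for today; skipping.")

    # Using the date as the run key also guards against duplicate requests from this sensor.
    return RunRequest(run_key=today_start.strftime("%Y-%m-%d"))
```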
b
Hi @johann! Thanks for your answer 😁
In this case, I have a job running on a schedule that is supposed to look at a 'status' field and process everything still 'pending' on a daily basis. Therefore, I do not want to look at a specific date partition, as 'pending' rows might span several dates.
Querying GraphQL sounds unidiomatic but effective 😅. Is there any documentation on how to do it?
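For reference (not an answer given in the thread), the GraphQL route would mean posting a query to Dagit's `/graphql` endpoint for recent runs of the job and comparing their start times against today's date. A rough sketch, assuming a local Dagit at `localhost:3000` and a placeholder job name `my_table_update_job`; exact query and field names vary by Dagster version and can be checked in the GraphiQL playground Dagit serves.

```python
import requests

DAGIT_GRAPHQL_URL = "http://localhost:3000/graphql"  # assumed local Dagit instance

# Query recent runs for one job; field names here are an assumption, verify in GraphiQL.
RUNS_QUERY = """
query RunsForJob($jobName: String!) {
  runsOrError(filter: {pipelineName: $jobName}, limit: 20) {
    ... on Runs {
      results {
        runId
        status
        startTime
      }
    }
  }
}
"""

response = requests.post(
    DAGIT_GRAPHQL_URL,
    json={"query": RUNS_QUERY, "variables": {"jobName": "my_table_update_job"}},
)
response.raise_for_status()

# Assumes the query succeeded and returned the Runs type (not an error type).
runs = response.json()["data"]["runsOrError"]["results"]

# Inspect run ids, statuses, and start timestamps to decide whether today's run already happened.
for run in runs:
    print(run["runId"], run["status"], run["startTime"])
```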