# ask-community
Marcin:
Hey Dagster team, I'm running into an issue with dependencies across 2 jobs that run on 2 different schedules. Job A runs hourly, and Job B, which has dependencies on assets in Job A, runs on a daily schedule. The dependencies are respected when doing manual runs with all downstream assets, but when something in a scheduled run of Job A fails, the downstream assets in the scheduled Job B still run. Is this expected?

The recommended approach seems to be to use asset sensors, as mentioned here: https://github.com/dagster-io/dagster/discussions/8484 Is there a way to combine sensors with a schedule? i.e., skipping assets in Job B if their upstream dependency in Job A failed to materialize. Or is there another approach I should consider? I've also looked into auto-materialization, which seems like a better approach to adopt later, but I'm looking for something I can do to "glue" the 2 schedules together better in the interim.
Dagster team member:
Hi Marcin. Auto-materialize could be a good fit. You could define freshness policies on your assets, so that the A assets would try to materialize once an hour and the B assets would try to materialize once a day.
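A minimal sketch of what that could look like, assuming placeholder asset names (a_asset, b_asset) standing in for your Job A and Job B assets. Auto-materialization also needs the dagster-daemon running:

```python
from dagster import (
    AutoMaterializePolicy,
    Definitions,
    FreshnessPolicy,
    asset,
)


@asset(
    # Should never be more than an hour out of date.
    freshness_policy=FreshnessPolicy(maximum_lag_minutes=60),
    auto_materialize_policy=AutoMaterializePolicy.lazy(),
)
def a_asset():
    # Stand-in for one of the hourly Job A assets.
    return 1


@asset(
    # Should never be more than a day out of date.
    freshness_policy=FreshnessPolicy(maximum_lag_minutes=24 * 60),
    auto_materialize_policy=AutoMaterializePolicy.lazy(),
)
def b_asset(a_asset):
    # Stand-in for a daily Job B asset that depends on A; the lazy
    # policy materializes it just in time to meet its freshness policy,
    # using whatever upstream materializations actually succeeded.
    return a_asset + 1


defs = Definitions(assets=[a_asset, b_asset])
```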
There isn't a good way of combining the utilities of a schedule and a sensor. Brainstorming some options:
• You could continue to have Job A run on an hourly schedule, and use an asset sensor for B, adding a run key for the current day so that no more than one run of B is requested per day (see the sketch after this list).
• You could continue to have an hourly schedule for A and a daily schedule for B. B's schedule could be a custom schedule that queries the event logs to check whether A has run successfully for the current day.
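Here's a rough sketch of the first option. The names (upstream_a_asset, downstream_b_asset, job_b) are placeholders for your own definitions:

```python
from datetime import datetime, timezone

from dagster import (
    AssetKey,
    AssetSelection,
    Definitions,
    RunRequest,
    asset,
    asset_sensor,
    define_asset_job,
)


@asset
def upstream_a_asset():
    # Placeholder for the last asset materialized by the hourly Job A.
    return 1


@asset
def downstream_b_asset(upstream_a_asset):
    # Placeholder for an asset in the daily Job B.
    return upstream_a_asset + 1


job_b = define_asset_job("job_b", selection=AssetSelection.assets(downstream_b_asset))


@asset_sensor(asset_key=AssetKey("upstream_a_asset"), job=job_b)
def job_b_daily_sensor(context, asset_event):
    # The sensor only fires on successful materializations of the upstream
    # asset, and keying the run request by the current date means duplicate
    # requests are skipped, so Job B runs at most once per day even though
    # Job A materializes the asset every hour.
    today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    yield RunRequest(run_key=f"job_b-{today}")


defs = Definitions(
    assets=[upstream_a_asset, downstream_b_asset],
    jobs=[job_b],
    sensors=[job_b_daily_sensor],
)
```

One trade-off to note: with this approach B kicks off after the first successful A run of the day rather than at a fixed daily time, which is why the custom-schedule option in the second bullet might be preferable if the exact run time matters.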