# ask-community
I've checked the docs and tutorials and I'm just not getting how these are supposed to work together
Sensors are basically just a schedule that checks some external state prior to launching a job. If you want to check that the source files exist before kicking off the job (and potentially skip the job if a file doesn't exist), use a sensor. If you just want your job to run at some specific time each day (maybe the end of the day) and process any files found at that time, you can use a schedule. If you want to process the source files ASAP after they're dropped, you could have a sensor that evaluates every couple of minutes during the timeframes you described. You could also do the same thing with a schedule; the difference is just that with the sensor you have an opportunity to opt out of launching an actual Dagster job if the files aren't present yet or have already been processed (sensor evaluations are executed in the user code process, prior to launching a new process for the job)
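To make the "opt out" idea concrete, here's a toy sketch of what a sensor evaluation does, in plain Python rather than the real Dagster API (the watch directory, the `*.csv` pattern, and the function name are all made up for illustration):

```python
from pathlib import Path

def evaluate_file_sensor(watch_dir: str):
    """Toy stand-in for one sensor evaluation: look for source files
    and decide whether any runs should be launched.

    Returns a list of run-request-like dicts; an empty list means
    "skip this tick, nothing to do yet" (analogous to a Dagster
    sensor yielding a SkipReason instead of a RunRequest)."""
    requests = []
    for path in sorted(Path(watch_dir).glob("*.csv")):
        # In a real sensor you'd yield a RunRequest here -- the job
        # only gets launched when a matching file actually exists.
        requests.append({"run_key": path.name, "file": str(path)})
    return requests
```

The point is just that this check runs cheaply on every tick, and a job process is only spun up when there's actually a file to process.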
gotcha. that's helpful. sounds like i'd use a sensor running on a cron during the given timeframe. kick off the job if the file is present and just try again later (per cron expression) if it isn't. would this setup cause me to constantly start the same job once the file does land? is this where a freshness policy comes in?
well, i guess the sensor could just check if it was processed and not issue a run request
thank you
you won't necessarily have to worry about checking that in the sensor yourself. if you use a run_key, dagster will handle that for you, and if there's an ordering to the files then you can use a cursor to start where the last sensor run left off - https://docs.dagster.io/concepts/partitions-schedules-sensors/sensors#idempotence-and-cursors.
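The run_key behavior described above can be modeled in a few lines of plain Python (this is a toy model of the dedup idea, not Dagster's actual implementation; the function and variable names are invented):

```python
def dedupe_run_requests(requests, seen_run_keys):
    """Toy model of run_key idempotence: a run request whose run_key
    has already been launched is silently dropped, so the same file
    never kicks off a second run on later sensor ticks."""
    launched = []
    for req in requests:
        if req["run_key"] in seen_run_keys:
            continue  # this file was already processed on an earlier tick
        seen_run_keys.add(req["run_key"])
        launched.append(req)
    return launched
```

So the sensor can naively request a run for every file it sees on every tick, and only the requests with a previously unseen run_key actually become runs.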
thank you, this is great