Daniel Mosesson

05/17/2023, 11:20 AM
How do people prevent sensors from spiraling out of control and causing too many runs of the same job to be running at the same time? Is there a good way to set tags to prevent that?

Aaron T

05/17/2023, 1:07 PM
Haven't used this with a sensor, but we had a schedule get out of control. From the Dagster instance:
context.instance.get_run_records(RunsFilter(job_name="...", statuses=[...]))
Then, if the number of run records is 0, we return a run request from the schedule. This might work a little differently for a sensor, but I'm sure you could do something similar.
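Aaron's gating approach carries over to a sensor almost directly: count the queued/started runs of the job and skip the tick when the cap is reached. A sketch of that idea (untested; my_job and the cap value are placeholders, and it assumes a deployed Dagster instance):

```python
from dagster import (
    DagsterRunStatus,
    RunRequest,
    RunsFilter,
    SkipReason,
    sensor,
)

MAX_IN_FLIGHT = 1  # placeholder cap; raise as needed

@sensor(job=my_job)  # my_job is a placeholder job definition
def throttled_sensor(context):
    # Count runs of this job that are queued or currently executing.
    in_flight = context.instance.get_run_records(
        RunsFilter(
            job_name="my_job",
            statuses=[DagsterRunStatus.QUEUED, DagsterRunStatus.STARTED],
        )
    )
    if len(in_flight) >= MAX_IN_FLIGHT:
        # Skipping leaves the work for a later tick instead of piling up runs.
        return SkipReason(f"{len(in_flight)} run(s) already in flight")
    return RunRequest(run_key=None)
```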

jamie

05/17/2023, 1:58 PM
hey @Daniel Mosesson you should be able to use the run_key parameter on RunRequest to ensure that two RunRequests with the same key only result in one run. https://docs.dagster.io/concepts/partitions-schedules-sensors/sensors#idempotence-and-cursors
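For reference, run_key deduplication in a sensor looks roughly like this (a sketch; my_job, list_new_files, and the op name are illustrative, not Daniel's actual code):

```python
from dagster import RunRequest, sensor

@sensor(job=my_job)  # my_job is a placeholder job definition
def file_sensor(context):
    for filename in list_new_files():  # hypothetical helper
        # Dagster launches at most one run per unique run_key, even if
        # the sensor yields the same key again on a later tick.
        yield RunRequest(
            run_key=filename,
            run_config={
                "ops": {"process_file": {"config": {"path": filename}}}
            },
        )
```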

Daniel Mosesson

05/17/2023, 2:25 PM
But the issue is that I really do want 10K runs; I just don't want more than 100 of them to be running at any given time

jamie

05/17/2023, 3:20 PM
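The suggestion at this point in the thread, judging from the dagster.yaml follow-up below, appears to be Dagster's queued run coordinator, which enforces an instance-wide cap (and optional per-tag caps) on concurrent runs while letting excess runs wait in the queue. A sketch of that config, with illustrative limit values:

```yaml
# dagster.yaml
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    max_concurrent_runs: 100
    tag_concurrency_limits:
      - key: "database"      # illustrative tag
        value: "redshift"
        limit: 10
```

With this in place, all 10K runs can be requested up front and Dagster only dequeues up to 100 at a time.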

Daniel Mosesson

05/17/2023, 3:24 PM
for dagster in k8s open source, will this require a dagit restart?

jamie

05/17/2023, 4:18 PM
yeah since you’d be making changes to dagster.yaml i believe you’ll need to restart dagster + the UI to see changes

Daniel Mosesson

05/17/2023, 10:11 PM
Just for an update, I ended up going the "query the instance" route because of the restart issue
No changes were needed to that code