# ask-community
d
How do people prevent sensors from spiraling out of control and causing too many runs of the same job to be running at the same time? Is there a good way to set tags to prevent that?
a
I haven't used this with a sensor; however, we had a schedule get out of control. From the Dagster instance:
context.instance.get_run_records(RunsFilter(job_name="...", statuses=[...]))
Then, if len(run_records) == 0, we return a RunRequest from the schedule. This might work a little differently for a sensor, but I'm sure you could do something similar.
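For concreteness, a minimal sketch of that schedule-side check; the job name, cron string, and status filter are placeholders rather than anything from this thread:

```python
from dagster import DagsterRunStatus, RunRequest, RunsFilter, SkipReason, schedule

@schedule(job_name="my_job", cron_schedule="*/15 * * * *")  # hypothetical job name and cron
def my_job_schedule(context):
    # Ask the Dagster instance for runs of this job that are still queued or executing.
    in_flight = context.instance.get_run_records(
        RunsFilter(
            job_name="my_job",
            statuses=[DagsterRunStatus.QUEUED, DagsterRunStatus.STARTED],
        )
    )
    # Only kick off a new run when nothing is currently in flight.
    if len(in_flight) == 0:
        return RunRequest()
    return SkipReason(f"{len(in_flight)} run(s) of my_job still in flight")
```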
j
hey @Daniel Mosesson you should be able to use the run_key parameter on RunRequest to ensure that two RunRequests with the same key only result in one run. https://docs.dagster.io/concepts/partitions-schedules-sensors/sensors#idempotence-and-cursors
d
But the issue is that I really do want 10K runs; I just don't want more than 100 of them running at any given time.
j
d
For Dagster open source on k8s, will this require a dagit restart?
j
yeah, since you'd be making changes to dagster.yaml, I believe you'll need to restart Dagster and the UI to see the changes
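The dagster.yaml change itself isn't shown in the thread; assuming it's the queued run coordinator's concurrency limits (which is where per-instance and per-tag run caps are configured), a sketch of that config might look like:

```yaml
# dagster.yaml -- assuming the QueuedRunCoordinator is the setting being discussed
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    # Global cap on runs executing at once across the deployment.
    max_concurrent_runs: 100
    # Optional finer-grained caps keyed on run tags.
    tag_concurrency_limits:
      - key: "job-family"      # hypothetical tag key
        value: "bulk-ingest"   # hypothetical tag value
        limit: 100
```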
d
Just as an update, I ended up going the "query the instance" route, mostly because of the restart issue.
No changes were needed to that code
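For completeness, a sketch of what that query-the-instance route might look like in a sensor capped at 100 in-flight runs; find_pending_work and the job name are hypothetical stand-ins:

```python
from dagster import DagsterRunStatus, RunRequest, RunsFilter, SkipReason, sensor

MAX_IN_FLIGHT = 100  # the cap mentioned earlier in the thread

def find_pending_work(limit):
    # Placeholder for whatever enumerates the remaining work items.
    return []

@sensor(job_name="my_job")  # hypothetical job name
def capped_sensor(context):
    # Count this job's runs that are queued or currently executing.
    in_flight = len(
        context.instance.get_run_records(
            RunsFilter(
                job_name="my_job",
                statuses=[DagsterRunStatus.QUEUED, DagsterRunStatus.STARTED],
            )
        )
    )
    if in_flight >= MAX_IN_FLIGHT:
        yield SkipReason(f"{in_flight} runs already in flight")
        return
    # Only request enough new runs to stay under the cap.
    for item in find_pending_work(limit=MAX_IN_FLIGHT - in_flight):
        yield RunRequest(run_key=str(item))
```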