# announcements
Hi all! Got another sensor-related question for you. I'm in a situation where I need to send more than 10 run requests to a pipeline (I'm on version 0.10.9), and my question is a two-parter: 1. Is there any way to increase this 10 run request limit? 2. How does the QueuedRunCoordinatorDaemon detect that the run request limit has been reached? Ideally, I would not want the sensor to keep piling on tasks for the pipeline and DoS the daemon to death. I'd like to incorporate functionality in my sensor so that if the run limit for a pipeline has been reached, the sensor will not populate the queue.
Hi Alex - you're allowed to send more than 10 run requests at once actually. Are the docs implying somewhere that there's a limit like that?
Thanks for the reply! I actually mistyped. It's not the SensorDaemon that's detecting the runs, it's actually the QueuedRunCoordinatorDaemon. I'll make sure that gets reflected in the original message.
With regards to the limit, I'm getting a log statement from said daemon stating:
10 runs are currently in progress. Maximum is 10, won't launch more.
Ah, got it - for the first part, you can increase the number of runs that can run at once using the config options described here:
(And you can remove the limit altogether by removing QueuedRunCoordinator from your dagster.yaml)
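As an illustration, the run coordinator block in dagster.yaml looks roughly like this on 0.10.x (the max_concurrent_runs value of 25 here is just an example, not a recommendation):

```yaml
# dagster.yaml — sketch of the QueuedRunCoordinator configuration.
# Raising max_concurrent_runs lifts the "Maximum is 10" ceiling;
# deleting this whole block removes queueing/limits entirely.
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    max_concurrent_runs: 25
```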
and just to double check we're talking about the same thing, the limit there is the number of runs that are allowed to be executing at once, not a limit that lasts forever
i.e. the limit is intended for the use case you describe in the 2nd part of your question
Thanks so much for your replies, Daniel! I probably won't be disabling the coordinator right out of the gate, because it would probably overwhelm the machine. We could go as high as 2800 concurrent runs in that case. As long as that's not a problem, it's alright with me. Thanks for the documentation link!
One last question Daniel: is there an API endpoint that I can call with
dagster api
that can check how many runs for a pipeline are in the queue?
you could do that using the graphql api:
query QueueCountQuery {
  pipelineRunsOrError(filter: {pipelineName: "pipeline_name_here", statuses: [QUEUED]}) {
    ... on PipelineRuns {
      results {
        runId
      }
    }
  }
}
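Once you have the JSON response back from Dagit's GraphQL endpoint, the queue depth is just the length of the results list. A minimal sketch of that counting step (count_queued_runs is a hypothetical helper; the response shape assumes the QueueCountQuery above and may differ across Dagster versions):

```python
# Hypothetical helper: count QUEUED runs from a Dagster GraphQL response.
# The nesting below mirrors the QueueCountQuery sketched above; field
# names are assumptions about the 0.10.x API, so verify against your Dagit.

def count_queued_runs(response_json):
    """Return the number of queued runs in a pipelineRunsOrError response."""
    runs = response_json["data"]["pipelineRunsOrError"]
    # Error variants of pipelineRunsOrError carry no "results" key,
    # so treat those as zero queued runs rather than raising.
    return len(runs.get("results", []))

# Illustrative response payload (not captured from a real server):
sample = {
    "data": {
        "pipelineRunsOrError": {
            "results": [{"runId": "abc"}, {"runId": "def"}]
        }
    }
}

print(count_queued_runs(sample))  # → 2
```

A sensor could call this before yielding run requests and simply return early when the count is at the configured limit, which is the "don't pile on the queue" behavior asked about above.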