# ask-community
Hey there, we have a Dagster Cloud instance and we're running into issues with jobs getting killed because too many jobs launch at the same time. I've taken a look at this thread: https://dagster.slack.com/archives/C014N0PK37E/p1680531104013389, but I was wondering whether there's any other way to have Dagster automatically throttle how many jobs it launches at once. We already have concurrent-run limits set, and while the cluster scales fine to handle, say, 40 jobs running at the same time, it needs time to ramp up. So this is a two-part question:
1. Is there a way to limit the number of jobs that can be *launched* at a time (as opposed to concurrently running)?
2. Is there a recommended retry pattern for this situation?
Thanks! Happy Friday, by the way!
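To make question 2 concrete, here's a rough sketch of the kind of retry pattern I have in mind (the values are illustrative and the op/job names are placeholders, not our real code): op-level `RetryPolicy` with exponential backoff and jitter, plus run-level retries via the `dagster/max_retries` tag. Is something like this the recommended approach, or is there a better way to handle it at the run-queue level?

```python
from dagster import Backoff, Jitter, RetryPolicy, job, op


# Placeholder op; the retry policy is the point.
@op(
    retry_policy=RetryPolicy(
        max_retries=3,
        delay=60,  # seconds before the first retry
        backoff=Backoff.EXPONENTIAL,  # 60s, 120s, 240s, ...
        jitter=Jitter.FULL,  # randomize delays so retries don't all land at once
    )
)
def do_work():
    ...


# Run-level retries (assuming run retries are enabled on the deployment),
# so a run that gets killed outright is re-queued rather than lost.
@job(tags={"dagster/max_retries": 3})
def my_job():
    do_work()
```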