# ask-community
I'm reading through the Dynamic Graphs docs – seems like a super neat approach to mapping over work 👍. One question: does Dagster support limiting the amount of in-flight work? E.g. suppose I have 1MM documents but only want to process 100 at a time (perhaps due to infrastructure constraints) – can I do that? For context, I'm planning on running Hybrid Dagster Cloud, because some of our jobs will require GPU access.
Indirect ways of limiting concurrency might work – e.g. setting a maximum node group size that a particular job will run on.
Perhaps I should be thinking about achieving this with Dask? I just saw the caveat about not capturing logs – I guess I could shuttle the logs out to our usual aggregator at the k8s level…
Would love an indication of whether I am getting off the beaten track when it comes to running Dagster on Dask. I have had deliciously painful experiences trying to get other workflow tools running reliably on it 🙃
I don't think you need Dask to do that, I've always just limited op concurrency in dynamic graphs by configuring the default executor
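For anyone landing on this thread later: with the default multiprocess executor, the fan-out from a dynamic graph can be throttled with `max_concurrent` in the run config. A minimal sketch (the value 100 matches the example above; the exact config schema may vary between Dagster versions):

```yaml
execution:
  config:
    multiprocess:
      max_concurrent: 100  # at most 100 mapped ops in flight at once
```

The same limit can also be baked into the job definition, e.g. by passing `multiprocess_executor.configured({"max_concurrent": 100})` as the job's `executor_def`.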
๐Ÿ‘ 1
Ah, sorry – I was looking at Dask because of the need to run on GPUs
TBH I've had enough Dask headaches that I might just run that workload on separate pods and have my Dagster job hit their API