# ask-community
I have a workflow that maps an op to multiple processors and then collects the results. These ops also kick off sub-processes to leverage the remaining cores on the machine. To avoid deadlocks, I tell each concurrent Dagster op how many sub-processes it can spawn (based on how many main Dagster ops I originally kicked off). However, some ops finish sooner than others, so I would like the longer-running ops to re-use those freed cores as they become available. Is there a way to communicate between concurrent Dagster ops without explicitly creating a shared memory manager/namespace? (Would that even work?) Does Dagster have a native capability to communicate between concurrent processes?
Hi DK. Dagster doesn't have a native capability to communicate between concurrent processes through the `multiprocess_executor`. You will have to use something like the file system or Python's `multiprocessing` library to communicate between processes.
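As a rough illustration of the file-system approach, here is a minimal sketch of a file-backed "core pool" that independent op processes could share. The helper names (`init_pool`, `try_acquire`, `release`) and the counter-file layout are hypothetical, not part of Dagster; it uses POSIX `fcntl.flock`, so it assumes a POSIX file system (and won't work on Windows or some network mounts):

```python
import fcntl

# Hypothetical file-backed pool of free cores, shared by independent
# op processes. Each op calls try_acquire() before spawning
# sub-processes and release() when its sub-processes finish, so cores
# freed by short-running ops become available to long-running ones.

def init_pool(path: str, cores: int) -> None:
    """Create the shared counter file holding the initial free-core count."""
    with open(path, "w") as f:
        f.write(str(cores))

def try_acquire(path: str, n: int) -> bool:
    """Atomically take n cores from the pool; return False if too few are free."""
    with open(path, "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # exclusive lock, released when f closes
        free = int(f.read())
        if free < n:
            return False
        f.seek(0)
        f.truncate()
        f.write(str(free - n))
        return True

def release(path: str, n: int) -> None:
    """Return n cores to the pool."""
    with open(path, "r+") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        free = int(f.read())
        f.seek(0)
        f.truncate()
        f.write(str(free + n))
```

One process (e.g. the op that launches the run) would call `init_pool` once, and every op process would then poll `try_acquire` before spawning workers. This is only coordination over a shared file, not inter-op messaging, but it avoids a shared memory manager entirely.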