Presently we're using the in-process Dagster daemon to run at most a single job at a time. Run requests are triggered by sensors. If a job is already running, Dagster queues all incoming run requests until it finishes.
Our job definition is essentially: SSH into a remote server, run some commands there, and return.
I'd like to know how to think about expanding the above setup into a "task queue" where Dagster runs as many jobs concurrently as there are servers in the pool, but at most one job per server at a time.
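To make the semantics I'm after concrete, here's a plain-Python sketch (not Dagster code — server names and helpers are invented for illustration) of the scheduling behavior: each server works through jobs one at a time, while the pool as a whole runs up to `len(SERVERS)` jobs concurrently.

```python
import queue
import threading

SERVERS = ["server-a", "server-b", "server-c"]  # hypothetical pool


def run_job_on(server, job_id, results):
    # Stand-in for "SSH into `server` and run the task there".
    results.append((server, job_id))


def worker(server, jobs, results):
    # One worker thread per server, so each server runs at most one
    # job at a time; the pool runs up to len(SERVERS) jobs at once.
    while True:
        job_id = jobs.get()
        if job_id is None:  # sentinel: shut this worker down
            jobs.task_done()
            return
        run_job_on(server, job_id, results)
        jobs.task_done()


def run_pool(job_ids):
    jobs = queue.Queue()
    results = []
    threads = [
        threading.Thread(target=worker, args=(s, jobs, results))
        for s in SERVERS
    ]
    for t in threads:
        t.start()
    for job_id in job_ids:
        jobs.put(job_id)
    for _ in threads:
        jobs.put(None)  # one shutdown sentinel per worker
    for t in threads:
        t.join()
    return results


print(len(run_pool(range(10))))  # → 10: every queued job ran exactly once
```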
Are there ways Dagster's framework can support this out of the box? Or do I need to implement something that polls Dagster's queued/running runs and somehow tells Dagster to "release"/start a given job with a config specifying which server to run it on?
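From skimming the docs, the `QueuedRunCoordinator`'s `tag_concurrency_limits` looks like it might be relevant — if each run were tagged with its target server, something like this in `dagster.yaml` might cap concurrency at one run per unique tag value (untested guess on my part; the `"server"` tag key is my invention, and I'd still need a way to assign the tag to each run):

```yaml
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    tag_concurrency_limits:
      - key: "server"
        limit: 1
        applyLimitPerUniqueValue: true
```

Is that the intended mechanism, or is a custom polling component the usual approach here?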