# dagster-feedback
a
Is there a recommended way to speed up the submission of large backfills to the `QueuedRunCoordinator`? Here's some context:
• Self-hosting Dagster on 1.3.11, using the `K8sRunLauncher` + multiprocess executor with the `QueuedRunCoordinator`.
• We're submitting multiple large backfills (in the 1000s of runs) and have noticed it takes quite a while for jobs to be enqueued. It appears that backfills share the same queue, and if you submit sequential backfills, all jobs in earlier backfills must be enqueued before later backfills can be enqueued. This is somewhat problematic if we want to run different backfills in parallel.
• Here's our config for `QueuedRunCoordinator`:
```yaml
class: QueuedRunCoordinator
  config:
    dequeue_interval_seconds: 5
    dequeue_num_workers: 4
    dequeue_use_threads: true
    max_concurrent_runs: 100
```
We aren't really ever hitting `max_concurrent_runs`; the issue is more about enqueue speed. Should we look into implementing our own custom `RunCoordinator`?
Also fixing broken hyperlink in these docs 🎅 https://github.com/dagster-io/dagster/pull/15006
d
Hey Alex, this is an old post, but since it came up briefly in the doc you sent us: what I think we would need to resolve this is to enqueue runs in parallel in the backfill daemon (akin to the approach we used to speed up sensor ticks that submit many runs in parallel: https://github.com/dagster-io/dagster/pull/8642)
Very reasonable feature request, if you wouldn't mind filing an issue.
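For reference, the approach in that PR boils down to fanning run submission out over a thread pool instead of submitting one run at a time. A minimal sketch of the idea, where `submit_run` is a hypothetical stand-in for whatever call actually submits a single run to the coordinator (not Dagster's real internal API):

```python
from concurrent.futures import ThreadPoolExecutor


def submit_run(run_id: str) -> str:
    # Hypothetical stand-in: in the backfill daemon this would be the call
    # that enqueues one run with the QueuedRunCoordinator.
    return run_id


def submit_backfill_runs(run_ids: list[str], max_workers: int = 8) -> list[str]:
    # Submit runs in parallel rather than sequentially, akin to the
    # thread-pool approach used to speed up sensor ticks in dagster PR #8642.
    # Results come back in the original submission order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(submit_run, run_ids))
```

With enqueue latency dominated by per-run overhead (DB writes, gRPC round trips), parallelizing like this lets multiple backfills make progress concurrently instead of serializing behind the first one.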
a
🎅 will do!