# ask-community
Hi all, I have a question on limiting concurrency in Dagster 1.3.13. We (@Dane Linssen) used to control concurrency in our project with a general configuration in our repo.py as follows:
from dagster import Definitions, multiprocess_executor

defs = Definitions(
    assets=[asset1],
    jobs=[job1, job2],
    schedules=[schedule1],
    executor=multiprocess_executor.configured(
        {
            "max_concurrent": 3,
        },
    ),
)
However, when I backfill a partitioned asset, I see lots of runs starting at the same time - many more than the concurrency limit allows. Can anyone help explain this? Looking at the documentation, it seems concurrency is now commonly limited at the asset or tag level (see the docs, and the sketch below). Does the above way of limiting concurrency not work for partition backfills (anymore)?
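For reference, the tag-based approach from the docs looks roughly like this - a sketch of a dagster.yaml snippet, where dagster/backfill is the tag Dagster attaches to backfill runs and the limit of 3 is just an illustrative value:

run_queue:
  tag_concurrency_limits:
    - key: "dagster/backfill"
      limit: 3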
o
hi @Tim Weelinck! the above configuration limits the concurrency of ops within a single run (and so has no effect on cross-run concurrency). You'll want to go with the method of limiting concurrency described in this section, in which you edit your dagster.yaml file to set the max_concurrent_runs key under run_queue.
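That would look something like this - a minimal sketch, with 3 mirroring the limit from your executor config:

run_queue:
  max_concurrent_runs: 3

With this in place, the run coordinator queues any runs beyond the limit (including runs launched by a backfill), rather than starting them all at once.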