# ask-community
j
i also don’t want to set `max_concurrent_runs` to 1, because I want to parallelize execution by default and only limit concurrency in this specific part of the pipeline
d
I think you can do this using the `tag_concurrency_limits` config value in the `run_coordinator` in `dagster.yaml`. https://docs.dagster.io/deployment/run-coordinator
I haven't tried this personally, but I believe you add the tag to the ops that you want to limit the concurrency on in the ops definition
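An untested sketch of what that `dagster.yaml` section might look like, based on the run-coordinator docs linked above (the `database`/`redshift` tag and the limit of 1 are placeholder values):

```yaml
# dagster.yaml — enable the queued run coordinator with per-tag limits
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    tag_concurrency_limits:
      # At most 1 in-progress run carrying this tag; untagged runs are unaffected
      - key: "database"
        value: "redshift"
        limit: 1
```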
d
Right now we offer run-level concurrency and op-level concurrency within a single run - we don't yet offer global op-level concurrency without setting up something like Celery, but we're actively discussing it here: https://github.com/dagster-io/dagster/issues/12470
j
@Danny Steffy I think this is actually exactly what I’m looking for! I’ve tested it with custom tags inserted at runtime and it’s behaving as I’d like. Now i’m having trouble ensuring that an asset job has specific tags…
d
Jack I just want to make sure you’re aware though that what Danny posted is still purely concurrency at the run level - you mentioned “only limit concurrency in this specific part of the pipeline” - we do not currently offer a feature that does that across multiple runs
j
thanks for that daniel - as long as I can tag specific jobs as “limited to 1 run at a time” and tag other jobs to be “go buck wild” I think that’s good enough for my purposes. Do you have any docs/tips on how to ensure that a specific asset job is given a run tag by default?
d
You should be able to supply a dictionary of tag key value pairs when creating the job
Via the “tags” argument
j
hm, ok, that’s what i thought, but it’s not working. let me debug
yeah - i’ve passed a `tags` parameter to the `define_asset_job` function, but I don’t see any such tags in the UI anywhere. when I kick off a backfill or a run, I don’t see the custom tag
i’ve looked around for anything else I could be missing, and I have no idea what it could be
in my `repository` i’ve defined both the asset and the job that sits on top of the asset from the `define_asset_job` call
part of my suspicion here is that the spec for `define_asset_job`’s `tags` parameter doesn’t actually take a dictionary straight up, but rather an object called a “Mapping”
d
`Mapping` is just a read-only dict. If you could pass along code that reproduces the problem we can take a look
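To illustrate that point (a plain stdlib sketch, nothing Dagster-specific): a `dict` already satisfies `Mapping`, so no conversion is needed before passing it as a tags argument:

```python
from collections.abc import Mapping

# A plain dict is an instance of Mapping, so any parameter annotated
# Mapping[str, str] happily accepts a dict literal.
tags = {"database": "redshift"}
print(isinstance(tags, Mapping))  # True
```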
But i'm pretty confident that the tags parameter to define_asset_job will affect runs launched using that job
j
ok lemme try to make a thing to reproduce, thanks for your help!
wanted to make sure i wasn’t missing anything