Danny
07/14/2020, 5:10 PM
1. max_concurrent_runs, which afaik has been disabled and is not currently working.
2. Using an external semaphore mechanism and requiring the solid to acquire it at start time.
3. Implementing a custom poll+wait function against DagsterInstance.get_runs() (for local use) or the pipelineRunsOrError GraphQL query, so that a pipeline run isn't launched until enough slots are available. The downside is that this only works when launching via code and can't catch pipeline launches from the web UI, so it's a poor solution.
4. Do some config magic in Celery, which we use as the executor. Not sure what this config magic is, though; a pointer to some docs/examples would be awesome.
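As a side note on option 3, the poll+wait gate itself can be sketched without committing to a specific Dagster API. In the sketch below, count_active_runs is a stand-in callable; in real use it would wrap DagsterInstance.get_runs() (locally) or the pipelineRunsOrError GraphQL query and count in-progress runs. That wiring is an assumption and is not shown here.

```python
import time

def wait_for_slot(count_active_runs, max_concurrent, poll_interval=5.0, timeout=600.0):
    """Block until fewer than `max_concurrent` runs are active, or time out.

    `count_active_runs` is a zero-arg callable returning the number of
    in-progress runs; in real use it would wrap DagsterInstance.get_runs()
    or a GraphQL query (hypothetical wiring, not shown).
    Returns True if a slot opened up, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while True:
        if count_active_runs() < max_concurrent:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll_interval)

# Demo with a fake counter that frees a slot after two polls:
calls = {"n": 0}
def fake_counter():
    calls["n"] += 1
    return 3 if calls["n"] < 3 else 1  # pretend runs finish over time

print(wait_for_slot(fake_counter, max_concurrent=2, poll_interval=0.01, timeout=1.0))  # True
```

As noted above, a gate like this only guards launches that go through your own code path; the web UI bypasses it entirely.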
Are there any other strategies available at the moment for this?

max
07/14/2020, 5:12 PM

alex
07/14/2020, 5:25 PM
Re "config magic in celery": dagster-celery worker start takes a -q option to specify a queue name. You can add a tag of "dagster-celery/queue": "queue_name" to a solid to specify which queue the work gets submitted to.
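As a concrete (hypothetical) illustration of that tag: the key "dagster-celery/queue" is the documented tag name, while the queue name "limited", the solid, and the worker command shown in comments are made up for this sketch.

```python
# The tag key "dagster-celery/queue" is documented by dagster-celery;
# the queue name "limited" here is just an example.
QUEUE_TAG = {"dagster-celery/queue": "limited"}

# In a real pipeline you would attach it to a solid, e.g.:
#
#   from dagster import solid
#
#   @solid(tags=QUEUE_TAG)
#   def heavy_solid(context):
#       ...
#
# and start a worker that only consumes that queue:
#
#   dagster-celery worker start -q limited

print(QUEUE_TAG["dagster-celery/queue"])  # limited
```

Capping concurrency for the tagged solids then falls out of how many workers (and how much per-worker Celery concurrency) you give that queue.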
docs definitely need to be better here, but some pieces are mentioned at https://docs.dagster.io/_apidocs/libraries/dagster_celery and there are examples/tests at https://github.com/dagster-io/dagster/blob/master/python_modules/libraries/dagster-celery/dagster_celery_tests/test_queues.py

Danny
07/14/2020, 6:24 PM
max_concurrent_runs?

alex
07/14/2020, 6:36 PM
max_concurrent_runs, likely scoped to the default run launcher implementation.

Danny
07/14/2020, 6:44 PM

alex
07/14/2020, 6:57 PM