# deployment-kubernetes
a
I'm trying to tweak the behaviour of my dagster OSS deployment - I opted for a k8s/celery combination as I want to be able to control resource usage, especially when doing backfills on quite heavily partitioned data. I had thought this would be easy with the celery/k8s combo, as the requests to do work could just sit in the rabbitmq queue until it was their turn. However, I'm finding that no matter what I do, it just runs through all of the tasks in the queue and schedules them. Is there an easy way to configure a rate limit / tell celery not to schedule more than a certain number of jobs at a time?
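(For context on the setup described above: in the Celery/K8s deployment, the knobs that bound parallelism are the worker replica count and per-worker concurrency in the Helm values. The key names below are a sketch from memory of the Dagster Helm chart and should be verified against the chart's own values schema.)

```yaml
# values.yaml for the Dagster Helm chart -- a sketch, not verified
# against a specific chart version
runLauncher:
  type: CeleryK8sRunLauncher
  config:
    celeryK8sRunLauncher:
      workerQueues:
        - name: "dagster"
          replicaCount: 2        # number of Celery worker pods for this queue
      configSource:
        worker_concurrency: 1    # tasks each worker processes concurrently
```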
a
Celery/RabbitMQ is not needed anymore for concurrency -> https://docs.dagster.io/1.3.12/guides/limiting-concurrency-in-data-pipelines#limiting-opasset-concurrency-across-runs FYI, this is still an experimental feature
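(A sketch of what the linked approach looks like at the run level, via the queued run coordinator in `dagster.yaml`. The `dagster/backfill` tag and the exact config keys should be checked against the docs for your Dagster version; the limit values here are illustrative.)

```yaml
# dagster.yaml -- run-level concurrency via the queued run coordinator
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    max_concurrent_runs: 10      # cap on runs in flight across the deployment
    tag_concurrency_limits:
      - key: "dagster/backfill"  # runs launched by backfills carry this tag
        limit: 3                 # at most 3 backfill runs at once
```

The linked doc also covers the (experimental) op/asset-level concurrency limits that apply across runs, which is the finer-grained version of the same idea.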
a
OK - and is this the recommended way of handling concurrency now?
j
The particular link is to a slightly older version of the docs, but yes.
a
Can I recommend then that you indicate that on this page? https://docs.dagster.io/deployment/guides/kubernetes/deploying-with-helm-advanced The title and overview suggest that celery is the way to achieve this. Or am I missing something?