# deployment-kubernetes
r
Hello. We are running Dagster in k8s with Celery. For one code location's Definitions, I'd like to define a default configured `celery_k8s_job_executor`. A job that is included in the same Definitions does not specify its own executor or config, so I would expect it to use the Definitions' configured executor. However, we are seeing that we must specify `config={"execution": {"config": celery_k8s_config()}}` on this job, otherwise it will not pick up the correct job_namespace. Is this expected behavior? Is there a better way to provide a default configured executor, or a default execution config, for multiple jobs?
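To make the setup concrete, here is a minimal sketch of what's described above (the namespace value and the op/job names are illustrative, not from the thread):

```python
from dagster import Definitions, job, op
from dagster_celery_k8s import celery_k8s_job_executor


@op
def my_op():
    ...


@job  # no executor or config specified on the job itself
def my_job():
    my_op()


defs = Definitions(
    jobs=[my_job],
    # A configured executor set as the Definitions-level default;
    # the expectation is that my_job picks it up automatically.
    executor=celery_k8s_job_executor.configured(
        {"job_namespace": "my-namespace"}  # illustrative value
    ),
)
```

In practice, as described below, the configured `job_namespace` is not picked up unless the execution config is also repeated on the job.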
d
Hey Ricky - this is an unfortunate downside of the `celery_k8s_job_executor` right now: it doesn't play nicely with configured executors (it's not so much that it's the default executor on the Definitions that's the problem as that it's using `configured`). We'll likely make changes to this executor in future versions to bring it more in line with the `k8s_job_executor`, which doesn't have this restriction.
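Until that changes, one way to cut down the per-job repetition is to build the execution config once as a plain dict and pass it to each job's `config` argument instead of relying on `configured`. A workaround sketch, not an official recommendation (the namespace value and names are illustrative):

```python
from dagster import Definitions, job, op

# Shared run config, defined once and reused across jobs.
EXECUTION_CONFIG = {
    "execution": {"config": {"job_namespace": "my-namespace"}}  # illustrative value
}


@op
def my_op():
    ...


@job(config=EXECUTION_CONFIG)
def job_a():
    my_op()


@job(config=EXECUTION_CONFIG)
def job_b():
    my_op()


defs = Definitions(jobs=[job_a, job_b])
```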
r
Thanks for the explanation. We've been banging our heads against this for a few days now. I wonder if it's worth putting this in the public docs, or in the description of the `celery_k8s_job_executor`? Is there a GitHub issue we could follow?