# deployment-kubernetes

Christopher Lee

07/05/2022, 4:41 PM
Hello, I’m trying to evaluate different Dagster RunLaunchers. I’d like a single Dagster daemon to schedule jobs as dynamically provisioned K8s Jobs (multiprocess_executor) in one of SEVERAL k8s clusters. The reason for this odd requirement is that we operate under very strong data-residency restrictions: a general-purpose orchestrator like Dagster would need to conditionally schedule a job into one regional unit of compute or another (here abstracted as independent regional k8s clusters), based on parameters of the job, for instance.

Obviously we could solve this with the CeleryRunLauncher, since Celery supports the concept of task queues for its workers, but I’d rather opt for a more dynamic model of execution, if possible. Since the job workload between regions is not uniform, I’d rather not run independent dagit/daemon/user-code deployments in each regional k8s cluster. Has anyone pursued a multi-k8s execution strategy like this before who can provide some guidance? I looked into extending the existing `K8sRunLauncher` to support a map of instantiated Kubernetes clients instead of a single one, but I’m not sure I’m going down the right path…

prha

07/05/2022, 8:15 PM
Hi Christopher. I think a customized run launcher could be a really good solution here… Deciding where and how a run should execute is exactly what the run launcher abstraction was designed for.
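The dispatch decision prha describes can be sketched without any Dagster machinery. Below is a minimal, hypothetical version of the region-selection logic: a run tag (here assumed to be `dagster/region`, an illustrative name) is mapped to a kubeconfig context. A real implementation would subclass Dagster’s `K8sRunLauncher` and use the selected context to build the Kubernetes API client when launching the run; the cluster names and tag key here are assumptions, not Dagster conventions.

```python
# Hypothetical region -> kubeconfig context map. The context names are
# illustrative; they would correspond to the regional k8s clusters.
REGION_CLUSTERS = {
    "eu": "k8s-cluster-eu",
    "us": "k8s-cluster-us",
}

def select_cluster(run_tags: dict, default: str = "k8s-cluster-us") -> str:
    """Pick the kubeconfig context for a run based on its tags.

    Runs without a region tag fall back to a default cluster; an
    unknown region is an error rather than a silent misplacement,
    which matters under data-residency constraints.
    """
    region = run_tags.get("dagster/region")
    if region is None:
        return default
    if region not in REGION_CLUSTERS:
        raise ValueError(f"no cluster configured for region {region!r}")
    return REGION_CLUSTERS[region]

print(select_cluster({"dagster/region": "eu"}))  # k8s-cluster-eu
print(select_cluster({}))                        # k8s-cluster-us
```

Inside a custom run launcher, this selection would happen once per run, before constructing the per-cluster Kubernetes client; failing loudly on an unknown region keeps a run from ever landing in the wrong residency zone.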

Christopher Lee

07/08/2022, 2:10 PM
Thanks prha, I’ve implemented this as a CustomRunLauncher and it seems to work pretty well. Would you accept a PR to generalize the existing `K8sRunLauncher`?