Hello, I’m trying to evaluate different Dagster RunLaunchers. I’d like a single Dagster daemon to
schedule jobs that run as dynamically provisioned k8s Jobs (using the multiprocess_executor)
in one of SEVERAL k8s clusters. The reason for this odd requirement is that I’m operating under very strong data-residency restrictions, so I need a general-purpose orchestrator like Dagster to conditionally schedule each job into one
regional unit of compute or another (here abstracted as independent regional k8s clusters), based, for instance, on the job’s parameters. Obviously we could solve this with a CeleryRunLauncher, since Celery supports the concept of task queues for its workers, but I’d rather opt for a more dynamic model of execution if possible. And since the job workload
differs between regions, I’d rather not run independent dagit/daemon/user-code deployments in each regional k8s cluster.
Has anyone pursued a multi-k8s execution strategy like this before who can provide some guidance? I looked into extending the existing
K8sRunLauncher
to support a map of instantiated Kubernetes clients instead of a single one, but I’m not sure I’m going down the right path here…
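To make the idea concrete, here is a rough sketch of the routing piece such a launcher would need: a plain object that maps a run tag to a per-region cluster config. This is entirely hypothetical and not part of Dagster’s API; the tag key `dagster/region`, the `ClusterRouter` class, and the idea of keying clusters by kubeconfig context are all my own assumptions for illustration.

```python
from typing import Dict

# Hypothetical tag key a job would carry to request a region; not a real Dagster tag.
REGION_TAG = "dagster/region"


class ClusterRouter:
    """Sketch: maps a run's region tag to one of several configured k8s clusters.

    In a real K8sRunLauncher subclass, launch_run() could consult something like
    this to choose which instantiated Kubernetes client to create the Job with.
    """

    def __init__(self, clusters: Dict[str, str], default_region: str):
        # clusters: region name -> kubeconfig context (or any per-cluster handle)
        if default_region not in clusters:
            raise ValueError(f"default region {default_region!r} not configured")
        self._clusters = clusters
        self._default = default_region

    def pick_context(self, run_tags: Dict[str, str]) -> str:
        # Fall back to the default region when a run carries no region tag.
        region = run_tags.get(REGION_TAG, self._default)
        if region not in self._clusters:
            raise ValueError(f"no cluster configured for region {region!r}")
        return self._clusters[region]
```

The appeal of this shape is that the daemon stays singular and region choice becomes a per-run decision, but I don’t know whether the real launcher internals make it this clean.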