In our setup, we have one code location that we want to make configurable via a set of environment variables. We then deploy multiple instances of the same code location simultaneously, each with a different set of configurations.
We want this behavior to isolate our projects. They all have the same assets and jobs, but it would be too much information to display in Dagit if all of the projects' partitions lived in one place. A potential solution to this problem is a nested partition mechanism, but that is beyond the scope of my question.
Given that context, I'm having a hard time passing these environment variables from my code location to the pods in which the runs actually execute. It seems that the forwarding mechanism triggered when you define `env_vars` in the `run_launcher` config map expects the environment variables to be set on the daemon pod rather than on the code location pod. That makes sense, since the architecture ensures there is only one daemon, and I believe that's where the run pods are spawned from.
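For reference, this is roughly the config I mean (a sketch of the `dagster.yaml` instance config for the K8s run launcher; `MY_PROJECT_CONFIG` is a placeholder variable name, not something from my actual setup):

```yaml
run_launcher:
  module: dagster_k8s
  class: K8sRunLauncher
  config:
    # Names listed here are forwarded into the run pods, but they are
    # resolved from the environment of the launching process (the daemon),
    # not from the code location pod where I actually set them.
    env_vars:
      - "MY_PROJECT_CONFIG"  # placeholder
```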
Anyway, I'm wondering if there is a way to solve my issue or if I'm going to have to model our use case differently 🤔