# ask-community
m
Hi team! I want to thank you very much for the implementation of `includeConfigInLaunchedRuns`! But it doesn't work for me. I believe that's because my deployments live in a separate k8s namespace. Is that the reason?
j
Could you describe what doesn’t work for you? The config just doesn’t appear?
m
@johann I use configuration like:
```yaml
resources:
  my_resource:
    config:
      my_param:
        env: MY_ENV_VAR
```
where `my_param` is defined as a `StringSource`.
In the Helm chart (I use the separate dagster/dagster-user-deployments chart) I have configuration like:
```yaml
deployments:
  - name: "my_deployments"
    envConfigMaps:
      - name: my-env-configmap
    includeConfigInLaunchedRuns:
      enabled: true
```
which provides `MY_ENV_VAR` (it is actually available in the deployment's pod; I've checked from inside it). But when I run my job from the Dagit launchpad, I see an error like this:
```
dagster.config.errors.PostProcessingError: You have attempted to fetch the environment variable "MY_ENV_VAR" which is not set. In order for this execution to succeed it must be set in this environment.
```
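The error indicates the run pod itself is missing `MY_ENV_VAR`: the ConfigMap is attached to the user-deployment pod, but the launched run does not inherit it. For context, a minimal sketch of such a ConfigMap might look like the following (the namespace and value here are illustrative assumptions, not taken from the thread):

```yaml
# Hypothetical ConfigMap backing the envConfigMaps entry above.
# A run launched in a *different* namespace cannot reference it,
# which is consistent with the error shown.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-env-configmap
  namespace: my-team-namespace   # assumed namespace for illustration
data:
  MY_ENV_VAR: "some-value"       # placeholder value
```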
d
@Mykola Palamarchuk is your expectation that the runs will launch in the same namespace as the user deployment (so that they have access to the same configmap)? We should be able to support that.
m
@daniel, I think yes. That may solve the whole problem and add another level of isolation. I was trying to use the `jobNamespace` parameter of `k8sRunLauncher`, but that doesn't help if you have more than one deployment in different namespaces (which is the target for us).
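For reference, the `jobNamespace` setting being discussed is configured on the run launcher in the Dagster Helm chart's values; a minimal sketch might look like this (the namespace name is an assumption for illustration). The limitation described above is that this is a single global value, not per-deployment:

```yaml
# Sketch of the Helm values configuring K8sRunLauncher with a fixed
# job namespace. All launched runs go to this one namespace, which is
# why it can't cover multiple team namespaces at once.
runLauncher:
  type: K8sRunLauncher
  config:
    k8sRunLauncher:
      jobNamespace: my-team-namespace   # assumed namespace for illustration
```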
@daniel sorry for bothering you, but do I have to create a feature request for that or something?
d
Working on it right now, actually.
Just figuring out some tests, but it should be ready shortly.
https://github.com/dagster-io/dagster/pull/7597 is the PR for that - not positive it'll get into this week's release, but still very possible.
m
@daniel, let me clarify what I'm trying to achieve: at our organization we have teams that may want to create their own DAGs and run them on a common Dagster setup. Each team may have access to its own secure resources, so I'd like each team to have its own k8s namespace in the cluster, where it keeps its own secrets/configmaps and Dagster deployments and runs its jobs. Dagster itself should also run in its own k8s namespace. At the moment you can specify only one `jobNamespace` in the `k8sRunLauncher` configuration, so the multi-namespace deployment scenario is not possible. Probably there should be an additional option like `runJobsInDeploymentNamespace` to make it compatible.
d
I think the `runJobsInDeploymentNamespace` option you mentioned makes sense as the default behavior if you have `includeConfigInLaunchedRuns` set. The PR I listed basically does that (but doesn't provide any additional support for running each user code deployment in its own namespace - that's a reasonable request, just not something that PR handles).
OK, that just landed - so starting with the release tomorrow, setting `includeConfigInLaunchedRuns` will apply to the namespace and service account as well.