# ask-community
b
So I've set the `jobNamespace:` parameter in the config section for my K8sRunLauncher, but jobs are still being launched in the release namespace - is there something else I need to do?
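For reference, in the Dagster Helm chart this setting sits under the run launcher config. A minimal sketch of the relevant `values.yaml` section, assuming the standard chart layout (the `dagster-pipelines` namespace is a hypothetical example value):

```yaml
# values.yaml (sketch) - jobNamespace goes under the K8sRunLauncher config
runLauncher:
  type: K8sRunLauncher
  config:
    k8sRunLauncher:
      # Namespace where run/step jobs should be created.
      # "dagster-pipelines" is a hypothetical example value.
      jobNamespace: dagster-pipelines
```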
As best I can tell, this `jobNamespace` param doesn't seem to work. I've tested with both the multiprocess and k8s executors launched by the K8sRunLauncher, and jobs/steps are all still spawned in the release namespace.
c
Would you mind filing an issue about this? Possible it slipped through our testing infra
b
Yeah absolutely - thanks for checking for me
d
Hi Ben - I realize this is quite old, but my guess as to what's happening here is that the default namespace of the K8sRunLauncher is getting overridden by the namespace of your user code deployment, since by default we pass that through to each run launched from that user code deployment: https://docs.dagster.io/deployment/guides/kubernetes/deploying-with-helm#configure-your-user-deployment

You could test that by setting `includeConfigInLaunchedRuns` to false for your user code deployments (this all assumes you are using the Helm chart).
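A sketch of where that flag lives in the chart's `values.yaml`, assuming the standard `dagster-user-deployments` subchart layout (the deployment name, image, and module below are hypothetical examples):

```yaml
# values.yaml (sketch) - per-deployment flag on each user code deployment
dagster-user-deployments:
  deployments:
    - name: my-user-code                      # hypothetical deployment name
      image:
        repository: my-registry/my-user-code  # hypothetical image
        tag: latest
      dagsterApiGrpcArgs:
        - "-m"
        - "my_repo"                           # hypothetical module name
      port: 3030
      # Stop passing this deployment's config (namespace, env vars,
      # secrets, etc.) through to the runs it launches.
      includeConfigInLaunchedRuns:
        enabled: false
```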
b
All good, Daniel - still on my list but just haven't gotten around to debugging further. This makes sense, but if I set `includeConfigInLaunchedRuns` to false, will I not lose access to env vars/secrets etc. from the user code charts?
d
Yeah, that's correct (though those secrets would need to be available in the other namespace anyway if you wanted the run to happen there)
We could potentially make it more granular which fields on the user code deployment get passed along (instead of it always being all or nothing)
b
ok that's not too much of a problem - I can recreate them in the other namespace easily enough
Yeah, I mean that's probably the ideal situation: being able to control which fields get passed through and which don't
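Recreating a secret in the run namespace could look something like the following sketch (the secret name, namespace, and key/value are all hypothetical placeholders):

```yaml
# secret.yaml (sketch) - the same secret, declared in the run namespace
apiVersion: v1
kind: Secret
metadata:
  name: my-pipeline-secrets       # hypothetical secret name
  namespace: dagster-pipelines    # hypothetical run namespace from above
type: Opaque
stringData:
  DATABASE_URL: postgres://...    # hypothetical key/value
```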
d
What's the underlying motivation for having the runs happen in a different namespace?
b
I want to use ArgoCD to manage CD for my user code deployments but every new job/step pod that's spawned causes argo to think it's out of sync as those pods aren't explicitly declared in the manifests/charts
d
Makes sense!
b
(Possibly there's a way to exclude them from argo's consideration but I haven't managed to get it working yet)
d
I vaguely remember a label that you can add for that, let me see if I can find it
maybe this? https://argo-cd.readthedocs.io/en/stable/user-guide/compare-options/ Unfortunately, the only way to set annotations right now is to set a tag on each job that you want the annotations to apply to, so that's not a perfect solution either
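The option in that link is an annotation placed on the resources Argo CD should ignore when computing sync status. A sketch of what it might look like on a run Job, assuming the annotation key/value from the linked Argo CD docs (the Job itself, its name, and its image are hypothetical):

```yaml
# sketch - annotation from the Argo CD compare-options docs, applied so
# Argo doesn't flag this untracked resource as out of sync
apiVersion: batch/v1
kind: Job
metadata:
  name: dagster-run-example       # hypothetical job name
  annotations:
    argocd.argoproj.io/compare-options: IgnoreExtraneous
spec:
  template:
    spec:
      containers:
        - name: dagster
          image: my-registry/my-user-code:latest  # hypothetical image
      restartPolicy: Never
```

Per Daniel's note above, in Dagster this annotation would presumably have to be attached via a tag on each job you want it applied to, rather than set once globally.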
b
hm, ok makes sense - thanks for having a dig for me anyway. Will try a few options out