# ask-community

Suraj Narwade

08/17/2021, 2:55 PM
Hi all 👋, I am trying out the following approach for installing Dagster. I have two namespaces:
• dagster-infra
• dagster-user
I installed the Dagster daemon and Dagit in the `dagster-infra` namespace, and I installed the user deployment (with the job namespace configured) in the `dagster-user` namespace. Along with this, I populated the PostgreSQL credentials secret in both namespaces as required.

My example pipeline reads an environment variable that comes from a secret. For example, the k8s secret my pipeline needs is named `topsecret`, which is present in the `dagster-user` namespace and is also listed under `env_secrets` in the dagster-instance ConfigMap in the `dagster-user` namespace itself. With this setup my job pod should ideally get the secret injected, but it does not. It does work, though, when I update the dagster-instance ConfigMap in the `dagster-infra` namespace (which is consumed by the daemon) with the same `env_secrets`. After doing that, I've noticed these secrets get appended to the user deployments as well as the jobs.

Based on this, I have the following questions:
• Why does the daemon need to know about secrets?
• Why does the user deployment get the secrets injected?
• Is there any other way to tell the daemon that my pipeline needs the given secret?

daniel

08/17/2021, 3:07 PM
Hey Suraj - I'm not positive I follow every detail here (are you saying you manually updated the dagster-instance ConfigMap with kubectl?), but to get secrets into the job pod for launched runs you would set this config in your values.yaml:

```yaml
runLauncher:
  config:
    k8sRunLauncher:
      envSecrets:
        - name: your-secret-name-here
```
If you set that config, are you still not seeing your secret available in the pod for the launched run?
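For a raw-manifest deployment like Suraj's, the rough equivalent of the Helm values above would go in the instance's `dagster.yaml` (i.e. the contents of the dagster-instance ConfigMap). A minimal sketch, assuming the `K8sRunLauncher` schema of that era; the image and service-account names are placeholders:

```yaml
# dagster.yaml inside the dagster-instance ConfigMap (sketch, not verbatim)
run_launcher:
  module: dagster_k8s
  class: K8sRunLauncher
  config:
    service_account_name: dagster          # placeholder
    job_image: my-user-code-image:latest   # placeholder
    job_namespace: dagster-user
    env_secrets:
      - topsecret
```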

Suraj Narwade

08/17/2021, 3:11 PM
Hi Daniel, thank you for your response. I am not using Helm to deploy Dagster; I'm using raw manifests. My question is more about why the daemon needs to know `your-secret-name-here`, i.e. the name of the envSecrets that the job pod needs.
And if the daemon injects those envSecrets into the job pods, then my question would be: why are they also injected into the user's kube deployment?

daniel

08/17/2021, 3:12 PM
Got it. The only reason I think it would need to know that is so that it can correctly configure the pods that it launches.
Similarly, since the user code deployment loads your pipeline code, if the secret is needed to load your pipeline then including it there would also make sense to me.

Suraj Narwade

08/17/2021, 3:14 PM
What do you mean by "loads the pipeline" 🤔? Correct me if I am wrong, but my understanding is that the user code deployment is only used to report the status of the job pod to Dagit.
I'm a bit confused here about the need for the secret in the user code deployment.
What I am trying to achieve is to separate Dagit and the daemon into one namespace and the user code deployment into another, and keep the configurations independent. But from what I can see, because the daemon needs to know these `envSecrets`, I can't achieve that.

daniel

08/17/2021, 3:16 PM
My understanding of the user code deployment is that it's responsible for loading your pipeline code, running a gRPC server, and passing metadata about your pipelines to dagit and the daemon
I don't think the daemon needs to be able to actually load the envSecrets, but it may need to know their names so that it can configure the pods that it creates

Suraj Narwade

08/17/2021, 3:20 PM
With that said, if I have two different pipelines which require two different secrets, say pipeline A needs secretA and pipeline B needs secretB, and I configure:

```yaml
envSecrets:
  - secretA
  - secretB
```

then both pipelines get both secretA and secretB, whereas that is not necessary, and could even be a security issue 🤔 WDYT?
In more depth: both user code deployments and the job pods for pipelines A and B will get secretA and secretB,
whereas pipeline A should get only secretA and pipeline B only secretB.

daniel

08/17/2021, 3:24 PM
Yeah, that's something we unfortunately don't have great support for with the k8s run launcher currently. I agree the best solution would be to let that vary on a per-pipeline basis.

Suraj Narwade

08/17/2021, 3:28 PM
Thanks Daniel for the response. Do you know of any users who are running user code deployments and jobs in different namespaces?

daniel

08/17/2021, 3:49 PM
i'm not aware of a specific user but it should be possible, since we allow you to configure the run launcher with a namespace
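The namespace knob daniel refers to is, as far as I can tell, the `job_namespace` field on the `K8sRunLauncher` config, which lets launched run pods land in a different namespace than Dagit and the daemon. A sketch (exact schema may differ by version):

```yaml
run_launcher:
  module: dagster_k8s
  class: K8sRunLauncher
  config:
    job_namespace: dagster-user   # run pods are created here, not in dagster-infra
```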

Suraj Narwade

08/18/2021, 7:49 AM
got it, thanks 🙂

daniel

08/18/2021, 1:11 PM
Hey, I forgot one other option here: you can also use tags on individual pipelines to configure the Kubernetes job that is spun up to run that pipeline, and that can potentially include your secrets: https://docs.dagster.io/deployment/guides/kubernetes/customizing-your-deployment That config needs to be set for each pipeline, though.

Suraj Narwade

08/18/2021, 1:22 PM
So that way my daemon doesn't need to know the secret, but the job pod will still get it. Interesting, thanks 🙂

daniel

08/18/2021, 1:30 PM
I don't think the daemon needs to know the secret with any of these approaches. You configure it with the secret's name, not the secret's value.
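To make the name-vs-value distinction concrete: the daemon's config lists only `topsecret` (the name), while the actual values live in the Secret object in the job namespace. A hypothetical manifest (the key and value are made up for illustration):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: topsecret          # this name is all the daemon config ever references
  namespace: dagster-user
type: Opaque
stringData:
  MY_API_KEY: s3cr3t-value # hypothetical; only the job pod sees this value
```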

Suraj Narwade

08/18/2021, 1:31 PM
Yeah, right, but I want to avoid even that bit.
I think, ideally, the daemon should only know about pipelines and higher-level configuration; secrets and other app-level bits should be handled at the user level itself.
I could be wrong, of course 😄
As you said, I've added the tags for the pipeline as shown below:

```python
@pipeline(
    tags={
        'dagster-k8s/config': {
            'container_config': {
                'envFrom': [{'secretRef': 'test-user-deployment-env-secret'}],
            },
        },
    },
)
def hello_pipeline():
    hello(get_name())
```
when I run it, I get the error:
Copy code
TypeError: __init__() got an unexpected keyword argument 'envFrom'
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/utils.py", line 27, in _fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/execution/launch_execution.py", line 16, in launch_pipeline_execution
    return _launch_pipeline_execution(graphene_info, execution_params)
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/execution/launch_execution.py", line 49, in _launch_pipeline_execution
    run = do_launch(graphene_info, execution_params, is_reexecuted)
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/execution/launch_execution.py", line 37, in do_launch
    pipeline_run.run_id, external_pipeline=external_pipeline
  File "/usr/local/lib/python3.7/site-packages/dagster/core/instance/__init__.py", line 1265, in submit_run
    run, external_pipeline=external_pipeline
  File "/usr/local/lib/python3.7/site-packages/dagster/core/run_coordinator/default_run_coordinator.py", line 34, in submit_run
    return self._instance.launch_run(pipeline_run.run_id, external_pipeline)
  File "/usr/local/lib/python3.7/site-packages/dagster/core/instance/__init__.py", line 1322, in launch_run
    self._run_launcher.launch_run(run, external_pipeline=external_pipeline)
  File "/usr/local/lib/python3.7/site-packages/dagster_k8s/launcher.py", line 254, in launch_run
    user_defined_k8s_config=user_defined_k8s_config,
  File "/usr/local/lib/python3.7/site-packages/dagster_k8s/job.py", line 471, in construct_dagster_k8s_job
    **user_defined_k8s_config.container_config,
cc @daniel

daniel

08/18/2021, 2:39 PM
You'd want the snake_case field names there: `env_from` rather than `envFrom`, and similarly `secret_ref`, not `secretRef`

Suraj Narwade

08/18/2021, 2:40 PM
Ah, my bad; it's a bit confusing coming from a client-go background 😄
Thanks 🙂
After changing it as shown:

```python
@pipeline(
    tags={
        'dagster-k8s/config': {
            'container_config': {
                'env_from': [{'secret_ref': 'test-user-deployment-env-secret'}],
            },
        },
    },
)
def hello_pipeline():
    hello(get_name())
```
now getting:
Copy code
KeyError: 'env_from'
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/utils.py", line 27, in _fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/execution/launch_execution.py", line 16, in launch_pipeline_execution
    return _launch_pipeline_execution(graphene_info, execution_params)
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/execution/launch_execution.py", line 49, in _launch_pipeline_execution
    run = do_launch(graphene_info, execution_params, is_reexecuted)
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/execution/launch_execution.py", line 37, in do_launch
    pipeline_run.run_id, external_pipeline=external_pipeline
  File "/usr/local/lib/python3.7/site-packages/dagster/core/instance/__init__.py", line 1265, in submit_run
    run, external_pipeline=external_pipeline
  File "/usr/local/lib/python3.7/site-packages/dagster/core/run_coordinator/default_run_coordinator.py", line 34, in submit_run
    return self._instance.launch_run(pipeline_run.run_id, external_pipeline)
  File "/usr/local/lib/python3.7/site-packages/dagster/core/instance/__init__.py", line 1322, in launch_run
    self._run_launcher.launch_run(run, external_pipeline=external_pipeline)
  File "/usr/local/lib/python3.7/site-packages/dagster_k8s/launcher.py", line 254, in launch_run
    user_defined_k8s_config=user_defined_k8s_config,
  File "/usr/local/lib/python3.7/site-packages/dagster_k8s/job.py", line 471, in construct_dagster_k8s_job
    **user_defined_k8s_config.container_config,

daniel

08/18/2021, 3:05 PM
ah it's possible we have to make a small change to construct_dagster_k8s_job to support this

Suraj Narwade

08/18/2021, 3:11 PM
Do you have an example of that?

daniel

08/18/2021, 3:13 PM
I think we may need a version of this PR (https://github.com/dagster-io/dagster/commit/f594f06fe7429e536f0da271ec350eaa8a13b55a) for the 'env_from' field

Suraj Narwade

08/18/2021, 3:14 PM
Ah okay. I forgot to mention that my Dagster version is 0.11.12

daniel

08/18/2021, 4:18 PM
If upgrading is an option (I don't think there are significant breaking changes since 0.11.12) I think we can get a fix for this into the release tomorrow

Suraj Narwade

08/20/2021, 7:37 AM
perfect, I'll test out the new release
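Pulling the thread together: with the tag-based approach, each pipeline can name only its own secret, which addresses the per-pipeline isolation concern raised earlier (pipeline A sees only secretA, pipeline B only secretB). A sketch using the snake_case keys from the working example in this thread; the helper function is hypothetical, not a Dagster API:

```python
def k8s_secret_tag(secret_name):
    """Build a per-pipeline dagster-k8s/config tag that injects a single
    Kubernetes secret into that pipeline's job pod via envFrom."""
    return {
        "dagster-k8s/config": {
            "container_config": {
                "env_from": [{"secret_ref": secret_name}],
            },
        },
    }

# Usage sketch: each pipeline carries only its own secret, e.g.
#   @pipeline(tags=k8s_secret_tag("secretA"))
#   def pipeline_a(): ...
#   @pipeline(tags=k8s_secret_tag("secretB"))
#   def pipeline_b(): ...
tag_a = k8s_secret_tag("secretA")
tag_b = k8s_secret_tag("secretB")
```

The daemon still sees the secret names via the tags, but the values stay in the job namespace, and no pipeline is handed a secret it does not need.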