#deployment-kubernetes

Jaewoo Park

08/22/2023, 3:20 PM
Hi all, small question. I have a Helm-deployed Dagster with just the default value for `dagsterHome` (https://github.com/dagster-io/dagster/blob/master/helm/dagster/values.yaml), and I noticed the default `dagsterHome` directory, `/opt/dagster/dagster_home`, doesn't actually hold anything. Does Helm-deployed Dagster generate a `dagster.yaml`? If not, how would you go about getting the `DagsterInstance`?

Quentin Gaborit

08/22/2023, 3:30 PM
By "doesn't hold anything", do you mean that the directory is empty when you ssh into your pods? To answer your first question: I think the `dagster.yaml` is generated by the Helm chart, with the templates for the file located at https://github.com/dagster-io/dagster/tree/master/helm/dagster/templates/helpers/instance. So your user-deployment image's `DAGSTER_HOME` needs to match the `dagsterHome:` defined in the Helm chart's `global` section.
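A minimal sketch of the relevant values.yaml setting (the path shown is the chart's default; any custom user-code image needs its `DAGSTER_HOME` environment variable to point at the same path):

```yaml
# values.yaml (sketch) -- global.dagsterHome is the directory where the
# chart expects the generated dagster.yaml to live; default shown here.
global:
  dagsterHome: "/opt/dagster/dagster_home"
```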

Jaewoo Park

08/22/2023, 3:32 PM
Yeah, empty when ssh-ed in, and `DagsterInstance.get()` returns a `dagster.yaml` not found error. Ah, yeah, it does seem like it templates a `dagster.yaml`, hmm.

ben

08/22/2023, 4:39 PM
Hi Jaewoo, yes, the `dagster.yaml` file is supposed to be created from a ConfigMap which is mounted as a volume in the deployment. Are you able to see this ConfigMap in your cluster? Which pod are you exec-ing into in this case?

Jaewoo Park

08/22/2023, 4:50 PM
Hey @ben, thanks for the follow-up 🙂 I was looking in the user-deployment pods. I see it in dagster-webserver, and I also see the `dagster-instance` ConfigMap. Is it also mounted into user deployments? For more context, I was wondering if I could call `execute_job` from within another job.

ben

08/22/2023, 4:58 PM
You should be able to access the instance from within the body of a job's op through `context.instance` (rather than `DagsterInstance.get()`). From there you could call `execute_job`; however, this is going to try to execute the job synchronously in the op. You may instead want to queue/submit the job so it runs through normal channels (e.g. in a separate pod). You can do this using the GraphQL API or by attaching a sensor: https://github.com/dagster-io/dagster/discussions/7160#discussioncomment-2473920

Jaewoo Park

08/22/2023, 5:54 PM
thanks for the reference! I’ll take a look!