# ask-community
s
Hi Dagster Team! I am facing a blocker in my dagster-k8s deployment. I am trying to mount a Python file into a user code deployment pod using volumes/volumeMounts configured in my Helm values.yaml (under `runLauncher` and `deployments`). I am doing this so Dagster refers to the Python file in the volume, giving me the flexibility to add files for new user code deployments without a Docker build (by uploading files directly into the volume and pointing to them by updating the workspace.yaml file in the volume as well). But I am getting this error (it looks like the Dagster gRPC server is not able to locate the file, even though I can see the file mounted in the user-code pod):
```
FileNotFoundError: [Errno 2] No such file or directory: '/example_project/example_repo/repo-2.py'
  File "/usr/local/lib/python3.7/site-packages/dagster/grpc/impl.py", line 81, in core_execute_run
    recon_pipeline.get_definition()
  File "/usr/local/lib/python3.7/site-packages/dagster/core/definitions/reconstruct.py", line 172, in get_definition
    defn = self.repository.get_definition().get_pipeline(self.pipeline_name)
  File "/usr/local/lib/python3.7/site-packages/dagster/core/definitions/reconstruct.py", line 81, in get_definition
    return repository_def_from_pointer(self.pointer)
  File "/usr/local/lib/python3.7/site-packages/dagster/core/definitions/reconstruct.py", line 648, in repository_def_from_pointer
    target = def_from_pointer(pointer)
  File "/usr/local/lib/python3.7/site-packages/dagster/core/definitions/reconstruct.py", line 569, in def_from_pointer
    target = pointer.load_target()
  File "/usr/local/lib/python3.7/site-packages/dagster/core/code_pointer.py", line 176, in load_target
    module = load_python_file(self.python_file, self.working_directory)
  File "/usr/local/lib/python3.7/site-packages/dagster/core/code_pointer.py", line 75, in load_python_file
    os.stat(python_file)
```
Happy to share my values.yaml file and other details over DM! Thank you!
@dagster team could you have a look at this? 😅
d
Hi Saurav - could you share what your deployments dictionary looks like and what dagster version you're using? There's an example of how you can set volumes and volume mounts in the grpc server here: https://docs.dagster.io/deployment/guides/kubernetes/deploying-with-helm#configure-your-user-deployment
s
Hi Daniel, I am using dagster 0.14.15. This is my deployments dict:
```yaml
dagster-user-deployments:
  deployments:
  - dagsterApiGrpcArgs:
    - --python-file
    - /example_project/example_repo/repo-2.py
    env:
      PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION: python
    envSecrets:
    - name: dagster-aws-access-key-id
    - name: dagster-aws-secret-access-key
    image:
      pullPolicy: Always
      repository: my-image-name
      tag: latest
    includeConfigInLaunchedRuns:
      enabled: true
    name: k8s-example-user-code-2
    port: 3030
    resources:
      limits:
        cpu: 250m
        memory: 500Mi
      requests:
        cpu: 250m
        memory: 500Mi
    volumeMounts:
    - mountPath: /example_project
      name: my-volume-dagster
      readOnly: false
    volumes:
    - name: my-volume-dagster
      persistentVolumeClaim:
        claimName: my-nfs
  enabled: true
pipelineRun:
  image:
    repository: my-image-name
    tag: latest
    pullPolicy: Always
  env:
    DAGSTER_K8S_PIPELINE_RUN_IMAGE: my-image-name
runLauncher:
  type: K8sRunLauncher
  config:
    image:
      repository: my-image-name
      tag: latest
      pullPolicy: Always
    envVars:
    - "DAGSTER_K8S_PIPELINE_RUN_IMAGE"
    k8sRunLauncher:
      envSecrets:
      - name: dagster-aws-access-key-id
      - name: dagster-aws-secret-access-key
    volumeMounts:
      - mountPath: /example_project
        name: my-volume-dagster
        readOnly: false
    volumes:
    - name: my-volume-dagster
      persistentVolumeClaim:
        claimName: my-nfs
```
d
If you `kubectl describe` the pod, are the volumes and volume mounts what you would expect?
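One way to run that check is sketched below; it needs a live cluster, and the label selector and pod name are placeholders, not values from this thread:

```shell
# List pods for the user code deployment (label selector is illustrative;
# check your chart's generated labels with `kubectl get pods --show-labels`)
kubectl get pods -l deployment=k8s-example-user-code-2

# Show the volumes and mounts on a specific pod (pod name is a placeholder)
kubectl describe pod <user-code-pod-name>

# Or extract just the container mount paths with jsonpath
kubectl get pod <user-code-pod-name> \
  -o jsonpath='{.spec.containers[*].volumeMounts[*].mountPath}'
```

If `/example_project` does not appear in the mount paths, the volume config never made it into the pod spec.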
s
For the dagster-user-deployment pods I expect /example_project to be present. But my end goal is for the run-worker pods to have the repo.py files I'd be uploading before deploying the user code via helm upgrade.
d
Do the dagster-user-deployment pods have the volumes mounted that you'd expect from your dagster-user-deployments config above?
s
Yes
d
Ok - so it sounds like this is more of a k8s question then? As to why, even though Dagster is configuring the volumes and volume mounts the way you would expect, the file in the volume doesn't seem to be available to load within Python?
s
Yes, seems like it. Correct me here: the file which Python loads is inside the run-worker pod, and it is not able to find the file in the run-worker pod, which is why I see the error even though the file is present in the user-code-deployment pod.
d
It needs to be able to load it in both the run worker pod and the user code deployment pod
which pod is producing the error that you pasted above?
s
I saw this error in the dagit UI, but the user code pod logs showed no error.
d
Where did you see it in the dagit UI?
s
In the logs, upon launching a run
d
what run launcher are you using?
s
K8sRunLauncher
d
Looks like the K8sRunLauncher
s
Yes
d
Ok, so this is in the run worker pod then, not the user code deployment pod.
If you `kubectl describe` that pod, does it have the volumes mounted like the user-code-deployment pod? And is it using the same image?
s
It is using the same image but not having the volume
d
Are your volumes and volumeMounts indented correctly? I see
```yaml
k8sRunLauncher:
      envSecrets:
      - name: dagster-aws-access-key-id
      - name: dagster-aws-secret-access-key
    volumeMounts:
      - mountPath: /example_project
        name: my-volume-dagster
        readOnly: false
    volumes:
    - name: my-volume-dagster
      persistentVolumeClaim:
        claimName: my-nfs
```
in your post. Do you want
```yaml
k8sRunLauncher:
      envSecrets:
      - name: dagster-aws-access-key-id
      - name: dagster-aws-secret-access-key
      volumeMounts:
        - mountPath: /example_project
          name: my-volume-dagster
          readOnly: false
      volumes:
      - name: my-volume-dagster
        persistentVolumeClaim:
          claimName: my-nfs
```
instead? volumeMounts and volumes should be children of k8sRunLauncher.
you can also set includeConfigInLaunchedRuns on the user code deployment to automatically include the volumes in the launched run without needing to set the volumes in two places: https://docs.dagster.io/deployment/guides/kubernetes/deploying-with-helm#configure-your-user-deployment
```yaml
includeConfigInLaunchedRuns:
  enabled: true
```
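For reference, a trimmed sketch of where that flag sits in the user deployment entry, using the names from the config earlier in the thread (other keys omitted for brevity):

```yaml
dagster-user-deployments:
  deployments:
  - name: k8s-example-user-code-2
    # With this enabled, the volumes, volumeMounts, env, and secrets set on
    # the user deployment are also applied to launched run pods, so they
    # don't have to be duplicated under runLauncher.
    includeConfigInLaunchedRuns:
      enabled: true
    volumeMounts:
    - mountPath: /example_project
      name: my-volume-dagster
    volumes:
    - name: my-volume-dagster
      persistentVolumeClaim:
        claimName: my-nfs
```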
s
I think it was a typo while pasting it here; I included the indentation in the actual helm upgrade.
d
Maybe worth double checking that it's actually being included? It would be consistent with what you're seeing if the volumes aren't there on the run worker pod.
s
Yes, it is working! It was indeed an indentation mistake, thanks! Will let you know if it breaks at any point.