
Mohammad Nazeeruddin

11/17/2021, 2:10 PM
Hi Team. When we execute pipelines, the k8s pod isn't created and we get the error below. We are not able to mount the path </home/orchestrator> in the dagit pod. Is there any way to mount this path (</home/orchestrator>) in the dagit pod? We are deploying with dagster-user-code-deployment.
FileNotFoundError: [Errno 2] No such file or directory: '/home/orchestrator/domains/auw/auw.py'
  File "/usr/local/lib/python3.8/site-packages/dagster/grpc/impl.py", line 75, in core_execute_run
    recon_pipeline.get_definition()
  File "/usr/local/lib/python3.8/site-packages/dagster/core/definitions/reconstructable.py", line 110, in get_definition
    defn = self.repository.get_definition().get_pipeline(self.pipeline_name)
  File "/usr/local/lib/python3.8/site-packages/dagster/core/definitions/reconstructable.py", line 46, in get_definition
    return repository_def_from_pointer(self.pointer)
  File "/usr/local/lib/python3.8/site-packages/dagster/core/definitions/reconstructable.py", line 518, in repository_def_from_pointer
    target = def_from_pointer(pointer)
  File "/usr/local/lib/python3.8/site-packages/dagster/core/definitions/reconstructable.py", line 460, in def_from_pointer
    target = pointer.load_target()
  File "/usr/local/lib/python3.8/site-packages/dagster/core/code_pointer.py", line 229, in load_target
    module = load_python_file(self.python_file, self.working_directory)
  File "/usr/local/lib/python3.8/site-packages/dagster/core/code_pointer.py", line 87, in load_python_file
    os.stat(python_file)
run_launcher:
      module: dagster_k8s.launcher
      class: K8sRunLauncher
      config:
        load_incluster_config: true
        job_namespace: 
          env : DAGSTER_K8S_PIPELINE_RUN_NAMESPACE
        service_account_name: dagsterr
        dagster_home:
          env: DAGSTER_HOME
        job_image: 
          env: DAGSTER_K8S_PIPELINE_RUN_IMAGE 
        image_pull_policy: Always
        postgres_password_secret:
          env: DAGSTER_K8S_PG_PASSWORD_SECRET
        instance_config_map:
          env: DAGSTER_K8S_INSTANCE_CONFIG_MAP 
        env_config_maps:
          - env: DAGSTER_K8S_PIPELINE_RUN_ENV_CONFIGMAP
        volume_mounts:
          - name: dagster-pv
            mountPath: /home
            subPath: dagster.yaml
        volumes: 
          - name: dagster-pv
            configMap: 
              name: dagster-instance
We configured this ^ in configmap-instance.yaml, but it's not working.

johann

11/17/2021, 2:18 PM
What are you trying to mount to Dagit? It looks like your volume is named “dagster-instance”, but the helm chart will already be mounting the instance configuration for you
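For context, the chart already gives dagit an instance mount roughly like the sketch below (paths reflect the chart defaults, not necessarily the exact rendered template), so re-mounting the dagster-instance ConfigMap yourself is redundant:
volumes:
  - name: dagster-instance
    configMap:
      name: dagster-instance
volumeMounts:
  - name: dagster-instance
    # DAGSTER_HOME default in the chart; a sketch, not the rendered output
    mountPath: /opt/dagster/dagster_home/dagster.yaml
    subPath: dagster.yaml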

Mohammad Nazeeruddin

11/17/2021, 2:25 PM
To access pipelines from this location (/home/orchestrator/) when creating k8s jobs:
        volume_mounts:
          - name: dagster-pv
            mountPath: /home
            subPath: dagster.yaml
        volumes:
          - name: dagster-pv
            configMap:
              name: dagster-instance
Is this correct?

johann

11/17/2021, 2:33 PM
We generally recommend deploying your pipelines in separate user code servers, rather than including them directly in the Dagit pod. https://docs.dagster.io/deployment/guides/kubernetes/deploying-with-helm Is that an option?

Mohammad Nazeeruddin

11/17/2021, 3:06 PM
Yes, we have separate user code services. Those services have separate pods, and those pods are mounted with this path, /home/orchestrator/. When we commented out the run_launcher it worked fine, but when we use the run_launcher we get the error: FileNotFoundError: [Errno 2] No such file or directory: '/home/orchestrator/domains/auw/auw.py'.

daniel

11/17/2021, 3:07 PM
I believe you want to mount those volumes as part of the run launcher config - you don't need them in the dagit pod, you need them in the pod that the run launcher creates
👍 1
Using this config: https://github.com/dagster-io/dagster/blob/master/helm/dagster/values.yaml#L394-L413 (similar to how you set up your user code deployments, I think?)
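For reference, the runLauncher section of the chart's values accepts volumes and volumeMounts; a minimal sketch assuming the pipeline code lives on an EFS-backed PVC (the claim name is a placeholder for whatever actually provides /home/orchestrator):
runLauncher:
  type: K8sRunLauncher
  config:
    k8sRunLauncher:
      # Hypothetical EFS-backed claim holding the pipeline code
      volumes:
        - name: orchestrator-code
          persistentVolumeClaim:
            claimName: dagster-efs-pvc
      # Mount it at the same path the user code deployment expects
      volumeMounts:
        - name: orchestrator-code
          mountPath: /home/orchestrator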

Mohammad Nazeeruddin

11/18/2021, 12:30 PM
"you need them in the pod that the run launcher creates" > does that mean the user code deployment pods?
We get this error when we use the K8sRunLauncher config from values.yaml:
dagster.check.ParameterCheckError: Param "job_image" is not a str. Got None which is type <class 'NoneType'>.

  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/utils.py", line 34, in _fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/execution/launch_execution.py", line 16, in launch_pipeline_execution
    return _launch_pipeline_execution(graphene_info, execution_params)
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/execution/launch_execution.py", line 50, in _launch_pipeline_execution
    run = do_launch(graphene_info, execution_params, is_reexecuted)
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/execution/launch_execution.py", line 38, in do_launch
    workspace=graphene_info.context,
  File "/usr/local/lib/python3.7/site-packages/dagster/core/instance/__init__.py", line 1386, in submit_run
    SubmitRunContext(run, workspace=workspace)
  File "/usr/local/lib/python3.7/site-packages/dagster/core/run_coordinator/default_run_coordinator.py", line 32, in submit_run
    self._instance.launch_run(pipeline_run.run_id, context.workspace)
  File "/usr/local/lib/python3.7/site-packages/dagster/core/instance/__init__.py", line 1450, in launch_run
    self._run_launcher.launch_run(LaunchRunContext(pipeline_run=run, workspace=workspace))
  File "/usr/local/lib/python3.7/site-packages/dagster_k8s/launcher.py", line 281, in launch_run
    else self.get_static_job_config()
  File "/usr/local/lib/python3.7/site-packages/dagster_k8s/launcher.py", line 218, in get_static_job_config
    job_image=check.str_param(self._job_image, "job_image"),
  File "/usr/local/lib/python3.7/site-packages/dagster/check/__init__.py", line 270, in str_param
    raise _param_type_mismatch_exception(obj, str, param_name)

daniel

11/18/2021, 1:09 PM
not the user code deployment pods, no - the k8s run launcher creates a brand new pod for each run
👍 1

Mohammad Nazeeruddin

11/18/2021, 1:13 PM
####################################################################################################
# Run Launcher: Configuration for run launcher
####################################################################################################
runLauncher:
  # Type can be one of [K8sRunLauncher, CeleryK8sRunLauncher, CustomRunLauncher]
  type: K8sRunLauncher

  config:
    # This configuration will only be used if the K8sRunLauncher is selected
    k8sRunLauncher:

      # Change with caution! If you're using a fixed tag for pipeline run images, changing the
      # image pull policy to anything other than "Always" will use a cached/stale image, which is
      # almost certainly not what you want.
      imagePullPolicy: "Always"

      ## The image to use for the launched Job's Dagster container.
      ## The `pullPolicy` field is ignored. Use`imagePullPolicy` instead.
      # image:
      #   repository: ""
      #   tag: ""
      #   pullPolicy: Always

      # The K8s namespace where new jobs will be launched.
      # By default, the release namespace is used.
      jobNamespace: dagster

      # Set to true to load kubeconfig from within cluster.
      loadInclusterConfig: true

      # File to load kubeconfig from. Only set this if loadInclusterConfig is false.
      kubeconfigFile: ~

      # Additional environment variables can be retrieved and set from ConfigMaps for the Job. See:
      # <https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables>
      #
      # Example:
      #
      # envConfigMaps:
      #   - name: config-map
      envConfigMaps: 
        - name: dagster-pipeline-env

      # Additional environment variables can be retrieved and set from Secrets for the Job. See:
      # <https://kubernetes.io/docs/concepts/configuration/secret/#use-case-as-container-environment-variables>
      #
      # Example:
      #
      # envSecrets:
      #   - name: secret
      envSecrets: []

      # Additional variables from the existing environment can be passed into the Job.
      #
      # Example:
      #
      # envVars:
      #   - "ENV_VAR"
      envVars: []

      # Additional volumes that should be included in the Job's Pod. See:
      # <https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#volume-v1-core>
      #
      # Example:
      #
      # volumes:
      #   - name: my-volume
      #     configMap: my-config-map
      volumes: []

      # Additional volume mounts that should be included in the container in the Job's Pod. See:
      # See: <https://v1-18.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#volumemount-v1-core>
      #
      # Example:
      #
      # volumeMounts:
      #   - name: test-volume
      #     mountPath: /opt/dagster/test_folder
      #     subPath: test_file.yaml
      volumeMounts: []
values.yaml config for the K8sRunLauncher ^
run_launcher:
      {{- $runLauncherType := .Values.runLauncher.type }}

      {{- if eq $runLauncherType "K8sRunLauncher" }}
        {{- include "dagsterYaml.runLauncher.k8s" . | indent 6 -}}
      
      {{- end }}
in configmap-instance.yaml ^
Getting this error: dagster.check.ParameterCheckError: Param "job_image" is not a str. Got None which is type <class 'NoneType'>.
Using version 0.12.14.

daniel

11/18/2021, 2:11 PM
What that error indicates is that your user code deployment pod isn't saying what image it should use. The way our helm chart ensures that it is set is by setting this environment variable on each deployment: https://sourcegraph.com/github.com/dagster-io/dagster/-/blob/helm/dagster/charts/dagster-user-deployments/templates/deployment-user.yaml?L46 If you're using the built-in helm chart, that value should be set automatically on each of your user code deployments. If you're not, you should ensure that it is set on each one so that the run launcher knows what image to use.
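For illustration only, the rendered Deployment for each user code server ends up with an env entry roughly like the following (the value is whatever image that deployment runs; the one shown matches the image used later in this thread):
env:
  # Set by the dagster-user-deployments chart so the run launcher knows
  # which image to launch runs with
  - name: DAGSTER_CURRENT_IMAGE
    value: "dagsterimage0/user-code-example-v1:latest"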

Mohammad Nazeeruddin

11/18/2021, 3:36 PM
run_launcher:
      module: dagster_k8s.launcher
      class: K8sRunLauncher
      config:
        load_incluster_config: true
        job_namespace: 
          env : DAGSTER_K8S_PIPELINE_RUN_NAMESPACE
        service_account_name: dagsterr
        dagster_home:
          env: DAGSTER_HOME
        job_image: 
          env: DAGSTER_K8S_PIPELINE_RUN_IMAGE 
        image_pull_policy: Always
        postgres_password_secret:
          env: DAGSTER_K8S_PG_PASSWORD_SECRET
        instance_config_map:
          env: DAGSTER_K8S_INSTANCE_CONFIG_MAP 
        env_config_maps:
          - env: DAGSTER_K8S_PIPELINE_RUN_ENV_CONFIGMAP
I configured this as well but got this error:
FileNotFoundError: [Errno 2] No such file or directory: '/home/orchestrator/domains/auw/auw.py'
  File "/usr/local/lib/python3.8/site-packages/dagster/grpc/impl.py", line 75, in core_execute_run

daniel

11/18/2021, 3:37 PM
looks like you need to add volume mounts too. that can also be configured in the run launcher
or tagged on the job
❤️ 1
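A sketch of what the run_launcher section of configmap-instance.yaml might look like if /home/orchestrator comes from an EFS-backed PVC rather than the dagster-instance ConfigMap (the claim name is hypothetical; keep the rest of the config as above):
run_launcher:
  module: dagster_k8s.launcher
  class: K8sRunLauncher
  config:
    # ... job_image, instance_config_map, env_config_maps, etc. as above ...
    # Mount the volume that actually contains the pipeline code, not the
    # instance ConfigMap (hypothetical PVC name)
    volumes:
      - name: orchestrator-code
        persistentVolumeClaim:
          claimName: dagster-efs-pvc
    volume_mounts:
      - name: orchestrator-code
        mountPath: /home/orchestrator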

Mohammad Nazeeruddin

11/18/2021, 3:38 PM
run_launcher:
      module: dagster_k8s.launcher
      class: K8sRunLauncher
      config:
        load_incluster_config: true
        job_namespace: 
          env : DAGSTER_K8S_PIPELINE_RUN_NAMESPACE
        service_account_name: dagsterr
        dagster_home:
          env: DAGSTER_HOME
        job_image: 
          env: DAGSTER_K8S_PIPELINE_RUN_IMAGE 
        image_pull_policy: Always
        postgres_password_secret:
          env: DAGSTER_K8S_PG_PASSWORD_SECRET
        instance_config_map:
          env: DAGSTER_K8S_INSTANCE_CONFIG_MAP 
        env_config_maps:
          - env: DAGSTER_K8S_PIPELINE_RUN_ENV_CONFIGMAP
        volume_mounts:
          - name: dagster-pv
            mountPath: /home
            subPath: dagster.yaml
        volumes: 
          - name: dagster-pv
            configMap: 
              name: dagster-instance
I tried this as well. Did I miss anything?

daniel

11/18/2021, 3:39 PM
You need whatever volume mount provides the file it says is missing; I don't know enough about your setup to know which volume mount that is
or wherever that file is supposed to come from - that needs to be configured on the run launcher or already available on the image
Is DAGSTER_K8S_PIPELINE_RUN_IMAGE set to the value of the image that you want to run the pipeline in? If not, you need to supply it from the user code deployment using that DAGSTER_CURRENT_IMAGE environment variable I mentioned above

Mohammad Nazeeruddin

11/18/2021, 3:49 PM
"Is DAGSTER_K8S_PIPELINE_RUN_IMAGE set to the value of the image that you want to run the pipeline in" > yes.
I'm trying to run pipelines from the image in DAGSTER_K8S_PIPELINE_RUN_IMAGE; I found that env var in the dagit pod.
I didn't find DAGSTER_CURRENT_IMAGE in dagit. Should it be in the service pods?

daniel

11/18/2021, 3:51 PM
dagit doesn't have that environment variable. it needs to be set on your user code deployments, the helm chart is supposed to do it for you if you're using the helm chart: https://sourcegraph.com/github.com/dagster-io/dagster/-/blob/helm/dagster/charts/dagster-user-deployments/templates/deployment-user.yaml?L46
if you're not using the helm chart, you will need to set it yourself
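If the user code servers are deployed via the chart's dagster-user-deployments section, a minimal sketch looks something like the following (the deployment name is hypothetical, and the image and file path are taken from this thread); the chart derives DAGSTER_CURRENT_IMAGE from the image fields:
dagster-user-deployments:
  enabled: true
  deployments:
    - name: "auw-user-code"   # hypothetical name
      image:
        repository: "dagsterimage0/user-code-example-v1"
        tag: "latest"
        pullPolicy: Always
      dagsterApiGrpcArgs:
        - "--python-file"
        - "/home/orchestrator/domains/auw/auw.py"
      port: 3030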

Mohammad Nazeeruddin

11/18/2021, 3:55 PM
Okay, I will try. Thank you.
I hope it will resolve my issue 😊.
I tried with the DAGSTER_CURRENT_IMAGE env var; it is created in the repo service pods, but for dagit I think DAGSTER_K8S_PIPELINE_RUN_IMAGE is created from values.yaml with the configs below.
# `DAGSTER_K8S_PIPELINE_RUN_IMAGE` environment variable will point to the image specified below.
# The run config for the celery executor can set `job_image` to fetch from environment variable
# `DAGSTER_K8S_PIPELINE_RUN_IMAGE`, so that celery workers will launch k8s jobs with said image.
#
####################################################################################################
pipelineRun:
  image:
    # When a tag is not supplied for a Dagster provided image,
    # it will default as the Helm chart version.
    repository: "dagsterimage0/user-code-example-v1"
    tag: "latest"
    pullPolicy: Always
job_image:
  env: DAGSTER_K8S_PIPELINE_RUN_IMAGE
We configured everything on our side: we are able to create the dagit, daemon, and repo service pods successfully, and everything works smoothly. But then we configured the dagster-k8s run launcher changes in https://github.com/dagster-io/dagster/blob/master/helm/dagster/templates/configmap-instance.yaml. We configured the code below in this configmap-instance.yaml file.
run_launcher:
      module: dagster_k8s.launcher
      class: K8sRunLauncher
      config:
        load_incluster_config: true
        job_namespace: 
          env : DAGSTER_K8S_PIPELINE_RUN_NAMESPACE
        service_account_name: dagsterr
        dagster_home:
          env: DAGSTER_HOME
        job_image: 
          env: DAGSTER_K8S_PIPELINE_RUN_IMAGE 
        image_pull_policy: Always
        postgres_password_secret:
          env: DAGSTER_K8S_PG_PASSWORD_SECRET
        instance_config_map:
          env: DAGSTER_K8S_INSTANCE_CONFIG_MAP 
        env_config_maps:
          - env: DAGSTER_K8S_PIPELINE_RUN_ENV_CONFIGMAP
But the issue is that when we execute pipelines we get the error below.
FileNotFoundError: [Errno 2] No such file or directory: '/home/orchestrator/domains/auw/auw.py'
  File "/usr/local/lib/python3.8/site-packages/dagster/grpc/impl.py", line 75, in core_execute_run
We access the repo code from EFS (file system) at "/home/orchestrator/domains/auw/auw.py". Without k8s it works fine, but with k8s we get the FileNotFoundError above. When we execute pipelines to create the k8s job, I'm assuming it looks in the dagit pod for code at /home/orchestrator/domains, but no code is available there.

daniel

11/22/2021, 1:54 PM
Dagit never loads your code
You may need to include the volume mounts on your run launcher config

Mohammad Nazeeruddin

11/22/2021, 1:55 PM
Is this correct?
run_launcher:
      module: dagster_k8s.launcher
      class: K8sRunLauncher
      config:
        load_incluster_config: true
        job_namespace: 
          env : DAGSTER_K8S_PIPELINE_RUN_NAMESPACE
        service_account_name: dagster
        dagster_home:
          env: DAGSTER_HOME
        job_image: 
          env: DAGSTER_K8S_PIPELINE_RUN_IMAGE 
        image_pull_policy: Always
        postgres_password_secret:
          env: DAGSTER_K8S_PG_PASSWORD_SECRET
        instance_config_map:
          env: DAGSTER_K8S_INSTANCE_CONFIG_MAP 
        env_config_maps:
          - env: DAGSTER_K8S_PIPELINE_RUN_ENV_CONFIGMAP
        volume_mounts:
          - name: dagster-pv
            mountPath: /home
            subPath: dagster.yaml
        volumes: 
          - name: dagster-pv
            configMap: 
              name: dagster-instance

daniel

11/22/2021, 1:58 PM
I’m not sure, I don’t know which mount in your code has that file. But the error seems to indicate that it can’t find that file so some mount may be missing

Mohammad Nazeeruddin

11/22/2021, 2:03 PM
By "file", do you mean the configmap or the EFS mounting config file?

daniel

11/22/2021, 2:08 PM
I’m not sure - that seems specific to something you set up in your pod
Dagster just needs the file to be available in the same place it is in the user code deployment pod
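Put differently, whatever volume backs /home/orchestrator on the user code deployment pod needs an equivalent declaration for launched runs, mounted at the same path; a short sketch under that assumption (PVC name hypothetical):
# Declare the same code volume on the user code deployment (Helm values) and
# in the run launcher config, so the run pod sees the same /home/orchestrator
volumes:
  - name: orchestrator-code
    persistentVolumeClaim:
      claimName: dagster-efs-pvc
volumeMounts:
  - name: orchestrator-code
    mountPath: /home/orchestrator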

johann

11/22/2021, 2:14 PM
For what it's worth, you likely have a typo in your service account name (service_account_name: dagsterr). Probably not related unless you're mounting EFS in some fancy way.

Mohammad Nazeeruddin

11/22/2021, 2:22 PM
Actually the service account name is dagster; it's not an issue with that name, the actual name is dagster.
When I ran kubectl get sa I got the account name dagster.
Error: failed to start container "dagster-run-54f36bf7-1cf8-47a9-bcf8-05d0dee75c97": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/kubelet/pods/c7b03ad8-aee4-4062-a914-cf35caf7817a/volume-subpaths/dagster-instance/dagster-run-54f36bf7-1cf8-47a9-bcf8-05d0dee75c97/0" to rootfs at "/home/dagster.yaml" caused: open /var/lib/docker/overlay2/d59ddcd943de2a032af0882372bcc0c9255fe6484be4f841f860562a6faa468d/merged/home/dagster.yaml: read-only file system: unknown
We got this error when we tried to mount it in the dagit pod.
run_launcher:
      module: dagster_k8s.launcher
      class: K8sRunLauncher
      config:
        load_incluster_config: true
        job_namespace: 
          env : DAGSTER_K8S_PIPELINE_RUN_NAMESPACE
        service_account_name: dagster
        dagster_home:
          env: DAGSTER_HOME
        job_image: udp-pipeline-orchestrator:v0.0.7
        image_pull_policy: Always
        instance_config_map:
          env: DAGSTER_K8S_INSTANCE_CONFIG_MAP
        env_config_maps:
          - env: DAGSTER_K8S_PIPELINE_RUN_ENV_CONFIGMAP
        volume_mounts:
          - name: dagster-pv-a38
            mountPath: /home
            subPath: dagster.yaml
        volumes: 
          - name: dagster-pv-a38
            configMap: 
              name: dagster-instance