
Noah Sanor

07/02/2021, 3:27 PM
Hi, I'm trying to get dagster deployed to my local kind k8s cluster and I'm having some issues. After installing the helm chart and navigating to local dagit, I see an error in my repository:
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.UNAVAILABLE
	details = "failed to connect to all addresses"
	debug_error_string = "{"created":"@1625239477.655628200","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3008,"referenced_errors":[{"created":"@1625239477.655621700","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":397,"grpc_status":14}]}"
>
Code and more details in 🧵.
My user code Dockerfile (taken from the docs):
ARG BASE_IMAGE
FROM "${BASE_IMAGE}"

ARG DAGSTER_VERSION

# ==> Add Dagster layer
RUN \
# Cron
    apt-get update -yqq \
    && apt-get install -yqq cron

RUN \
# Dagster
    pip install \
        dagster==${DAGSTER_VERSION} \
        dagster-postgres==${DAGSTER_VERSION} \
        dagster-celery[flower,redis,kubernetes]==${DAGSTER_VERSION} \
        dagster-aws==${DAGSTER_VERSION} \
        dagster-k8s==${DAGSTER_VERSION} \
        dagster-celery-k8s==${DAGSTER_VERSION} \
# Cleanup
    &&  rm -rf /var \
    &&  rm -rf /root/.cache  \
    &&  rm -rf /usr/lib/python2.7 \
    &&  rm -rf /usr/lib/x86_64-linux-gnu/guile

COPY data_pipelines/ /
values.yaml:
dagster-user-deployments:
  enabled: true
  deployments:
    - name: "user-code"
      image:
        repository: "dagster-user-code"
        tag: latest
        pullPolicy: Always
      dagsterApiGrpcArgs:
        - "-f"
        - "/repository.py"
      port: 3030
I am able to run this locally outside of the cluster without any issues.

daniel

07/02/2021, 5:02 PM
hi noah - are there any logs from your user code deployment pod / does it show up as a running pod in kubectl? would it be possible to paste the output of 'kubectl get pods' when pointed at your cluster?

Noah Sanor

07/02/2021, 6:05 PM
Looks like I'm having some issues resolving module paths:
dagster.core.errors.DagsterImportError: Encountered ImportError: `No module named '{REDACTED}'` while importing module repository from file /data_pipelines/repository.py. Local modules were resolved using the working directory `/`. If another working directory should be used, please explicitly specify the appropriate path using the `-d` or `--working-directory` for CLI based targets or the `working_directory` configuration option for `python_file`-based workspace.yaml targets.

daniel

07/02/2021, 6:05 PM
ah interesting - but you don't see it when you start the container locally?

Noah Sanor

07/02/2021, 6:07 PM
When getting it working locally, I'm running outside of a container

daniel

07/02/2021, 6:09 PM
ah got it. Bit tricky to evaluate this without seeing the code - but the module should be included in that COPY step at the end, I assume?

Noah Sanor

07/02/2021, 6:22 PM
Yea, I understand. The copy does grab all files, and I can confirm they are in the container. Anyway, I think I can take it from here. Thanks for the help!

Solaris Wang

12/28/2021, 6:47 PM
@daniel @Noah Sanor new to dagster but wondering if this was resolved? i'm running dagster in minikube and have possibly this same relative imports issue. my repo structure is conceptually:
workspace/
├── repo_1/
│   ├── repo.py
│   └── jobs/
│       └── module.py
└── repo_2/
    └── repo.py
however, the pods running my user code die due to a relative import error (pic 1) in workspaces (pic 2). i've tried python_file, python_package, and adding working_directory, but the working directory in the terminal screenshot has not changed from '/', making me think workspace.yaml isn't related to whatever path is actually being used to resolve the import. hope that made sense
if it helps give a clue, the import issue goes away and my pods run fine if in repo.py i change `from jobs.module import module` to the more fully qualified path of `from workspace_cbh.repo_dmg.jobs.module import module`

daniel

12/28/2021, 9:18 PM
Hi Solaris - are you sure the changes you made to your workspace.yaml are getting picked up? asking because the workspace.yaml in your screenshot isn't in a valid format (it says `python_package:` but the keys below it, like `relative_path`, are for `python_file:`), in a way that should be crashing dagit on startup with something like 'Errors while loading workspace config'. If that isn't happening, it may indicate that your changes to workspace.yaml aren't getting picked up for some reason.
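For reference, a valid `python_file` entry looks something like this (the paths here are placeholders, not necessarily your actual layout):
load_from:
  - python_file:
      relative_path: repo_1/repo.py
      working_directory: repo_1
whereas a `python_package` entry takes a `package_name` key rather than `relative_path`.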

Solaris Wang

12/29/2021, 12:37 AM
@daniel yes, i suspect dagit is not able to find the workspace - i read somewhere that dagit has to be in the same directory to see workspace.yaml. i fixed the workspace yaml and tested it locally fine, still no luck in the pods. should i set `DAGSTER_HOME` or some spec in the helm chart to reference the workspace path?

daniel

12/29/2021, 12:41 AM
If you navigate to the workspace tab in dagit, it should tell you what it thinks the config is for each location in the workspace - you can use that to sanity check that dagit is actually picking up changes to your workspace.yaml (for example, if a working directory is set, it should show up on that tab). How are you putting the workspace.yaml in the dagit pod - is it copied in the Dockerfile or maybe mounted as a volume? If it's the former, you might need to rebuild the image every time you make changes to the workspace.yaml.

Solaris Wang

12/29/2021, 1:02 AM
not seeing the working directory in the UI. it's copied by the dockerfile (`COPY . /`) and the image is rebuilt to minikube with modifications to anything (workspace, helm, py, etc). pods were also restarted for good measure

daniel

12/29/2021, 1:33 AM
Ah, ok, I think I see what's going on here. Are you possibly using the helm chart? If you are, the way that the workspace is configured is a little different, and is described here: https://docs.dagster.io/deployment/guides/kubernetes/deploying-with-helm#configure-your-user-deployment You shouldn't need to create your own workspace.yaml if you are using the helm chart - the helm chart creates it for you based on what's set in your `dagster-user-deployments` in the helm chart. The way to set a working directory there is to add
- "--working-directory"
- "workspace_cbh/repo_dag/jobs"
under `dagsterApiGrpcArgs`. If that's right, sorry about sending you in the wrong direction originally - this could be a lot clearer in the docs I think.
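Putting that together, the deployment entry in your helm values would end up looking roughly like this (a sketch - the name, image, and file path here are guesses, the `dagsterApiGrpcArgs` part is what matters):
dagster-user-deployments:
  enabled: true
  deployments:
    - name: "user-code"
      image:
        repository: "my-user-code-image"
        tag: latest
        pullPolicy: Always
      dagsterApiGrpcArgs:
        - "-f"
        - "/repo.py"
        - "--working-directory"
        - "workspace_cbh/repo_dag/jobs"
      port: 3030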
The reason it looks different and is configured differently is that when you're using the helm chart, dagit doesn't worry about spinning up the code servers that give it information about your code (the way that it does if you use `python_file` or `python_package` in your workspace.yaml) - instead, it assumes that the helm chart has created those servers separately, and just needs to access them via hostname and port.
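Concretely, the workspace.yaml the chart generates for dagit looks roughly like this (a sketch - the host is the service name of your user code deployment, which depends on your release and deployment names):
load_from:
  - grpc_server:
      host: "user-code"
      port: 3030
      location_name: "user-code"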

Solaris Wang

12/29/2021, 2:12 AM
IT WORKED! thus concludes 2 days of ongoing research :blob_tired: thank you!! re: your explanation, does that mean other cli args would work as well if you added them to the helm chart in the same area?

daniel

12/29/2021, 2:33 AM
glad it worked out 🙂 If you run `dagster api grpc --help` it will give you all the arguments you can use there.
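For example, `--attribute` lets you target a specific repository (or a function that returns one) within the file - a sketch, with a hypothetical repository name:
dagsterApiGrpcArgs:
  - "-f"
  - "/repo.py"
  - "--attribute"
  - "my_repository"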

Frank Lu

02/16/2022, 8:17 AM
i’m also using the helm chart deployment and I’m running into the same issue. here’s my repo structure
app
├── Dockerfile
├── Makefile
├── README.md
├── basic_ml
│   ├── README.md
│   ├── basic_ml.py
│   └── repository.py
└── workspace.yaml
the files are using relative imports and work fine when spinning up the dagit server locally, but when I used the helm chart with these grpc args:
- args:
        - api
        - grpc
        - -h
        - 0.0.0.0
        - -p
        - "3030"
        - --package-name
        - basic_ml
        - --working-directory
        - /app/basic_ml
It’s still giving me the error
dagster.core.errors.DagsterImportError: Encountered ImportError: `No module named 'basic_ml'` while importing module basic_ml. Local modules were resolved using the working directory `/app/basic_ml`. If another working directory should be used, please explicitly specify the appropriate path using the `-d` or `--working-directory` for CLI based targets or the `working_directory` configuration option for workspace targets.
I’m using poetry btw in my dockerfile to install all my dependencies instead of pip.

daniel

02/16/2022, 12:58 PM
Hi frank - would you mind sharing the contents of your workspace.yaml file that's working when you run dagit locally?
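In the meantime, one thing worth checking: with `--package-name basic_ml`, Python needs to be able to import the `basic_ml` package, so the working directory usually needs to be the package's parent rather than the package itself - a sketch, assuming your Dockerfile copies the app directory to /app:
- args:
        - api
        - grpc
        - -h
        - 0.0.0.0
        - -p
        - "3030"
        - --package-name
        - basic_ml
        - --working-directory
        - /app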