# deployment-kubernetes
j
I’m trying to set up a simple k8s deployment in minikube with my own user code built on a distroless-based image. I’ve done a `helm install` and all 4 pods (dagit, daemon, user-code, postgres) stand up without error from k8s’ perspective. When I open the dagit UI, however, the status of the user-code location shows that oh-so-annoying `gRPC Error code: UNAVAILABLE` error message. When I do a `kubectl logs <user_code_pod>` I get no logs returned, not a single log line. This is odd because I thought I’d at least see something like `Started Dagster code server for file /example_project/example_repo/repo.py on port 3030 in process 1`. In a separate, isolated docker container I have proven that the `dagster api grpc` command works and will kick off the server with the expected message above, but I don’t see it in my helm deployment. My `values.yaml` file is minimally altered from the k8s deployment docs example: only enough to add some secrets and point to my local custom user-code image. I have no idea where to look next to debug. Any thoughts?
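For reference, this is roughly how I verified the gRPC server in that isolated container (the image tag `my-user-code:dev` is just a placeholder for my local build; the file path and port are the ones from my setup above):
```sh
# run the Dagster gRPC code server directly in the user-code image
# my-user-code:dev is a placeholder for the locally built image
docker run --rm -p 3030:3030 my-user-code:dev \
  dagster api grpc \
  --python-file /example_project/example_repo/repo.py \
  --host 0.0.0.0 \
  --port 3030
```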
j
does describing the pod give you any additional info? is the pod indeed running and not crash looping?
j
no crash loop. `kubectl describe pod` has not produced any other messages that seem to help, other than confirming that its startup command is correct and that the pod actually spins up without error.
The only error I see among the containers is from the dagit container:
```
Readiness probe failed: Get "http://172.17.0.9:80/dagit_info": dial tcp 172.17.0.9:80: connect: connection refused
```
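In case it helps anyone reproduce: that probe can also be checked by hand with a port-forward (the pod name is a placeholder; grab the real one from `kubectl get pods`):
```sh
# forward local port 8080 to the dagit pod's port 80
kubectl port-forward pod/<dagit_pod> 8080:80
# then, in another terminal, hit the readiness endpoint the probe uses
curl http://localhost:8080/dagit_info
```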
ah, I think I figured it out. It seems that I was not in the right `kubectl` context. It was similar but in a different namespace, so I wonder if dagit was trying to connect to a pod that didn’t exist because of the misconfigured namespace. All I had to do was switch to the right context and it started working again.
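For posterity, checking and fixing the context looked roughly like this (the context and namespace names here are placeholders for whatever yours are called):
```sh
# list contexts and see which one is active (marked with *)
kubectl config get-contexts
# switch to the intended context ("minikube" here is an assumption)
kubectl config use-context minikube
# or pin the right namespace on the current context ("dagster" is a placeholder)
kubectl config set-context --current --namespace=dagster
```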