# deployment-kubernetes
j
Hello (again) - still trying to get my head around the Dockerfile setup for my own UserDeployment (the "k8s-example" works fine). I'm using the example Dockerfile from k8s-example, and there's a section `# ==> Add user code layer` where I'm adding my `pip install` dependencies, but other than that, what else should be included? The example project and Dockerfile seem to work fine as is, but with all my tinkering my own pod still errors out: `kubectl get pods` shows `CrashLoopBackOff` for the new pod, with the log below. I'm also able to successfully run the deploy_docker example locally.
```
...
  Normal   Pulled     11s (x3 over 26s)  kubelet            Successfully pulled image "account.dkr.ecr.region.amazonaws.com/dagster-test:latest"
  Normal   Created    10s (x3 over 26s)  kubelet            Created container dagster
  Normal   Started    10s (x3 over 26s)  kubelet            Started container dagster
  Warning  BackOff    9s (x5 over 24s)   kubelet            Back-off restarting failed container
```
If it helps, I'm using AWS EKS (managed), but I think this is just an issue with how I've set up the Dockerfile. I'm trying to retrofit the dbt_example into the k8s_example.
c
I did the same thing earlier today and initially had the pod failing, but it turned out I had set up the directory structure slightly differently than what the default Helm `values.yaml` was expecting. I was able to find the exact error with `kubectl logs <pod>`.
If it helps, here’s my Dockerfile:
```dockerfile
FROM python:3.7.8-slim

ARG DAGSTER_VERSION=0.10.1

# ==> Add Dagster layer
RUN \
# Cron
       apt-get update -yqq \
    && apt-get install -yqq cron \
# Dagster
    && pip install \
        dagster==${DAGSTER_VERSION} \
        dagster-postgres==${DAGSTER_VERSION} \
        dagster-celery[flower,redis,kubernetes]==${DAGSTER_VERSION} \
        dagster-aws==${DAGSTER_VERSION} \
        dagster-k8s==${DAGSTER_VERSION} \
        dagster-celery-k8s==${DAGSTER_VERSION} \
# Cleanup
    &&  rm -rf /var \
    &&  rm -rf /root/.cache  \
    &&  rm -rf /usr/lib/python2.7 \
    &&  rm -rf /usr/lib/x86_64-linux-gnu/guile

# ==> Add user code layer
# Example pipelines
COPY . /
```
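For your own code, the user code layer is basically just getting your project and its extra dependencies into the image. A rough sketch of what I'd expect it to look like for a project with its own requirements.txt and setup.py (the `/my_project` path and filenames are placeholders - the path has to match whatever you point the Helm chart at):

```dockerfile
# ==> Add user code layer (sketch - paths/filenames are placeholders)
# Copy your project into the image; this path must match the file you
# reference in the Helm chart's dagsterApiGrpcArgs
COPY . /my_project

# Install any extra dependencies your pipelines need (dbt, etc.) and, if your
# project has a setup.py, install it so absolute imports resolve inside the pod
RUN pip install -r /my_project/requirements.txt \
    && pip install -e /my_project
```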
m
Also check that you are overriding the `dagsterApiGrpcArgs` Helm value correctly - you will need to point it to the file in your user code image that defines your `@repository`: https://github.com/dagster-io/dagster/blob/7ed9c52eb1d30d0aea99e4e9339de3d0bc5c3035/helm/dagster/values.yaml#L187
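Roughly what that looks like in values.yaml, if I remember the chart layout correctly - the deployment name, image, and file path below are placeholders for your own image and wherever your repository is defined inside it:

```yaml
userDeployments:
  enabled: true
  deployments:
    - name: "my-user-code"   # placeholder name
      image:
        repository: "account.dkr.ecr.region.amazonaws.com/dagster-test"
        tag: latest
        pullPolicy: Always
      dagsterApiGrpcArgs:
        # point at the file (or module) that defines your @repository
        - "-f"
        - "/my_project/repo.py"   # placeholder path inside the image
      port: 3030
```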
j
Awesome! Thanks everyone - `kubectl logs` is a life saver. Got it working with a simple file/folder structure, then ran into issues with relative paths and modules, but at the moment I think everything is working. I need to get a refresher on Python packages, modules, and setup.py 🙂
🙌 1