# deployment-kubernetes
v
I'm running into the following issue (shown below). I tried connecting to the dagster_user_deployment pod to run the command
`dagster instance migrate`
as suggested, but that didn't work. Has anyone dealt with this issue before? I'm assuming the cause is a version upgrade of Dagster.
d
Hi Vax, it does indeed look like you need to migrate - could you elaborate on how it didn't work when you tried to?
v
Sure. Basically these are the steps I took: 1. Connected to the dagster user deployment pod
```
kubectl exec -it dagster-release-dagster-user-deployments-******  -- /bin/bash
```
2. Ran `dagster instance migrate` but ran into issues because `/opt/dagster/dagster_home` ($DAGSTER_HOME) didn't exist. 3. Created an empty directory at `/opt/dagster/dagster_home` and reran `dagster instance migrate`.
One thing to note is that the dagster folder, the Helm values file, and other files are stored at the root.
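For reference, a sketch of that attempt with the $DAGSTER_HOME setup made explicit (the path is from the thread; the point about dagster.yaml is an assumption about why an empty directory wasn't enough):

```shell
# Sketch: run inside the user-deployment pod after kubectl exec.
# `dagster instance migrate` reads $DAGSTER_HOME/dagster.yaml to find the
# storage it should migrate; with an empty directory it falls back to
# local defaults, so the deployment's real database is never touched.
export DAGSTER_HOME=/opt/dagster/dagster_home
mkdir -p "$DAGSTER_HOME"
# A dagster.yaml pointing at the same storage the deployment uses would
# need to exist here (normally the Helm chart mounts it into its pods).
dagster instance migrate
```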
d
Could you try following the guide here? It has a different command you can run to do the migration: https://docs.dagster.io/deployment/guides/kubernetes/how-to-migrate-your-instance
v
Hey Daniel, sorry for getting back so late on this. I followed the instructions but ran into an image issue at the end
```
va6107207@MTQS0X21 % helm list
NAME           	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART         	APP VERSION
dagster-release	default  	103     	2021-09-01 11:17:54.325329 -0400 EDT	deployed	dagster-0.12.7	0.12.7
va6107207@MTQS0X21 % helm template dagster-release dagster/dagster \
    --set "migrate.enabled=true" \
    --show-only templates/job-instance-migrate.yaml \
    --values helm_values_prod.yaml \
    | kubectl apply -f -
job.batch/dagster-release-dagit-instance-migrate created
va6107207@MTQS0X21 % kubectl get pods | head -5
NAME                                                              READY   STATUS             RESTARTS   AGE
dagster-release-daemon-76b5dd448c-zlnn8                           1/1     Running            0          4m50s
dagster-release-dagit-59cccb47d-5cfwc                             1/1     Running            0          4m50s
dagster-release-dagit-instance-migrate-54hlg                      0/1     InvalidImageName   0          5s
dagster-release-dagster-user-deployments-crave-repository-5k2wf   1/1     Running            0          45h
```
This is the main error when running `kubectl describe pod`:
```
Warning  Failed         63s (x12 over 3m13s)  kubelet  Error: InvalidImageName
Warning  InspectFailed  49s (x13 over 3m13s)  kubelet  Failed to apply default image tag "docker.io/dagster/dagster-celery-k8s:": couldn't parse image reference "docker.io/dagster/dagster-celery-k8s:": invalid reference format
```
r
Looks like there’s a small error here with how we’re handling the tag when it is null: the template renders an image reference with an empty tag (`dagster/dagster-celery-k8s:`), which is an invalid reference. This should be fixed by explicitly specifying the dagit image tag as 0.12.7, your current Helm chart version. Could you try running the command below? We’ll fix this issue in 0.12.8.
```
helm template dagster-release dagster/dagster \
    --set "migrate.enabled=true" \
    --set "dagit.image.tag=0.12.7" \
    --show-only templates/job-instance-migrate.yaml \
    --values helm_values_prod.yaml \
    | kubectl apply -f -
```
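A possible way to confirm the fix worked, assuming the job name from the earlier `kubectl apply` output (these are standard kubectl commands, not from the thread):

```shell
# Check that the migrate job's pod now pulls the image and runs to completion
kubectl get job dagster-release-dagit-instance-migrate
# Inspect the migration output; the job name comes from the apply output above
kubectl logs job/dagster-release-dagit-instance-migrate
```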
v
YUP! that did the trick. The migration job ran successfully and the error posted earlier is gone. Thank you for the help @rex @daniel