# ask-community

Jyoti

08/09/2023, 11:32 AM
Hello dagster community, I am intermittently having issues connecting to the gRPC code server location, with this error:
dagster._core.errors.DagsterUserCodeUnreachableError: User code server request timed out due to taking longer than 60 seconds to complete.
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/workspace/context.py", line 605, in _load_location
    origin.reload_location(self.instance) if reload else origin.create_location()
The dagster-k8s package deployed on our server is 1.3.14, while our application code uses the 0.19.14 library line (https://github.com/dagster-io/dagster/blob/master/CHANGES.md#since-1314-core--01914-libraries), but the container picks up Python 3.7 ... is that correct? or should it point to Python 3.10?
➜ k -n dagster exec -it dagster-daemon-xxx -c dagster -- /bin/bash
root@dagster-daemon-xxx:/# python --version
Python 3.7.17
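As an aside on the error itself (not something raised in the thread): the 60 seconds in the message is Dagster's default gRPC request timeout, which can be raised on the daemon via the documented DAGSTER_GRPC_TIMEOUT_SECONDS environment variable, e.g. in the daemon container's env block:

```yaml
# Sketch: raise the daemon's gRPC request timeout from the 60s default;
# the value 300 here is illustrative, not from the thread
env:
- name: DAGSTER_GRPC_TIMEOUT_SECONDS
  value: "300"
```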

alex

08/10/2023, 2:48 PM
have you updated the helm chart? what are the tags on the images in your deployment set to?

Jyoti

08/11/2023, 6:14 AM
@alex Thanks for your response. We didn't update the helm chart; this issue occurs intermittently for us. These are the labels on the daemon pod:
labels:
    app.kubernetes.io/instance: dagster
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: dagster
    app.kubernetes.io/version: 1.3.14
    component: dagster-daemon
    deployment: daemon
    helm.sh/chart: dagster-1.3.14
    helm.toolkit.fluxcd.io/name: dagster
    helm.toolkit.fluxcd.io/namespace: dagster
and this is the deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "63"
    meta.helm.sh/release-name: dagster
    meta.helm.sh/release-namespace: dagster
  creationTimestamp: "2023-07-11T13:41:02Z"
  generation: 63
  labels:
    app.kubernetes.io/instance: dagster
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: dagster
    app.kubernetes.io/version: 1.3.14
    component: dagster-daemon
    deployment: daemon
    helm.sh/chart: dagster-1.3.14
    helm.toolkit.fluxcd.io/name: dagster
    helm.toolkit.fluxcd.io/namespace: dagster
  name: dagster-daemon
  namespace: dagster
  resourceVersion: "596064551"
  uid: 4165c00b-ad9d-43be-8bcd-927a672d2b66
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: dagster
      app.kubernetes.io/name: dagster
      component: dagster-daemon
      deployment: daemon
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        checksum/dagster-instance: a22ce4aabbc791b8c4850bb8db2dba9b4b8cd6bc70a9159a0d16f53f893d01da
        checksum/dagster-workspace: 266685774fa1fff128cae84f86b7ce73dec695ab052b3b831ddb058e2368e6fb
        kubectl.kubernetes.io/restartedAt: "2023-08-09T13:01:28+02:00"
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: dagster
        app.kubernetes.io/name: dagster
        component: dagster-daemon
        deployment: daemon
    spec:
      affinity: {}
      containers:
      - command:
        - /bin/bash
        - -c
        - dagster-daemon run -w /dagster-workspace/workspace.yaml
        env:
        - name: DAGSTER_PG_PASSWORD
          valueFrom:
            secretKeyRef:
              key: postgresql-password
              name: postgresql-secret
        - name: DAGSTER_DAEMON_HEARTBEAT_TOLERANCE
          value: "300"
        envFrom:
        - configMapRef:
            name: dagster-daemon-env
        image: docker.io/dagster/dagster-k8s:1.3.14
        imagePullPolicy: Always
        name: dagster
        resources: {}
        securityContext: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /opt/dagster/dagster_home/dagster.yaml
          name: dagster-instance
          subPath: dagster.yaml
        - mountPath: /dagster-workspace/
          name: dagster-workspace-yaml
      dnsPolicy: ClusterFirst
      initContainers:
      - command:
        - sh
        - -c
        - until pg_isready -h "dagster-hydrogen-dev-default.cwqggki6ials.eu-west-1.rds.amazonaws.com"
          -p 5432 -U dagster; do echo waiting for database; sleep 2; done;
        image: library/postgres:14.6
        imagePullPolicy: IfNotPresent
        name: check-db-ready
        resources: {}
        securityContext: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      nodeSelector:
        role: dask
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: dagster
      serviceAccountName: dagster
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: dask
        operator: Equal
        value: "true"
      volumes:
      - configMap:
          defaultMode: 420
          name: dagster-instance
        name: dagster-instance
      - configMap:
          defaultMode: 420
          name: dagster-workspace-yaml
        name: dagster-workspace-yaml
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2023-08-10T18:18:07Z"
    lastUpdateTime: "2023-08-10T18:18:07Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2023-07-11T13:41:02Z"
    lastUpdateTime: "2023-08-10T18:18:07Z"
    message: ReplicaSet "dagster-daemon-6d68cc886d" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 63
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
This is the helm chart we are using.

alex

08/11/2023, 2:17 PM
ah ya, 1.3.14 was still on Python 3.7; it got bumped to 3.10 in 1.4.0. Commit here: https://github.com/dagster-io/dagster/commit/3d9baaa7b5e201f16fd41c7741c8fbcaf3536f29
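The version-to-interpreter cutover alex describes can be sketched as a small check (a hypothetical helper for illustration, not part of Dagster's API):

```python
def image_python_minor(dagster_version: str) -> int:
    """Python 3.x minor version shipped in the official dagster-k8s image:
    3.7 for releases before 1.4.0, 3.10 from 1.4.0 onward."""
    major, minor, *_ = (int(part) for part in dagster_version.split("."))
    return 7 if (major, minor) < (1, 4) else 10

print(image_python_minor("1.3.14"))  # 7  -> matches the Python 3.7.17 seen in the pod
print(image_python_minor("1.4.0"))   # 10
```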

Jyoti

08/11/2023, 2:18 PM

alex

08/11/2023, 3:14 PM
yep, that “since 1.3.14” section communicates what changed in 1.4.0 compared to 1.3.14, while the section above it highlights all major changes since 1.3.0