Jakob
03/03/2023, 7:34 PM
I have dagit.workspace.enabled: true and the servers specified, but I want the other team to be able to add/remove servers without me being a bottleneck, so I want to move the server information to an externalConfigmap that they can manage and deploy as they develop and add new servers.
I am testing by deploying a ConfigMap that looks like the one made by the Helm chart, e.g.:
apiVersion: v1
kind: ConfigMap
metadata:
  name: dagster-workspace
  namespace: dagster-cd
data:
  workspace.yaml: |
    load_from:
      - grpc_server:
        host: "k8s-example-user-code-1"
        port: 3030
        location_name: "user-code-example"
and setting externalConfigmap: "dagster-workspace" in the dagit values, but that is not working and I'm getting CrashLoopBackOffs.
The issue appears to be that the deployment is still trying to mount the dagster-workspace-yaml ConfigMap that would be made if I had information entered in servers, and isn't being told to load the ConfigMap I'm passing in at externalConfigmap.
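A quick way to check which ConfigMap the dagit deployment actually mounts (a sketch; dagster-dagit is the default deployment name for a release called dagster, so adjust names to your install):

kubectl get deployment dagster-dagit -n dagster-cd \
  -o jsonpath='{.spec.template.spec.volumes[*].configMap.name}'

If that still prints dagster-workspace-yaml instead of dagster-workspace, the externalConfigmap value isn't reaching the template.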
daniel
03/03/2023, 7:37 PM
Jakob
03/03/2023, 7:38 PM
daniel
03/03/2023, 7:38 PM
Jakob
03/03/2023, 7:40 PM
dagster-dagit-5fc6f96447-k6h99
daniel
03/03/2023, 7:40 PM
Jakob
03/03/2023, 7:41 PM
daniel
03/03/2023, 7:41 PM
03/03/2023, 7:41 PM{{- define "dagit.workspace.configmapName" -}}
{{- $dagitWorkspace := .Values.dagit.workspace }}
{{- if and $dagitWorkspace.enabled $dagitWorkspace.externalConfigmap }}
{{- $dagitWorkspace.externalConfigmap -}}
{{- else -}}
{{ template "dagster.fullname" . }}-workspace-yaml
{{- end -}}
{{- end -}}
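Per this helper, the chart should only fall back to the generated <release>-workspace-yaml name when externalConfigmap is empty. One way to see what actually renders, assuming the chart is installed from the public dagster/dagster Helm repo and the overrides live in values.yaml:

helm template dagster dagster/dagster -f values.yaml | grep -A 3 'volumes:'

This should show the dagster-workspace-yaml volume and whichever ConfigMap name the helper resolved to.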
Jakob
03/03/2023, 7:43 PM
workspace:
  enabled: true
  servers: []
  externalConfigmap: "dagster-workspace"
...
dagster-user-deployments:
  enabled: true
  enableSubchart: false
apiVersion: v1
kind: ConfigMap
metadata:
  name: dagster-workspace
  namespace: dagster-cd
data:
  workspace.yaml: |
    load_from:
      - grpc_server:
        host: "k8s-example-user-code-1"
        port: 3030
        location_name: "user-code-example"
deployments:
  - name: "k8s-example-user-code-1"
    image:
      repository: "docker.io/dagster/user-code-example"
      tag: ~
      pullPolicy: Always
    dagsterApiGrpcArgs:
      - "-f"
      - "/example_project/example_repo/repo.py"
    port: 3030
daniel
03/03/2023, 7:45 PM
Jakob
03/03/2023, 7:46 PM
daniel
03/03/2023, 7:46 PM
(the dagit part)
dagit:
  workspace:
    enabled: true
    servers: []
    externalConfigmap: "dagster-workspace"
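With the workspace block nested under dagit as above, something like this should roll the change out (assuming a release named dagster installed from the dagster/dagster chart):

helm upgrade dagster dagster/dagster -n dagster-cd -f values.yaml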
Jakob
03/03/2023, 7:47 PM
daniel
03/03/2023, 7:48 PM
Jakob
03/03/2023, 7:49 PM
daniel
03/03/2023, 7:50 PM
Jakob
03/03/2023, 7:52 PM
daniel
03/03/2023, 7:53 PM
Jakob
03/03/2023, 7:59 PM
daniel
03/03/2023, 8:03 PM
Jakob
03/03/2023, 8:03 PM
daniel
03/03/2023, 8:03 PM
Jakob
03/03/2023, 8:04 PM
Readiness probe failed: Get "http://172.22.89.219:80/dagit_info": dial tcp 172.22.89.219:80: connect: connection refused
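A failing readiness probe here is usually just a symptom: dagit crashed before it could bind port 80, so the real error is in the container logs. A sketch, using the default deployment and container names:

kubectl logs deploy/dagster-dagit -n dagster-cd -c dagster --previous

(--previous shows the last crashed container when the current one hasn't logged anything yet.)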
daniel
03/03/2023, 8:05 PM
apiVersion: v1
kind: ConfigMap
metadata:
  name: dagster-workspace
  namespace: dagster-cd
data:
  workspace.yaml: |
    load_from:
      - grpc_server:
        host: "k8s-example-user-code-1"
        port: 3030
        location_name: "user-code-example"
try
apiVersion: v1
kind: ConfigMap
metadata:
  name: dagster-workspace
  namespace: dagster-cd
data:
  workspace.yaml: |
    load_from:
      - grpc_server:
          host: "k8s-example-user-code-1"
          port: 3030
          location_name: "user-code-example"
(maybe unrelated to the crash loop)
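The only difference between those two is how far the keys under grpc_server are indented, and that changes the parsed structure entirely. A quick way to see it, assuming python3 with PyYAML is available (dagster itself depends on PyYAML):

python3 - <<'EOF'
import yaml

# host/port at the SAME indent as grpc_server: they parse as siblings,
# and grpc_server itself comes out as null.
flat = """
load_from:
  - grpc_server:
    host: "k8s-example-user-code-1"
    port: 3030
"""

# host/port indented FURTHER than grpc_server: properly nested under it.
nested = """
load_from:
  - grpc_server:
      host: "k8s-example-user-code-1"
      port: 3030
"""

print(yaml.safe_load(flat)["load_from"][0])
# {'grpc_server': None, 'host': 'k8s-example-user-code-1', 'port': 3030}
print(yaml.safe_load(nested)["load_from"][0])
# {'grpc_server': {'host': 'k8s-example-user-code-1', 'port': 3030}}
EOF

The first shape is what the workspace schema rejects with a "you can only specify a single field" error.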
Jakob
03/03/2023, 8:06 PM
Name:             dagster-dagit-7cbd4ccdcf-cn2zw
Namespace:        dagster-cd
Priority:         0
Service Account:  dagster
Node:             ip-172-22-89-92.ec2.internal/172.22.89.92
Start Time:       Fri, 03 Mar 2023 14:01:49 -0600
Labels:           app.kubernetes.io/instance=dagster
                  app.kubernetes.io/name=dagster
                  component=dagit
                  pod-template-hash=7cbd4ccdcf
Annotations:      checksum/dagster-instance: 7dbdd4411fb97a93bc73cc4d6d2c7a516a167c32e5847ef00b1bd397487c8a74
                  checksum/dagster-workspace: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
                  kubernetes.io/psp: eks.privileged
Status:           Running
IP:               172.22.89.219
IPs:
  IP:  172.22.89.219
Controlled By:  ReplicaSet/dagster-dagit-7cbd4ccdcf
Init Containers:
  check-db-ready:
    Container ID:  docker://6c4981d742229d7168cd6402b73cbb6accec4a9dca4766c780090eb344f5f9ac
    Image:         library/postgres:14.6
    Image ID:      docker-pullable://postgres@sha256:f565573d74aedc9b218e1d191b04ec75bdd50c33b2d44d91bcd3db5f2fcea647
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      until pg_isready -h dagster-postgresql -p 5432 -U test; do echo waiting for database; sleep 2; done;
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 03 Mar 2023 14:01:50 -0600
      Finished:     Fri, 03 Mar 2023 14:02:08 -0600
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dlrg9 (ro)
Containers:
  dagster:
    Container ID:  docker://1d40ea4ed02abf5219f90b0835589f2863f926c82aaa2a741b73d5d9d31e0763
    Image:         docker.io/dagster/dagster-celery-k8s:1.1.20
    Image ID:      docker-pullable://dagster/dagster-celery-k8s@sha256:ddc3b429602d6fda0803a738bc8a52d97aab95bec0f98e3a414423f069edde9c
    Port:          80/TCP
    Host Port:     0/TCP
    Command:
      /bin/bash
      -c
      dagit -h 0.0.0.0 -p 80 -w /dagster-workspace/workspace.yaml
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 03 Mar 2023 14:02:27 -0600
      Finished:     Fri, 03 Mar 2023 14:02:29 -0600
    Ready:          False
    Restart Count:  2
    Readiness:      http-get http://:80/dagit_info delay=0s timeout=3s period=20s #success=1 #failure=3
    Environment Variables from:
      dagster-dagit-env  ConfigMap  Optional: false
    Environment:
      DAGSTER_PG_PASSWORD:  <set to the key 'postgresql-password' in secret 'dagster-postgresql-secret'>  Optional: false
    Mounts:
      /dagster-workspace/ from dagster-workspace-yaml (rw)
      /opt/dagster/dagster_home/dagster.yaml from dagster-instance (rw,path="dagster.yaml")
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dlrg9 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  dagster-instance:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dagster-instance
    Optional:  false
  dagster-workspace-yaml:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dagster-workspace-yaml
    Optional:  false
  kube-api-access-dlrg9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  56s                default-scheduler  Successfully assigned dagster-cd/dagster-dagit-7cbd4ccdcf-cn2zw to ip-172-22-89-92.ec2.internal
  Normal   Pulled     55s                kubelet            Container image "library/postgres:14.6" already present on machine
  Normal   Created    55s                kubelet            Created container check-db-ready
  Normal   Started    55s                kubelet            Started container check-db-ready
  Normal   Pulled     36s                kubelet            Successfully pulled image "docker.io/dagster/dagster-celery-k8s:1.1.20" in 97.909065ms
  Normal   Pulled     33s                kubelet            Successfully pulled image "docker.io/dagster/dagster-celery-k8s:1.1.20" in 97.175577ms
  Normal   Pulling    18s (x3 over 36s)  kubelet            Pulling image "docker.io/dagster/dagster-celery-k8s:1.1.20"
  Normal   Created    18s (x3 over 36s)  kubelet            Created container dagster
  Normal   Started    18s (x3 over 36s)  kubelet            Started container dagster
  Normal   Pulled     18s                kubelet            Successfully pulled image "docker.io/dagster/dagster-celery-k8s:1.1.20" in 91.269323ms
  Warning  Unhealthy  17s (x4 over 35s)  kubelet            Readiness probe failed: Get "http://172.22.89.219:80/dagit_info": dial tcp 172.22.89.219:80: connect: connection refused
  Warning  BackOff    1s (x5 over 31s)   kubelet            Back-off restarting failed container
daniel
03/03/2023, 8:07 PM
Jakob
03/03/2023, 8:08 PM
Defaulted container "dagster" out of: dagster, check-db-ready (init)
Traceback (most recent call last):
  File "/usr/local/bin/dagit", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.7/site-packages/dagit/cli.py", line 225, in main
    cli(auto_envvar_prefix="DAGIT")  # pylint:disable=E1120
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.7/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/dagit/cli.py", line 173, in dagit
    code_server_log_level=code_server_log_level,
  File "/usr/local/lib/python3.7/site-packages/dagster/_cli/workspace/cli_target.py", line 276, in get_workspace_process_context_from_kwargs
    code_server_log_level=code_server_log_level,
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/workspace/context.py", line 495, in __init__
    {origin.location_name: self._load_location(origin) for origin in self._origins}
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/workspace/context.py", line 504, in _origins
    return self._workspace_load_target.create_origins() if self._workspace_load_target else []
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/workspace/load_target.py", line 41, in create_origins
    return location_origins_from_yaml_paths(self.paths)
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/workspace/load.py", line 47, in location_origins_from_yaml_paths
    for k, v in location_origins_from_config(cast(Dict, workspace_config), yaml_path).items():
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/workspace/load.py", line 56, in location_origins_from_config
    workspace_config = ensure_workspace_config(workspace_config, yaml_path)
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/workspace/config_schema.py", line 39, in ensure_workspace_config
    workspace_config,
dagster._core.errors.DagsterInvalidConfigError: Errors while loading workspace config from /dagster-workspace/workspace.yaml.
    Error 1: You can only specify a single field at path root:load_from[0]. You specified ['grpc_server', 'host', 'location_name', 'port']. The available fields are ['grpc_server', 'python_file', 'python_module', 'python_package']
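This error confirms the indentation issue above: host, port, and location_name at the same indent as grpc_server parse as four sibling fields on the list entry rather than one grpc_server block, and the workspace schema only allows a single field there. To check what is actually deployed, the mounted file can be read straight out of the ConfigMap (the dot in the key needs escaping in jsonpath):

kubectl get configmap dagster-workspace -n dagster-cd -o jsonpath='{.data.workspace\.yaml}'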
daniel
03/03/2023, 8:08 PM
Jakob
03/03/2023, 8:08 PM
"dagster-workspace-yaml" though..
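If I'm reading the chart right (worth verifying against your rendered manifests rather than taking on faith), the volume inside the dagit pod is always named dagster-workspace-yaml; externalConfigmap only changes which ConfigMap that volume references. Something like this should show the referenced ConfigMap:

kubectl get deployment dagster-dagit -n dagster-cd -o jsonpath='{.spec.template.spec.volumes[?(@.name=="dagster-workspace-yaml")].configMap.name}'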
daniel
03/03/2023, 8:09 PM
Jakob
03/03/2023, 8:10 PM
daniel
03/03/2023, 8:12 PM
Jakob
03/03/2023, 8:12 PM