Jeremy Fisher
06/24/2022, 11:57 PM
google.api_core.exceptions.Forbidden: 403 GET https://storage.googleapis.com/storage/v1/b/dagster-io-manager-artifacts?fields=name&prettyPrint=false: 1045208284552-compute@developer.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket.
  File "/opt/conda/lib/python3.8/site-packages/dagster/core/errors.py", line 184, in user_code_error_boundary
    yield
  File "/opt/conda/lib/python3.8/site-packages/dagster/core/execution/resources_init.py", line 310, in single_resource_event_generator
    resource_def.resource_fn(context)
  File "/opt/conda/lib/python3.8/site-packages/dagster_gcp/gcs/io_manager.py", line 121, in gcs_pickle_io_manager
    pickled_io_manager = PickledObjectGCSIOManager(
  File "/opt/conda/lib/python3.8/site-packages/dagster_gcp/gcs/io_manager.py", line 21, in __init__
    check.invariant(self.bucket_obj.exists())
  File "/opt/conda/lib/python3.8/site-packages/google/cloud/storage/bucket.py", line 822, in exists
    client._get_resource(
  File "/opt/conda/lib/python3.8/site-packages/google/cloud/storage/client.py", line 349, in _get_resource
    return self._connection.api_request(
  File "/opt/conda/lib/python3.8/site-packages/google/cloud/storage/_http.py", line 80, in api_request
    return call()
  File "/opt/conda/lib/python3.8/site-packages/google/api_core/retry.py", line 283, in retry_wrapped_func
    return retry_target(
  File "/opt/conda/lib/python3.8/site-packages/google/api_core/retry.py", line 190, in retry_target
    return target()
  File "/opt/conda/lib/python3.8/site-packages/google/cloud/_http/__init__.py", line 494, in api_request
    raise exceptions.from_http_response(response)
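One clue in the 403 above: the denied identity is 1045208284552-compute@developer.gserviceaccount.com, i.e. the Compute Engine *default* service account, not the dagster GSA. That usually means Workload Identity was not applied to the pod that initialized the IO manager. A small sketch (the helper name is mine, not Dagster's) of that pattern check:

```python
import re

def is_compute_default_sa(email: str) -> bool:
    """True if `email` is a Compute Engine default service account
    (PROJECT_NUMBER-compute@developer.gserviceaccount.com)."""
    return re.fullmatch(r"\d+-compute@developer\.gserviceaccount\.com", email) is not None

# The identity from the 403 above is the compute default SA:
print(is_compute_default_sa("1045208284552-compute@developer.gserviceaccount.com"))  # True
# A Workload Identity GSA looks different:
print(is_compute_default_sa("dagster@my-project.iam.gserviceaccount.com"))  # False
```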
Looking at the job itself seems to show that the serviceAccount is correct, but the managedFields are not:
managedFields:
- apiVersion: batch/v1
  fieldsType: FieldsV1
  fieldsV1:
    f:spec:
      f:template:
        f:spec:
          f:serviceAccount: {} # 👈
          f:serviceAccountName: {}
# ...snip...
spec:
  template:
    metadata:
      annotations:
        cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
      creationTimestamp: null
      labels:
    spec:
      containers:
      - image: my-cool-dagster-repo
        imagePullPolicy: Always
        name: dagster
      restartPolicy: Never
      serviceAccount: dagster # 👈
      serviceAccountName: dagster
And, in fact, the helm.yaml
has the right service account 🤔
---
global:
  postgresqlSecretName: "dagster-postgresql-secret"
  dagsterHome: "/opt/dagster/dagster_home"
  # A service account name to use for this chart and all subcharts. If this is set, then
  # dagster subcharts will not create their own service accounts.
  serviceAccountName: "dagster"
I wonder if this has to do with upgrading to Dagster 0.15? I'm not quite certain, but the problem seems to have started after I upgraded.

Jeremy Fisher
06/24/2022, 11:59 PM
gsutil ls on the dagster-io-manager-artifacts bucket. It seems to only be an issue with jobs?

daniel
06/25/2022, 1:01 AM

daniel
06/25/2022, 1:02 AM

Jeremy Fisher
06/25/2022, 1:03 AM

Jeremy Fisher
06/25/2022, 1:03 AM

marcos
06/25/2022, 1:30 AM

daniel
06/25/2022, 1:40 AM

marcos
06/25/2022, 1:42 AM

daniel
06/25/2022, 1:43 AM

daniel
06/25/2022, 1:45 AM

marcos
06/25/2022, 1:46 AM

daniel
06/25/2022, 1:47 AM

daniel
06/25/2022, 1:48 AM

marcos
06/25/2022, 1:49 AM
gcloud iam service-accounts add-iam-policy-binding \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:$GOOGLE_CLOUD_PROJECT.svc.id.goog[default/dagster]" \
  dagster@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com;
kubectl annotate serviceaccount \
  dagster \
  iam.gke.io/gcp-service-account=dagster@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com;
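For reference, the --member flag in the gcloud command above follows the fixed pattern serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/K8S_SERVICE_ACCOUNT]. Note it hardcodes the default namespace: if the dagster service account lives in a different namespace, the binding won't match it. A tiny helper (mine, for illustration only) that builds the string:

```python
def wi_member(project_id: str, namespace: str, ksa: str) -> str:
    """Build the Workload Identity member string that binds the Kubernetes
    service account `ksa` in `namespace` to a Google service account."""
    return f"serviceAccount:{project_id}.svc.id.goog[{namespace}/{ksa}]"

print(wi_member("my-project", "default", "dagster"))
# serviceAccount:my-project.svc.id.goog[default/dagster]
```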
marcos
06/25/2022, 1:49 AM

daniel
06/25/2022, 1:50 AM

daniel
06/25/2022, 1:51 AM

marcos
06/25/2022, 1:51 AM

Jeremy Fisher
06/27/2022, 6:43 PM

daniel
06/27/2022, 6:46 PM
dagster-user-deployments:
  enabled: true
  deployments:
    - name: "k8s-example-user-code-1"
      image:
        repository: "docker.io/dagster/user-code-example"
        tag: latest
        pullPolicy: Always
      dagsterApiGrpcArgs:
        - "--python-file"
        - "/example_project/example_repo/repo.py"
      port: 3030
      includeConfigInLaunchedRuns:
        enabled: false
didn't fix it, then I'm not totally sure what's going on - changing the default there from false to true was the only significant change to the Helm chart that I'm aware of between 0.14.20 and 0.15.0

daniel
06/27/2022, 6:47 PM

Jeremy Fisher
06/27/2022, 6:48 PM
dagster-user-deployments:
  enabled: true
  enableSubchart: true
  deployments:
    - name: "dg-workspace"
      image:
        repository: "gcr.io/foo/dagster"
        tag: "1.0.62"
        pullPolicy: Always
      envSecrets:
        - name: run-worker-secrets
        - name: slack-creds
      annotations: {}
      nodeSelector:
        iam.gke.io/gke-metadata-server-enabled: "true"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: "api"
                    operator: "In"
                    values: ["yes"]
      tolerations:
        - key: "api"
          operator: "Equal"
          value: "yes"
          effect: "NoSchedule"
      podSecurityContext: {}
      securityContext: {}
      resources: {}
      includeConfigInLaunchedRuns:
        enabled: false # 👈
      replicaCount: 1
      livenessProbe:
        initialDelaySeconds: 0
        periodSeconds: 20
        timeoutSeconds: 3
        successThreshold: 1
        failureThreshold: 3
      startupProbe:
        enabled: true
        initialDelaySeconds: 0
        periodSeconds: 10
        timeoutSeconds: 3
        successThreshold: 1
        failureThreshold: 3
      service:
        annotations: {}
      dagsterApiGrpcArgs:
        - "--python-file"
        - "/opt/dagster/dagster_home/dg/repo.py"
        - "-p"
        - "4000"
      port: 4000
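A note on the nodeSelector iam.gke.io/gke-metadata-server-enabled: "true" in the config above: Workload Identity only works on nodes running the GKE metadata server, so if pods launched for runs don't carry this selector (e.g. because includeConfigInLaunchedRuns is false), they may land on nodes where the pod falls back to the node's default service account. One way to see which identity a pod actually has is to query the metadata server from inside it; a hedged stdlib-only sketch (only reachable from inside GCP):

```python
import urllib.request

METADATA_ROOT = "http://metadata.google.internal/computeMetadata/v1/"

def metadata_request(path: str) -> urllib.request.Request:
    """Build a GCE/GKE metadata-server request; the Metadata-Flavor
    header is mandatory or the server rejects the call."""
    return urllib.request.Request(METADATA_ROOT + path,
                                  headers={"Metadata-Flavor": "Google"})

# Inside a pod this returns the service-account email the pod is using:
req = metadata_request("instance/service-accounts/default/email")
# with urllib.request.urlopen(req, timeout=2) as resp:
#     print(resp.read().decode())
```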
daniel
06/27/2022, 6:51 PM

Jeremy Fisher
06/27/2022, 6:54 PM

Jeremy Fisher
06/27/2022, 6:55 PM
f:serviceAccount: {}
f:serviceAccountName: {}
Jeremy Fisher
06/27/2022, 6:55 PM

daniel
06/27/2022, 6:56 PM

Jeremy Fisher
06/27/2022, 6:59 PM

Jeremy Fisher
06/27/2022, 7:00 PM

Jeremy Fisher
06/27/2022, 10:45 PM

Jeremy Fisher
06/27/2022, 10:46 PM