# ask-community
Hello everyone! (cc @Justin Albinet) I want to deploy pipelines that use a GCP bucket as the IO manager. From what I understand, for this to work I need to set the `GOOGLE_APPLICATION_CREDENTIALS` environment variable to the path of a credentials.json file (which contains the credentials of a service account with the right permissions to read/write the specified bucket). Since I am deploying the solution on Kubernetes (GKE), is there a way to avoid baking a credentials.json file into the user-deployments images I deploy into Kubernetes? (Like using Workload Identity, for example.)
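For context (assumed background, not stated in the thread): Google client libraries resolve credentials via Application Default Credentials (ADC), which checks the `GOOGLE_APPLICATION_CREDENTIALS` environment variable first. A minimal stdlib sketch of what that lookup amounts to — the mount path is hypothetical:

```python
import os
from typing import Optional

# ADC checks this env var first when resolving credentials.
KEY_ENV = "GOOGLE_APPLICATION_CREDENTIALS"

def credentials_path() -> Optional[str]:
    """Return the key-file path GCP client libraries would pick up, if set."""
    return os.environ.get(KEY_ENV)

# Hypothetical mount point for a key file inside a pod:
os.environ[KEY_ENV] = "/var/secrets/google/key.json"
print(credentials_path())  # → /var/secrets/google/key.json
```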
This is more of a GCP question than a Dagster question. GKE's intended route for this is to use k8s Secrets, which can securely mount the service account JSON keys into your pods' filesystems: https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#importing_credentials_as_a_secret
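A sketch of that approach, following the GKE tutorial linked above. All names here (the `gcp-key` Secret, the pod/container names, the image, the mount path) are hypothetical placeholders:

```yaml
# First create the Secret from the key file:
#   kubectl create secret generic gcp-key --from-file=key.json=./credentials.json
apiVersion: v1
kind: Pod
metadata:
  name: user-code-deployment
spec:
  containers:
    - name: user-code
      image: my-registry/user-code:latest  # hypothetical image
      env:
        # Point the Google client libraries at the mounted key file
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
      volumeMounts:
        - name: gcp-key
          mountPath: /var/secrets/google
          readOnly: true
  volumes:
    - name: gcp-key
      secret:
        secretName: gcp-key
```

The key file never has to be baked into the image; it only exists in the Secret and in the pod's mounted filesystem.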
that step and the following step show how to do this
Ok, thank you! My question was more "is there a way to use Workload Identity instead of specifying a .json credential file inside the pod?" But I think I know how to manage it now, thank you 😄
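For reference, the Workload Identity route avoids key files entirely: pods get short-lived tokens from the GKE metadata server instead. A sketch of the setup, where `PROJECT`, `CLUSTER`, `NAMESPACE`, `dagster-ksa`, and `dagster-gsa` are all placeholder names:

```shell
# 1. Enable Workload Identity on the cluster (once):
gcloud container clusters update CLUSTER \
  --workload-pool=PROJECT.svc.id.goog

# 2. Create a Kubernetes service account (KSA) for the pods:
kubectl create serviceaccount dagster-ksa --namespace NAMESPACE

# 3. Allow the KSA to impersonate the GCP service account (GSA)
#    that has read/write access to the bucket:
gcloud iam service-accounts add-iam-policy-binding \
  dagster-gsa@PROJECT.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT.svc.id.goog[NAMESPACE/dagster-ksa]"

# 4. Annotate the KSA so GKE knows which GSA it maps to:
kubectl annotate serviceaccount dagster-ksa --namespace NAMESPACE \
  iam.gke.io/gcp-service-account=dagster-gsa@PROJECT.iam.gserviceaccount.com
```

Any pod that runs under `dagster-ksa` then authenticates as `dagster-gsa` automatically, with no `GOOGLE_APPLICATION_CREDENTIALS` file needed.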
And should the secret credential file be mounted inside every user-deployments pod that uses it? Because I am using the Helm chart (with some personal additions) to deploy Dagster, and I don't see an option to mount secrets as volumes on the user deployments.
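If the chart version in use exposes `volumes`/`volumeMounts` on user deployments (this depends on the Dagster Helm chart version, so treat it as an assumption), a hypothetical `values.yaml` fragment could look like:

```yaml
dagster-user-deployments:
  deployments:
    - name: my-user-code          # hypothetical deployment name
      image:
        repository: my-registry/user-code
        tag: latest
      volumes:
        - name: gcp-key
          secret:
            secretName: gcp-key   # Secret created from credentials.json
      volumeMounts:
        - name: gcp-key
          mountPath: /var/secrets/google
          readOnly: true
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/key.json
```

If the chart version doesn't support these fields, the same effect can be had by templating the deployment manifests directly.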
It depends on what kind of executor you're using, since ultimately it's the executor that needs access to its IO managers.