# ask-community
a
Hi team, I am running dagster in an AWS kubernetes cluster, saving assets to s3. Is there a way for me to load these assets from within my cluster (e.g. in a pod in the same cluster running jupyter notebook) - I can do this locally (if I am running dagster locally) by doing
```
defs.load_asset_value(AssetKey(...
```
but can I do this so that it hits the S3 bucket when loading assets? If I do this in the cluster, it still tries to load the asset from the local file system, and I get the error:
```
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpxsjt2zsd/storage/eta_deltas_third_party/2023-06-01'
```
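For context, the default filesystem IO manager appears to build the object path from a base directory, a `storage` segment, the asset key path, and the partition key, which matches the path in the error above. A rough sketch of that layout (the helper name is hypothetical; the S3 IO manager is assumed to use an analogous key under its `s3_prefix` instead of a local base directory):

```python
import os

def asset_storage_path(base_dir, asset_key_path, partition_key=None):
    """Hypothetical helper mirroring the path layout seen in the error:
    <base_dir>/storage/<asset key parts...>/<partition_key>."""
    parts = [base_dir, "storage", *asset_key_path]
    if partition_key is not None:
        parts.append(partition_key)
    return os.path.join(*parts)

print(asset_storage_path("/tmp/tmpxsjt2zsd", ["eta_deltas_third_party"], "2023-06-01"))
# /tmp/tmpxsjt2zsd/storage/eta_deltas_third_party/2023-06-01
```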
How can I tell it to look in S3? I am setting my io_manager to `ConfigurablePickledObjectS3IOManager` - is there anything else I may be missing? Thanks!
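One likely cause is that `load_asset_value` resolves the IO manager from the `Definitions` object it is called on, so the `Definitions` built in the notebook must carry the same S3-backed `io_manager` resource as the deployment; otherwise it falls back to the default filesystem IO manager. A hedged sketch, assuming `dagster` and `dagster-aws` are installed in the notebook pod and that the bucket, prefix, and asset key below are placeholders to adapt:

```python
from dagster import AssetKey, Definitions
from dagster_aws.s3 import ConfigurablePickledObjectS3IOManager, S3Resource

# Rebuild Definitions in the notebook with the SAME io_manager the deployment
# uses, pointed at the S3 bucket where the assets were materialized.
defs = Definitions(
    assets=[...],  # import the same asset definitions the deployment uses
    resources={
        "io_manager": ConfigurablePickledObjectS3IOManager(
            s3_resource=S3Resource(),
            s3_bucket="my-dagster-assets",  # placeholder
            s3_prefix="dagster",            # placeholder
        ),
    },
)

# Load one partition of the partitioned asset directly from S3.
value = defs.load_asset_value(
    AssetKey("eta_deltas_third_party"),
    partition_key="2023-06-01",
)
```

The pod also needs AWS credentials that can read the bucket (e.g. via its service account role) for the load to succeed.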
y
My memory is not as crisp on this anymore since we now run in ECS instead of EKS. But as I recall the secret sauce for that was this (from my git history 😉 )
```shell
## Add AWS EBS CSI driver
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace default \
  --cluster ${CLUSTER_NAME} \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --role-only \
  --role-name AmazonEKS_EBS_CSI_DriverRole

eksctl create addon --name aws-ebs-csi-driver --cluster ${CLUSTER_NAME} \
  --service-account-role-arn arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKS_EBS_CSI_DriverRole --force
```
As I recall, that was what was needed to access the S3 bucket in the way you describe above.
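For S3 access specifically (as opposed to EBS volumes), the analogous step would be an IAM role for the pod's own service account via IRSA. A hedged sketch along the same lines as the commands above; the service account name and attached policy are placeholders to adapt:

```shell
# Hypothetical example: grant the notebook pod's service account S3 read access
# via IRSA (IAM Roles for Service Accounts). Names are placeholders.
eksctl create iamserviceaccount \
  --name jupyter-sa \
  --namespace default \
  --cluster ${CLUSTER_NAME} \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve
```

The notebook pod would then run under `serviceAccountName: jupyter-sa` so boto3 picks up the role credentials automatically.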