# announcements
After my failed attempt with filesystem storage above 😃 I tried another option: deploying with a local self-hosted MinIO S3-like storage, but I got stuck at providing credentials. JFYI, I'll post it again: there is a bug in boto3 that makes it impossible to connect to an endpoint_url containing underscores (as mine did), so I just renamed my containers. I wanted to use s3_storage from dagster_aws and initialized it with my own S3 endpoint_url. It connected, but pipeline execution failed with

```
botocore.exceptions.NoCredentialsError: Unable to locate credentials
```
I tried running

```
dagster-aws init
```

inside the container, but it requires me to provide an AWS region, which is obviously irrelevant to me:

```
botocore.exceptions.NoRegionError: You must specify a region.
```
how can I provide s3 connection creds in this case?
I actually made it work by hard-patching my creds here: How should I make it cleaner?
I tried with
but no luck
Oh yes, I finally got it: just passing them through env vars works.
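For reference, a minimal sketch of what "passing them through env" can look like in a docker-compose file (the service name, image, and values below are placeholders, not from the thread):

```yaml
services:
  dagster:                          # hypothetical service name
    image: my-dagster-image         # placeholder image
    environment:
      # picked up by boto3's default credential chain,
      # so no region or hard-coded creds are needed in code
      AWS_ACCESS_KEY_ID: your-id-here
      AWS_SECRET_ACCESS_KEY: your-secret-here
```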
Hi @matas, may I ask how you specified the correct endpoint? Since it's not S3, I don't need a region but an endpoint, something like https://s3.noobaa.svc.cluster.local in our case. But if I set this as region-name, I get the following error when starting

```
dagster-aws init
```

which is obviously wrong:

```
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "<>
```

Or didn't you use dagster-aws init for that?
I've just put it in my dagster.yaml like here and then passed the credentials through the env variables:

```
environment:
  AWS_ACCESS_KEY_ID: your-id-here
  AWS_SECRET_ACCESS_KEY: your-secret-here
```
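For anyone landing here later: a sketch of the kind of storage config fragment being discussed, assuming a dagster_aws version whose S3 storage config accepts an `endpoint_url` key (the bucket name and URL are placeholders; the credentials themselves come from the env variables, not from this file):

```yaml
storage:
  s3:
    config:
      s3_bucket: my-bucket    # placeholder bucket name
      # custom S3-compatible endpoint (MinIO, NooBaa, etc.);
      # avoid underscores in the hostname due to the boto3 bug above
      endpoint_url: https://s3.noobaa.svc.cluster.local
```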