# ask-community
y
Hi team, we are facing the below error every time we do a new code deployment and try to materialize only the downstream asset. Can anyone help us out? Thanks!
s
I believe what's happening here is that your asset is being stored on the local disk of the machine that it's running on, but, when you redeploy and run the downstream asset, it's on a different machine. You can solve this by using an IO manager that stores data in the cloud instead of the local filesystem - e.g. `s3_pickle_io_manager`.
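For reference, a minimal sketch of wiring that up might look like the following (the bucket name, prefix, and asset names are placeholders, and it assumes a Dagster version that has `Definitions`):

```python
from dagster import Definitions, asset
from dagster_aws.s3 import s3_pickle_io_manager, s3_resource


@asset
def upstream():
    return [1, 2, 3]


@asset
def downstream(upstream):
    # Loaded from S3 by the IO manager, so it still works after a redeploy
    # onto a different machine
    return sum(upstream)


defs = Definitions(
    assets=[upstream, downstream],
    resources={
        # Store asset outputs as pickles in S3 instead of on local disk
        "io_manager": s3_pickle_io_manager.configured(
            {"s3_bucket": "my-dagster-bucket", "s3_prefix": "dagster-io"}
        ),
        # s3_pickle_io_manager needs an "s3" resource to talk to AWS
        "s3": s3_resource,
    },
)
```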
y
Hi @sandy, that makes a lot of sense! Can you give us an example of how to set that up? Especially where to pass the S3 creds. Thanks!
s
There are some code examples here: https://docs.dagster.io/_apidocs/libraries/dagster-aws#dagster_aws.s3.s3_pickle_io_manager. By default, the creds in `~/.aws/credentials` will be used.
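If you need to point the `s3` resource at a specific AWS profile or region rather than relying on the default boto3 credential chain, a sketch like this should work (the profile and region values below are placeholders):

```python
from dagster_aws.s3 import s3_resource

# Configure the s3 resource explicitly instead of using the default
# credential chain (~/.aws/credentials, env vars, instance role, ...)
configured_s3 = s3_resource.configured(
    {
        "profile_name": "my-dagster-profile",  # a profile in ~/.aws/credentials
        "region_name": "us-east-1",
    }
)

# Then pass it as the "s3" resource alongside s3_pickle_io_manager, e.g.
# resources={"io_manager": ..., "s3": configured_s3}
```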