# deployment-kubernetes
Hey, if I am deploying Dagster with Helm and I want to use AWS S3 instead of MinIO, do I specify the bucket somewhere? In the example https://docs.dagster.io/deployment/guides/kubernetes/deploying-with-helm#step-4-set-up-amazon-s3
the bucket is only mentioned when setting up MinIO; the AWS S3 section suggests that credentials alone will be enough to make it work, but I am guessing Helm somehow needs to know the name of the bucket in both cases.
It seems that the tutorial assumes the user has defined the code to use that test bucket. Is there an option to set the IO manager globally for all jobs?
AFAIK you can specify a default IO manager for all assets ( https://docs.dagster.io/concepts/io-management/io-managers#applying-io-managers-to-assets ), but you can’t set a default for all jobs. You will have to specify it for every job/op. Alternatively, you can create a utility function to do that for you.
Hi Szymon, the `Definitions` API will in the future (1.3 onward) bind resources from `Definitions` to your jobs, including the default IO manager. In 1.2.x, you’re able to do so by wrapping your jobs in the `BindResourcesToJobs` utility:

```python
defs = Definitions(
    jobs=BindResourcesToJobs([my_job_1, my_job_2]),
    resources={
        "io_manager": my_custom_default_io_manager,
    },
)
```
Thank you for the input ben! Good to know
@Andrea Giardini @ben Other than setting up the S3 IO manager - is there anything else I need to be able to run my jobs in separate pods?