Ah ok, Scala is a different topic; I'm not sure how to approach that best, curious to hear the Dagster team's thoughts on this.
But just to be sure: with my examples, or with Dagster in general, you can define your resource (e.g. I'm using pyspark here) and then use Spark simply via `context.resources.pyspark.spark_session.read.json(s3_path)` in all your solids, which is a pretty powerful thing, rather than doing a spark-submit every time. The connection details you define once as part of your environment (dev, test, prod) YAML configs. But yeah, if Scala is a necessity, then that's a whole other thing.
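Something like this, as a rough sketch using the legacy solid/pipeline APIs (the solid name, bucket path, and config values are placeholders I made up, not from an actual pipeline):

```python
from dagster import ModeDefinition, pipeline, solid
from dagster_pyspark import pyspark_resource


@solid(required_resource_keys={"pyspark"})
def load_events(context):
    # Every solid that requires the resource gets the same SparkSession,
    # so there is no per-step spark-submit.
    spark = context.resources.pyspark.spark_session
    return spark.read.json("s3://my-bucket/events/")  # placeholder path


@pipeline(mode_defs=[ModeDefinition(resource_defs={"pyspark": pyspark_resource})])
def my_pipeline():
    load_events()
```

and the connection details then live once in the per-environment YAML, roughly:

```yaml
resources:
  pyspark:
    config:
      spark_conf:
        spark.executor.memory: "2g"
```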