Hi all,
I'm looking at implementing a data pipeline at work, and Dagster seems really cool. One thing we would like to be able to do is publish a data artifact from one workflow and then consume it in another.
e.g.
Write an import worker that reads from S3 and publishes a dataset.
Multiple pipelines read from this dataset (and others) and rerun when any of them changes, without needing to care about the import step.
Ideally, these pipelines could be defined separately rather than in one big file.
Is that the sort of model Dagster supports (I know there are materialized views), or will I be fighting it all the time?
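For reference, here's roughly the shape I have in mind, as a minimal sketch using Dagster's asset API. The asset names are made up and the S3 read is stubbed out, so treat it as a picture of the dependency structure rather than a real implementation:

```python
from dagster import Definitions, asset


@asset
def raw_dataset():
    # Import step: in a real pipeline this would read from S3
    # (e.g. via boto3) and return the dataset; stubbed out here.
    return [1, 2, 3]


@asset
def derived_report(raw_dataset):
    # Downstream consumer: taking raw_dataset as a parameter declares the
    # dependency, even if this asset lives in a different module/file.
    return sum(raw_dataset)


defs = Definitions(assets=[raw_dataset, derived_report])
```

My assumption is that in a real setup the downstream assets could live in separate modules (or even separate code locations) and that something like an eager auto-materialization policy would rerun them whenever the upstream dataset is rematerialized, but I may be misunderstanding how that's meant to be used.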