I was wondering if I could get some support on how best to structure a Dagster project. My goal is a pipeline that pulls data from an API, applies some transformations with pandas, and saves the result to a PostgreSQL DB. I want to apply this pipeline to about 50 different tickers (e.g. SPY, TLT). Ideally, I'd like to avoid looping over the tickers inside a single pipeline run, so that if one ticker fails I can manually re-run the pipeline for just that ticker. A stripped-down sketch of the single-ticker version is just below.
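To make that concrete, here's roughly what I'm picturing for a single ticker (`fetch_prices` is just a stand-in for my real API call, and the pandas step is heavily simplified):

```python
import pandas as pd
from dagster import asset


def fetch_prices(ticker: str) -> pd.DataFrame:
    # stand-in for the real API call
    ...


@asset
def spy_prices() -> pd.DataFrame:
    raw = fetch_prices("SPY")
    # the real pandas transformations are more involved; simplified here
    return raw.dropna()
```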
I have two questions:
a) Once I’ve built the pipeline, how can I feed it the N ticker values so that it runs identically for each? Also, could I run them in parallel? That’d be nice! (The first sketch below shows what I’m guessing at.)
b) I’m a bit confused about how to add Postgres for actually saving the data to the DB. Is it a matter of adding another asset, or using an IO manager? An example or a link to a GitHub repo would be great. (The second sketch below is my rough guess.)
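For a), my best guess from the docs is a `StaticPartitionsDefinition` with one partition per ticker, so each ticker can be materialized (and re-run) on its own. Rough sketch of that guess, assuming a recent Dagster version and reusing the `fetch_prices` placeholder from the first snippet:

```python
import pandas as pd
from dagster import AssetExecutionContext, StaticPartitionsDefinition, asset

# one partition key per ticker (truncated here; the real list has ~50 entries)
TICKERS = ["SPY", "TLT"]
tickers_partitions = StaticPartitionsDefinition(TICKERS)


@asset(partitions_def=tickers_partitions)
def ticker_prices(context: AssetExecutionContext) -> pd.DataFrame:
    ticker = context.partition_key   # e.g. "SPY" when the SPY partition runs
    raw = fetch_prices(ticker)       # same API placeholder as in the first sketch
    # simplified pandas transformation
    return raw.dropna()
```

Is that the right direction, and is launching a backfill (one run per partition) the intended way to get the parallelism I mentioned?

For b), the part I can't picture is whether the Postgres write should live inside the asset itself or in an IO manager. Here's a rough, untested guess at a pandas-to-Postgres IO manager using SQLAlchemy (all names hypothetical, and the per-partition write logic is glossed over):

```python
import pandas as pd
from dagster import ConfigurableIOManager, Definitions, InputContext, OutputContext
from sqlalchemy import create_engine


class PostgresPandasIOManager(ConfigurableIOManager):
    connection_string: str       # e.g. "postgresql://user:pass@host:5432/dbname"
    schema_name: str = "public"

    def handle_output(self, context: OutputContext, obj: pd.DataFrame) -> None:
        # write the asset's DataFrame to a table named after the asset
        engine = create_engine(self.connection_string)
        table = context.asset_key.path[-1]
        # NOTE: "replace" drops the whole table; a partitioned asset would need
        # something smarter, e.g. delete the partition's rows and then append
        obj.to_sql(table, engine, schema=self.schema_name, if_exists="replace", index=False)

    def load_input(self, context: InputContext) -> pd.DataFrame:
        # read the table back when a downstream asset asks for it
        engine = create_engine(self.connection_string)
        table = context.asset_key.path[-1]
        return pd.read_sql_table(table, engine, schema=self.schema_name)


defs = Definitions(
    assets=[ticker_prices],  # the partitioned asset from the previous sketch
    resources={"io_manager": PostgresPandasIOManager(connection_string="postgresql://...")},
)
```

Does that split (asset = fetch + transform, IO manager = persistence) match how Dagster is meant to be used, or is it more idiomatic to just write to Postgres directly inside the asset?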
Thanks! 🙏