Hey stranger!
I'd think of it as two assets: asset one being how to make the parquet file and the other as how it's written to snowflake.
One way to look at it is that these kinds of assets represent how the data is handed off between locations, in this case S3 and Snowflake.
---ignore everything below here. possible risk of over-complicating the mental model---
That's one way to view it, but there's nuance here. If you want more robustness around the compute engine -> S3 step, you could split it further: one asset for the local Parquet file and another for the file in S3. It really depends on your needs, e.g. if writing to S3 fails sometimes, splitting it in two and adding a RetryPolicy to the S3 asset would let you retry the upload without needing to redo the extract, too.
tl;dr: typically just the S3 and Snowflake asset case would work.