Thank you for your reply. If I understand correctly, I need to process the whole set of data in one solid, then pass it to the next, and so on. However, I see people using Dagster with Celery and other frameworks to execute solids in parallel, and I actually consider that the biggest advantage of Dagster over simply chaining regular functions. If there is no way to chunk big data and process the chunks in parallel, is the common parallel-processing use case actually running different operations on the same data at the same time, rather than making a big input faster by splitting it up?

If I, for example, first split one big file into 50 smaller files, could I run 50 instances of the same pipeline in parallel, each processing a different file (see the sketch below)? I ask because in our case running different tasks in parallel would not speed things up much; chunking the data and processing each chunk in parallel is what would give us the real gain in speed.
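To make it concrete, here is roughly the pattern I have in mind, a minimal sketch using Dagster's dynamic outputs. `split_file` and `process_chunk` are hypothetical solids, the chunk paths are made up, and depending on the Dagster version `DynamicOutput` / `DynamicOutputDefinition` may need to be imported from `dagster.experimental` instead of `dagster`:

```python
from dagster import DynamicOutput, DynamicOutputDefinition, pipeline, solid


@solid(output_defs=[DynamicOutputDefinition(str)])
def split_file(context):
    # Split one big input file into 50 smaller files and emit each
    # path as a dynamic output so downstream solids can fan out.
    for i in range(50):
        path = f"/tmp/chunk_{i}.csv"  # hypothetical chunk path
        yield DynamicOutput(value=path, mapping_key=f"chunk_{i}")


@solid
def process_chunk(context, path: str):
    # Process a single chunk; each mapped copy of this solid is an
    # independent step that an executor could schedule in parallel.
    context.log.info(f"processing {path}")


@pipeline
def chunked_pipeline():
    # Fan out: one copy of process_chunk per dynamic output.
    split_file().map(process_chunk)
```

The idea would then be to run this with the multiprocess or Celery executor (selected via the `execution:` section of the run config, if I understand the docs correctly) so that the 50 mapped `process_chunk` steps actually execute concurrently rather than one after another.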