If I may jump in here: we have both of the requirements mentioned above, re-execution and parallelism. That's one reason we have separate jobs connected together with sensors. Another is that each job has a different granularity. Think of one job that collects a massive zip, where the outputs are 1 to x files, each of which spawns n following jobs.
It works kind of well, as we can run lots of jobs (1000s of them) in parallel. But we lose pretty much all overview, and re-execution gets harder too, since we can only re-execute a single job, not the whole pipeline (i.e. all the jobs together). If you understand what I mean: is there a way to have a "master" DAG for the overview but still be able to run at different granularities?
I have a feeling the separation of jobs and software-defined assets could help, but I guess dynamic DAGs would also be heavily needed, and that makes it harder: if one file errored out, all the other jobs would end up with an error status as well, whereas in the current approach each file has its own job. Not sure if that makes sense. But I loved how you, @marcos, presented one DAG with dynamic ops at the community meeting. Roughly what I have in mind is sketched below.
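Just to make my mental model concrete, here's a rough, untested sketch of the shape I'm picturing, assuming Dagster's dynamic output API (`DynamicOut` / `DynamicOutput` / `.map()`); `extract_files`, `process_file`, and the file paths are just placeholder names:

```python
from dagster import DynamicOut, DynamicOutput, job, op


@op(out=DynamicOut())
def extract_files(context):
    # placeholder: unpack the massive zip and emit one dynamic output per file
    for idx, path in enumerate(["file_a.csv", "file_b.csv"]):
        yield DynamicOutput(path, mapping_key=f"file_{idx}")


@op
def process_file(context, path: str):
    # placeholder per-file work; each mapped step shows up separately in the run
    context.log.info(f"processing {path}")
    return path


@job
def master_pipeline():
    # fan out: one mapped process_file step per file coming out of the zip
    extract_files().map(process_file)
```

My hope would be that if one mapped step fails, the other mapped steps still run to completion and I could re-execute just the failed ones, while still getting the whole pipeline as one run for the overview, but I'm not 100% sure that's how it behaves in practice.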