Hello Dagster Experts, Is there way to retrieve c...
# ask-community
Hello Dagster Experts, is there a way to retrieve the child run IDs when a parent/root run has failed and been retried under a new run ID (the child run ID), through the Python GraphQL client? The motivation is that we want to launch a certain job for date A through date C in order. Before running date B, we need to make sure the date A job has finished successfully. We get the first run ID of the date A job through the Python client's submit_job_execution and can monitor its status using get_run_status. But if that run fails and spawns a new run ID, we don't have a way to keep tracking date A's job. We could set 'max_retries' to 0 and call submit_job_execution again on failure, but in that case we would re-run the whole DAG for date A, whereas Dagster's auto-retry only re-runs the failed tasks. Or is there any other feature we could leverage that I may not be aware of? For example, since we want to run a range of dates for company A sequentially, does Dagster have any capability to treat them as a backfill and run them in order? If you have any better suggestion, please let me know! Thanks :)
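For reference, a minimal sketch of the sequential date-by-date flow described above, using the documented DagsterGraphQLClient methods submit_job_execution and get_run_status. The host/port, job name, run_config shape, and date list are placeholders, not taken from the question.

```python
import time

from dagster import DagsterRunStatus
from dagster_graphql import DagsterGraphQLClient

client = DagsterGraphQLClient("localhost", port_number=3000)

# Placeholder dates and job name; replace with the real partition keys / job.
dates = ["2024-01-01", "2024-01-02", "2024-01-03"]


def wait_for_run(run_id: str, poll_seconds: int = 30) -> DagsterRunStatus:
    """Poll the run until it reaches a terminal state, then return that status."""
    terminal = {
        DagsterRunStatus.SUCCESS,
        DagsterRunStatus.FAILURE,
        DagsterRunStatus.CANCELED,
    }
    while True:
        status = client.get_run_status(run_id)
        if status in terminal:
            return status
        time.sleep(poll_seconds)


for date in dates:
    run_id = client.submit_job_execution(
        "my_date_job",  # hypothetical job name
        run_config={"ops": {"load_date": {"config": {"date": date}}}},  # placeholder config
    )
    status = wait_for_run(run_id)
    if status != DagsterRunStatus.SUCCESS:
        # If run retries are enabled, this failure may have spawned a child run
        # with its own run ID; see the tag-based lookup sketch further down
        # before treating the date as permanently failed.
        raise RuntimeError(f"Run {run_id} for {date} ended with status {status}")
```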
Hi Wonjae - did this work with retries? Using the GraphQL client can be useful in this situation. If it's partitioned data, the upstream partitions should refresh first and then the downstream ones, as long as both assets are in a partitioned job.
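On tracking the retry chain specifically: when run retries are enabled, Dagster tags the automatically created run with dagster/parent_run_id and dagster/root_run_id, so a child run can be found by filtering runs on those tags. The Python client's public methods don't expose a run search, so this sketch posts a raw GraphQL query to the webserver. The endpoint URL is a placeholder, and the exact schema fields assumed here (runsOrError, a RunsFilter tags argument) may differ across Dagster versions, so verify them in your deployment's GraphiQL explorer first.

```python
import requests

# Hypothetical endpoint; point this at your Dagster webserver's /graphql route.
DAGSTER_GRAPHQL_URL = "http://localhost:3000/graphql"

# Assumed schema shape: runsOrError with a tag filter. Confirm the field names
# against your version's GraphiQL before relying on them.
CHILD_RUNS_QUERY = """
query ChildRuns($rootRunId: String!) {
  runsOrError(filter: {tags: [{key: "dagster/root_run_id", value: $rootRunId}]}) {
    ... on Runs {
      results {
        runId
        status
      }
    }
  }
}
"""


def find_child_runs(root_run_id: str) -> list[tuple[str, str]]:
    """Return (runId, status) pairs for runs spawned by retrying root_run_id."""
    resp = requests.post(
        DAGSTER_GRAPHQL_URL,
        json={"query": CHILD_RUNS_QUERY, "variables": {"rootRunId": root_run_id}},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()["data"]["runsOrError"]
    return [(run["runId"], run["status"]) for run in payload.get("results", [])]
```

With something like this, the orchestration loop above can, on FAILURE, look up the latest child run and poll that run ID instead of re-submitting the whole job for date A.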