# ask-community
u
Hi everyone, is it possible to re-execute a job in an op?
j
Hey @黃柏璁, that’s a bit of an anti-pattern. What are you trying to achieve? There is likely another way to do something similar.
u
Hi @jamie, for some failed jobs I’d like to re-execute them with a different run_config and a customized asset selection, like ++asset_a where asset_a is the failed step.
j
In that case, I think a run_failure_sensor would be a good fit for you: https://docs.dagster.io/concepts/partitions-schedules-sensors/sensors#run-failure-sensor. You can then customize the config and use the asset_selection parameter on RunRequest to achieve what you’re looking for: https://docs.dagster.io/_apidocs/schedules-sensors#dagster.RunRequest
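Roughly something like this (just a sketch — my_job, my_retry_job, and asset_a are placeholders for your own jobs and asset, and this assumes run_failure_sensor accepts the request_job parameter the way run-status sensors do):
```python
from dagster import (
    AssetKey,
    RunFailureSensorContext,
    RunRequest,
    run_failure_sensor,
)

from my_project.jobs import my_job, my_retry_job  # hypothetical imports


@run_failure_sensor(monitored_jobs=[my_job], request_job=my_retry_job)
def retry_failed_asset_sensor(context: RunFailureSensorContext):
    # Request a new run of the retry job with a different run_config and a
    # narrowed asset selection (here just asset_a, the failed step).
    yield RunRequest(
        run_key=context.dagster_run.run_id,  # one retry per failed run
        run_config={
            # placeholder config shape; adjust to whatever asset_a accepts
            "ops": {"asset_a": {"config": {"some_param": "new_value"}}}
        },
        asset_selection=[AssetKey("asset_a")],
    )
```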
u
Hi @jamie, is it possible to link the new run to the failed run, like setting root_run_id or parent_run_id? In my case we have lots of jobs to execute, and it’s hard to check whether every failed job has a successful retry run if there isn’t a link between them.
j
I think the best you could do is set tags on the launched run. I’m not seeing another way to directly set a root_run_id or parent_run_id.
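In the sensor it could look something like this (the tag keys here are just example names, not special Dagster tags):
```python
from dagster import AssetKey, RunFailureSensorContext, RunRequest, run_failure_sensor

from my_project.jobs import my_job, my_retry_job  # hypothetical imports


@run_failure_sensor(monitored_jobs=[my_job], request_job=my_retry_job)
def tagged_retry_sensor(context: RunFailureSensorContext):
    failed_run = context.dagster_run
    # Tag the retry run with the failed run's id so the two runs can be
    # correlated later in the UI or via the runs API.
    yield RunRequest(
        run_key=failed_run.run_id,
        asset_selection=[AssetKey("asset_a")],
        tags={
            "retry_of_run_id": failed_run.run_id,     # link back to the failed run
            "retried_job_name": failed_run.job_name,  # which job failed
        },
    )
```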
u
Got it, thanks