#ask-community

黃柏璁

05/19/2023, 2:17 PM
Hi everyone, is it possible to re-execute a job in an op?

jamie

05/19/2023, 5:17 PM
Hey @黃柏璁, that’s a bit of an anti-pattern. What are you trying to achieve? There is likely another way to do something similar.

黃柏璁

05/21/2023, 1:51 AM
Hi @jamie, for some failed jobs I’d like to re-execute them with a different run_config and a customized asset selection like `++asset_a`, where `asset_a` is a failed step.

jamie

05/22/2023, 1:42 PM
In that case, I think a `run_failure_sensor` would be a good fit for you: https://docs.dagster.io/concepts/partitions-schedules-sensors/sensors#run-failure-sensor . You can then customize the config and use the `asset_selection` parameter on `RunRequest` to achieve what you’re looking for: https://docs.dagster.io/_apidocs/schedules-sensors#dagster.RunRequest

黃柏璁

05/24/2023, 3:48 PM
Hi @jamie, is it possible to link the new run to the failed run, e.g. by setting root_run_id or parent_run_id? In my case we have lots of jobs to execute, and without a link between them it’s hard to check that every failed job has a successful retry.

jamie

05/24/2023, 3:55 PM
I think the best you could do is set tags on the launched run. I’m not seeing another way to directly set a root_run_id or parent_run_id.

黃柏璁

05/24/2023, 3:57 PM
Got it, thanks