# ask-community

Manga Dhatrika

05/23/2022, 7:46 PM
Hi, I am trying to set up a sensor so that whenever a job fails, the same job is rerun with a different offset. I am using the run_failure_sensor for one of my jobs, but when there is a failure it does not rerun the job. Could someone help me with this?


05/23/2022, 8:17 PM
Hi, you probably forgot to activate the sensor. You can do that from your instance in Dagit, or add
default_status=DefaultSensorStatus.RUNNING
to your sensor definition so that it is running by default.

Manga Dhatrika

05/23/2022, 8:29 PM
OK, but how can I pass a dynamic offset on each failure? What I am trying to achieve: suppose I am extracting 1000 records from an API. The first run extracts 502 records and then fails. I would like the sensor to pass a parameter with the number of records already loaded (502), so that the rerun starts at offset 503 and extracts and loads the remaining records,
instead of rerunning the whole job again.
This is the code I am using; it still did not rerun the job:
```python
@run_failure_sensor(
    job_selection=[extract_and_load_data],  # so the sensor only reacts to failures of this job
    default_status=DefaultSensorStatus.RUNNING,
)
def rerun_db_fetch_sensor(context: RunFailureSensorContext):
    # logic to get the rows to start on
    yield RunRequest(
```
In the dagster daemon logs I see:
2022-05-23 16:39:33 -0400 - dagster.daemon.SensorDaemon - INFO - Completed a reaction request for run 0b6ec3dd-7095-4664-8711-2ed2168108d2: Sensor "rerun_db_fetch_sensor" acted on run status FAILURE of run 0b6ec3dd-7095-4664-8711-2ed2168108d2.
I am new to Dagster, so I don't know what I am doing wrong.
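The resume arithmetic described in the message above can be sketched as a plain helper (`next_offset` is a hypothetical name, not part of the thread's code); the resulting value is what would be fed into the rerun's configuration:

```python
def next_offset(records_loaded: int) -> int:
    """Hypothetical helper: offset at which a resumed extraction should start.

    With 0-based offsets, a run that loaded 502 records (offsets 0..501)
    should resume at offset 502, i.e. at the 503rd record.
    """
    return records_loaded

# example from the thread: the first run loaded 502 of 1000 records, then failed
resume_at = next_offset(502)
```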


05/25/2022, 4:48 PM
Hi Manga, sorry for the late response here; give me a minute to look into this for you.
So it turns out that run_failure_sensor currently does not support yielding RunRequests like other sensors do. We are aware of this issue and that our docs are confusing about it, and we have an issue tracking it. You can expect this to be corrected in the next few weeks.