# ask-community
s
Hey all! I’m trying to use a very basic `run_status_sensor`, which keeps failing with gRPC `details = "Deadline Exceeded"` errors. I’m aware that sensors usually have a 60s timeout, so I suspect the cause is that the underlying `instance.get_event_records` query takes too long. I verified that’s the case by running the snippet below, and indeed, it takes around 3 minutes:
```python
import dagster
import time
from dagster import EventRecordsFilter, DagsterEventType

instance = dagster.DagsterInstance.get()

# Fetch the single most recent RUN_FAILURE event from the past 24 hours
run_failure_records = instance.get_event_records(
    EventRecordsFilter(
        event_type=DagsterEventType.RUN_FAILURE,
        after_timestamp=time.time() - 86400,
    ),
    ascending=False,
    limit=1,
)
```
Am I right that this is likely caused by us having a lot of pipelines, or something like that? Any suggestions on how to make that query run faster (e.g. limiting `get_event_records` to only search for records of a specific pipeline)? Or, more generally, how can we get `run_status_sensor`s working again?
d
Hi Surya - what version of Dagster are you using? There were some recent changes that would improve the performance of this particular query.
s
0.13.19
@daniel 🙂
I also realised that the query is much faster when you specify `after_cursor`. This does the job for now inside a normal sensor, but it’s a little cumbersome since the `run_status_sensor` won’t work out of the box.
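Roughly what I’m doing now, as a minimal sketch (names like `my_downstream_job` and the run-key scheme are just placeholders, and the last seen `storage_id` is kept in the sensor cursor):
```python
from dagster import DagsterEventType, EventRecordsFilter, RunRequest, sensor

from my_project import my_downstream_job  # placeholder job to kick off on failures


@sensor(job=my_downstream_job)
def run_failure_sensor(context):
    # Resume from the last event-log storage id we processed, stored in the sensor cursor
    after_cursor = int(context.cursor) if context.cursor else None

    records = context.instance.get_event_records(
        EventRecordsFilter(
            event_type=DagsterEventType.RUN_FAILURE,
            after_cursor=after_cursor,
        ),
        ascending=True,
        limit=10,
    )

    for record in records:
        # React to each failed run; the run_key keeps requests idempotent
        yield RunRequest(run_key=f"failure-{record.storage_id}")

    if records:
        context.update_cursor(str(records[-1].storage_id))
```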
d
Yeah, if upgrading to the latest version is an option, I believe we made that query (even without the cursor) a lot more performant in 0.14.