# ask-community

Denis Arkhipov

04/27/2023, 7:13 AM
Hi team! I have an issue with `run_status_sensor`, which I use to trigger a job based on the `DagsterRunStatus` of another job (scheduled every 2 minutes). I expected the triggered job to run roughly every 2 minutes, but it actually runs only every ~17 minutes, and the sensor reports skips before it starts. I'm using Dagster 1.1.7 in prod.
```python
import time

from dagster import (
    DagsterRunStatus,
    DefaultSensorStatus,
    RunRequest,
    run_status_sensor,
)


@run_status_sensor(
    run_status=DagsterRunStatus.STARTED,
    minimum_interval_seconds=30,
    name="enprompt_netareports_run_status_sensor",
    description="""
    A run status sensor, which requests an enprompt netareports data fetching job
    while the related preprocessing workflow is running.
    """,
    monitored_jobs=[enappsys_live_preprocessing_job],
    default_status=DefaultSensorStatus.RUNNING
    if RUN_ENV == "prod"
    else DefaultSensorStatus.STOPPED,
    request_job=enprompt_netareports_job,
)
def enprompt_netareports_sensor(context):
    context.log.info("LIVE Power data fetching started")
    uts = int(time.time())
    for chart in ENPROMPT_CHARTS:
        yield RunRequest(
            run_key=f"enprompt_netareports_{chart}_{uts}",
            run_config=enprompt_netareports_job_config(chart),
        )
```
Any ideas why it's skipping and then suddenly starts?

chris

04/27/2023, 5:21 PM
Is the monitored job actually succeeding every two minutes? Is the run status sensor only evaluating once every 17 minutes and then skipping, or is it evaluating but not yielding a run request? My hunch is that the run key is deduplicating the run requests down to once every ~17 minutes. I'm wondering what happens if you get rid of the run key.
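For context on chris's hunch: Dagster sensors skip any `RunRequest` whose `run_key` has already been processed on a previous tick, while a `run_key` of `None` is never deduplicated. Here is a minimal pure-Python sketch of that dedup behavior (hypothetical helper names, not Dagster's actual implementation):

```python
# Sketch of run-key deduplication as a sensor daemon might apply it:
# a request whose run_key was already seen on an earlier tick is skipped.

def submit_requests(requests, seen_run_keys):
    """Return the requests that would actually launch runs.

    requests: list of (run_key, run_config) tuples; run_key may be None.
    seen_run_keys: set of run keys from earlier ticks (mutated in place).
    """
    launched = []
    for run_key, run_config in requests:
        if run_key is not None and run_key in seen_run_keys:
            continue  # duplicate run_key -> request is skipped
        if run_key is not None:
            seen_run_keys.add(run_key)
        launched.append((run_key, run_config))
    return launched


seen = set()
# First tick: both keys are new, so both requests launch.
tick1 = submit_requests([("chart_a_100", {}), ("chart_b_100", {})], seen)
# Second tick reuses the same keys, so nothing launches.
tick2 = submit_requests([("chart_a_100", {}), ("chart_b_100", {})], seen)
# Requests with run_key=None are never deduplicated.
tick3 = submit_requests([(None, {}), (None, {})], seen)
```

So if the timestamp embedded in the run key were somehow repeating between ticks, the duplicate keys would silently suppress the new requests, which would look exactly like the sensor "skipping".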