Igor
09/04/2022, 6:12 PM
```python
@job
def first():
    pass

@job
def second():
    pass

@repository
def repository():  # note: this rebinds the name 'repository', shadowing the imported decorator
    return [first, second]
```
Tamas Foldi
09/05/2022, 5:21 AM

Igor
09/05/2022, 11:00 AM
```python
@run_status_sensor(
    run_status=DagsterRunStatus.SUCCESS,
    request_job=first,
)
def sensor_job_second(context):
    ???
```
How to run the job?

Stephen Bailey
09/05/2022, 5:56 PM
```python
@run_status_sensor(
    run_status=DagsterRunStatus.SUCCESS,
    request_job=first,
)
def sensor_job_second(context):
    config = {}
    yield RunRequest(run_config=config)
```
you can also set a run_key if you want to make sure that the first job does not spin up second more often than some period. for example, if you wanted at most one run per day, you could set `run_key=datetime.datetime.now().strftime("%Y-%m-%d")`
Igor
09/06/2022, 12:05 PM

Stephen Bailey
09/06/2022, 12:34 PM
```python
my_job.execute()
```
or something like that. but a sensor doesn't say "run this job", it says, "hey dagster, put this job on the queue", or you may have it say, "hey dagster, for each of these N records, put a job on the queue", which is why it's the `yield RunRequest` syntax. having a sensor do any serious compute or direct scheduling is definitely an anti-pattern.

Igor
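The "one request per record" pattern Stephen describes can be sketched without Dagster; plain dicts stand in for `RunRequest`, and `new_files` is a made-up example input:

```python
def requests_for_records(records):
    # Generator sketch: the sensor body does no real work itself; it just
    # yields one request per record and lets the framework enqueue them.
    for record in records:
        yield {"run_key": record["id"]}  # per-record de-duplication key

new_files = [{"id": "a.csv"}, {"id": "b.csv"}]
reqs = list(requests_for_records(new_files))
assert [r["run_key"] for r in reqs] == ["a.csv", "b.csv"]
```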
09/06/2022, 4:34 PMconfig = job_second
Stephen Bailey
09/06/2022, 4:49 PM@sensor(job=job_second)
is the place where you define which job you are going to create when the sensor trips. the `config` is the settings you want for that particular, for example in the docs linked above:
yield RunRequest(
run_key=filename,
run_config={
"ops": {"process_file": {"config": {"filename": filename}}}
},
)
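The nested run_config shape in that snippet can be built and checked in plain Python; the op name `process_file` comes from the docs example above, and `file_run_config` is a hypothetical helper:

```python
def file_run_config(filename):
    # Op-level config nests under "ops" -> <op name> -> "config".
    return {"ops": {"process_file": {"config": {"filename": filename}}}}

cfg = file_run_config("data.csv")
assert cfg["ops"]["process_file"]["config"]["filename"] == "data.csv"
```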
Igor
09/06/2022, 5:30 PM
```python
@asset_sensor(asset_key=AssetKey('first_done'), job=second)
def sensor_second(context, asset_event):
    return RunRequest(run_key=None)
```
And emit the key `first_done` in the first job.