# ask-community
w
Another problem: I read about "sensor" and "hook" in the docs, but I don't know how to get the pipeline input or config from a failed solid?
y
hi @wangm23456 if this is for a solid hook, you can get the config of the failed solid via `context.solid_config`
https://docs.dagster.io/_apidocs/execution#dagster.HookContext.solid_config
w
Does `context.solid_config` have only the failed solid's config?
y
Yes. Could you elaborate on the “pipeline input” - what did you mean by it?
w
Can I get the pipeline's run_config in this run from this solid?
for example: _The `run_config` used by `execute_pipeline()` and `execute_pipeline_iterator()` has the following schema:_
y
Technically you could, but we don’t expose the entire run_config as a public property on the HookContext. This is how we construct `context.solid_config`: https://github.com/dagster-io/dagster/blob/master/python_modules/dagster/dagster/core/execution/context/system.py#L620
If you want to set some “global config” on a pipeline and have your hook code access it, you can model it as a resource (recommended) and access it via `context.resources.<resource_key>`
w
ok
2. Can I get the `run_config` from a `run_id`, like in the Dagit UI?
y
did you mean getting the `run_config` used in a historical run based on a given `run_id`?
w
yes
y
in Dagit UI, you can go to the Runs page (i.e. /runs) and find a “View Configuration” option by clicking the button on the right of each run
w
any api in solid or hook or sensor?
y
in a sensor or solid, you can get it by querying the persisted `PipelineRun` from the run storage:
pipeline_run = context.instance.get_run_by_id(run_id)
run_config = pipeline_run.run_config
w
thank you❤️
d
Hi, I was looking into monitoring past pipelines and stumbled upon this thread. Is there a way to get past run ids from the run storage? i.e. create a solid that loops through all the past pipeline run ids on a Dagit instance and does something with them. Are these stored within a storage loader resource we could access somehow?
y
Hi @Dylan Bienstock we have a sensor example that does something similar: https://docs.dagster.io/concepts/partitions-schedules-sensors/sensors#pipeline-failure-sensor where it queries the run storage for failed runs, i.e. `context.instance.get_runs`
you can access that via the solid's context too.
d
@yuhan Thanks! This is exactly what I am looking for
y
If your primary use case is monitoring, I’d recommend using Sensors rather than Solids as sensors are the mechanism built for use cases like monitoring.
We are currently working on first-class support for monitoring use cases, which will reduce the boilerplate you have to write in that example. Will keep you posted!
Here’s the tracking issue https://github.com/dagster-io/dagster/issues/3613 and feel free to comment if you have other use cases 🙂
d
I was considering using sensors and taking advantage of the success/failure hooks. Our primary goal is to have a centralized way to view, once in the morning, the success/failure of our pipelines that ran throughout the previous day (potentially posting this data to Slack). I appreciate the help
Although it might be best to have our pipelines report when they succeed/fail throughout the day and then just check this in the morning
y
Got it. For your primary goal, the example in the docs should cover most of it. The difference is that sensors are a long-running process that listens to the event/run storage throughout the day. If you’d like to have it scheduled every morning, you can go with the solid approach and use `@daily_schedule` for it (see Schedules). But either way, `context.instance` can get you there 🙂
d
Sounds good. This is exactly what I need.