# dagster-feedback
We're leaning really heavily into an event-driven architecture by triggering pipelines off of [...] and [...] events. One of the things we've encountered is that it's challenging to test these sensors (a known problem), but also to trigger all of the downstream events when we need to manually kick things off. We've created a [...] job that takes an arbitrary asset key and generates a materialization event for it, and it's been helpful for testing and other purposes. This might be a nice thing for Dagster to provide out of the box (attached to sensors, maybe?).
This is something we've been thinking about with regard to a sensor testing UI. What do you do for event-triggered sensors (i.e. run failure, asset reconciliation)? Do you essentially want the ability to run the sensor ad hoc?
I think the challenge of testing an individual sensor is one of the problems, and probably the first/main one. But we are now in a world where, if [...] is materialized, then 5 or more sensors get triggered. Having to go to each of those sensors and trigger them manually defeats the point of having the sensors in the first place -- the best remediation is to rematerialize the asset and let downstream processes pick it up. I'm thinking more of a flow where you "Mark as Materialized" and could manually (or programmatically) generate a materialization event. Use cases include:

- marking an asset as materialized when a failure happens and a manual fix has been applied
- marking an asset as materialized when the bulk of the computation has been done and something non-essential fails in the op (I have hit this with adding [...], for example)
- allowing an external system to mark an asset as materialized (import/push events)

In any case, I do think the focus is not on the sensor, but on the ability to manage the Dagster event stream more directly.
cc @daniel @alex who might also be interested in this particular use case