# announcements
Hey folks, I had a play recently with dagster and really like what you're doing - it definitely fills a gap in the Python ecosystem! I have a question about your plans for expectations, though. Currently it's quite hard to tell whether an expectation has failed (it logs at level INFO, and in dagit you have to click the little arrow on the `<solid>.compute` box on the right of the runs window before you can see that one of the expectations failed for that solid). Do you have any plans for, say, an aggregated report of failed expectations?

Beyond that, we have a computation graph where, if an expectation fails, we don't want to propagate that solid's results downstream because they might be bogus. The risk is that we end up automatically showing incorrect results to the user, which we'd obviously prefer to avoid! What is your recommendation for this situation? Are expectations the right way to go, or should we just raise `Failure` instead? One thing I tried was to apply a decorator to the method before the `@solid` decorator, which "listens" for `ExpectationResult` events and does a `raise Failure` if it sees one that failed. Have you thought about adding something to the `ExpectationResult` class that signals the runner to raise on failed expectations?
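
For concreteness, here is roughly what that wrapper looked like (just a minimal sketch against the generator-style solid API, with made-up names like `fail_on_failed_expectations` and `checked_solid`; the exact decorator stacking may need adjusting for the dagster version in use):

```python
import functools

from dagster import ExpectationResult, Failure, Output, solid


def fail_on_failed_expectations(compute_fn):
    """Re-yield everything the wrapped compute function yields, but raise
    Failure as soon as a failed ExpectationResult comes through."""

    @functools.wraps(compute_fn)
    def wrapper(context, *args, **kwargs):
        for event in compute_fn(context, *args, **kwargs):
            if isinstance(event, ExpectationResult) and not event.success:
                raise Failure(
                    description="Expectation '{}' failed".format(event.label)
                )
            yield event

    return wrapper


@solid
@fail_on_failed_expectations
def checked_solid(context, df):
    # Yield the expectation before the output; the wrapper turns a failed
    # expectation into a hard Failure, so downstream solids never see the
    # (possibly bogus) value.
    yield ExpectationResult(success=len(df) > 0, label="non_empty")
    yield Output(df)
```

The downside is that every solid needs the extra decorator, which is part of why a built-in signal on `ExpectationResult` would be nicer.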