# ask-community
g
I recently had a case of a source system delivering bad data: only a small number of the expected rows were actually made available. Assuming that Dagster has an asset observation for the key
record_count
a) How could I easily fail the materialization of this asset when far fewer than the expected records arrive, by looking at previous asset materializations? b) I am aware Dagster is thinking about data quality checks. How could these support or ease the implementation of such stateful record-count checks?
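For concreteness, I record that observation from an op roughly like this. This is a minimal sketch: the asset key and the row-counting helper are placeholders, and it assumes a recent Dagster version where AssetObservation accepts a metadata dict.

```python
from dagster import AssetKey, AssetObservation, op


def count_rows_in_source() -> int:
    # Placeholder for the real query against the source system.
    return 42


@op
def observe_source_table(context):
    # Count the rows the source system actually delivered.
    row_count = count_rows_in_source()

    # Record the count as an AssetObservation so later runs can compare against it.
    context.log_event(
        AssetObservation(
            asset_key=AssetKey("source_table"),
            metadata={"record_count": row_count},
        )
    )
```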
j
Hi @geoHeil. For a), one pretty simple option would be to throw an exception in the asset if the row count is too low. You could query the Dagster instance for the old asset observations and compare. We don’t have an exact example of querying for asset observations, but the sensor in this example shows querying for an asset materialization; you could replace that with a request for ASSET_OBSERVATIONS.
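Something along these lines, as a rough sketch rather than a tested recipe: it queries the instance for the latest ASSET_OBSERVATION event for the asset, reads a record_count metadata entry, and raises if the new count is far below the previous one. The asset name, the 50% threshold, the load_rows_from_source helper, and the exact attribute path to the observation metadata are assumptions you may need to adjust for your Dagster version.

```python
from dagster import (
    AssetKey,
    DagsterEventType,
    EventRecordsFilter,
    Failure,
    Output,
    asset,
)

# Assumed threshold: fail if fewer than half of the previously observed records arrive.
MIN_FRACTION_OF_PREVIOUS = 0.5


def load_rows_from_source():
    # Placeholder for the real ingestion logic.
    return []


@asset
def source_table(context):
    rows = load_rows_from_source()
    record_count = len(rows)

    # Ask the instance for the most recent observation event of this asset.
    records = context.instance.get_event_records(
        EventRecordsFilter(
            event_type=DagsterEventType.ASSET_OBSERVATION,
            asset_key=AssetKey("source_table"),
        ),
        limit=1,
    )
    if records:
        # The attribute path to the observation may differ slightly between Dagster versions.
        observation = (
            records[0].event_log_entry.dagster_event.asset_observation_data.asset_observation
        )
        previous_count = observation.metadata["record_count"].value
        if record_count < MIN_FRACTION_OF_PREVIOUS * previous_count:
            raise Failure(
                description=(
                    f"Only {record_count} records arrived, but the previous observation "
                    f"reported {previous_count}; failing the materialization."
                )
            )

    # Attach the new count so it is also visible on this materialization.
    return Output(rows, metadata={"record_count": record_count})
```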
g
I know how I could do this myself, but I would love to see some more native features for this use case (stateful asset data-quality checks).
j
For sure. I know @sandy is thinking about quality checks, so I'm tagging him here to see this. In the meantime, if you want some help setting up the non-native version of this, let me know and we can walk through it.