# ask-community

Charlie Bini

02/10/2023, 6:13 PM
I'm seeing my asset reconciliation sensor creating duplicate runs for the same asset. I'm thinking that while the initial run is going, the upstream asset completes and then while the downstream asset op is initializing, the sensor ticks again and creates another reconciliation run for that asset. Does that sound plausible?
The workaround in my mind is that I should increase the duration between sensor ticks
these are dbt assets btw, created using `load_assets_from_dbt_manifest`

owen

02/10/2023, 6:36 PM
@Charlie Bini It's definitely possible, although the sensor should take in-progress runs into account when determining whether to kick off a new run. Are you using freshness policies here? If so, are they on the dbt assets, or the downstream asset(s) (or both)?

Charlie Bini

02/10/2023, 6:37 PM
not using freshness policies
I think I have a good example where the log timestamps indicate that the latter run started before the initial run finished
the `load_assets_from_dbt_manifest` assets don't have some kind of magic run launcher, right? they still rely on something like a sensor to detect that they're stale and initiate a run?

owen

02/10/2023, 6:39 PM
yep no magic there

Charlie Bini

02/10/2023, 6:39 PM
ok so I'm not duplicating them with my sensor at least

owen

02/13/2023, 11:43 PM
hey just checking back in on this! were you able to find those logs?

Charlie Bini

02/15/2023, 7:59 PM
Hey @owen! thanks for following up. I was out the past couple days. Here are the logs for 2 runs of the same asset and partition from the same sensor
the asset key in question is `kippnewark/dbt/powerschool/stg_powerschool__assignmentscore`
run 0b14301f is enqueued at 10:11:36 for both upstream and downstream assets
run 4852c503 is enqueued at 10:14:19 for the duplicate run
that asset's step worker starts at 10:13:53 in the original run, but execution doesn't start until 10:14:19
I thought they might just be duplicate log entries, looking at the timestamps, but they're running in different pods
my thought is that the checks are looking only for running materializations, whereas they should also look for enqueued materializations
@sandy here's the details for the issue I mentioned in our DM
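The hypothesis above can be sketched in plain Python. This is a hypothetical, simplified model, not Dagster's actual internals: the `Run` class, the status strings, and both function names are invented for illustration. It shows how a check that only considers running materializations lets a still-queued run slip through, while a check that also considers enqueued runs does not.

```python
from dataclasses import dataclass


@dataclass
class Run:
    """A toy stand-in for a launched run (hypothetical model)."""
    asset_key: str
    status: str  # e.g. "QUEUED", "STARTED", "SUCCESS"


def buggy_needs_run(asset_key: str, runs: list[Run]) -> bool:
    # Only runs that have actually STARTED block a new launch,
    # so an asset whose run is still QUEUED looks idle.
    return not any(
        r.asset_key == asset_key and r.status == "STARTED" for r in runs
    )


def fixed_needs_run(asset_key: str, runs: list[Run]) -> bool:
    # Treat both QUEUED and STARTED runs as in flight.
    return not any(
        r.asset_key == asset_key and r.status in ("QUEUED", "STARTED")
        for r in runs
    )


runs = [Run("stg_powerschool__assignmentscore", "QUEUED")]
print(buggy_needs_run("stg_powerschool__assignmentscore", runs))  # True: a duplicate run is launched
print(fixed_needs_run("stg_powerschool__assignmentscore", runs))  # False: the queued run blocks it
```

On the next sensor tick, the buggy check sees no started run for the asset and launches a second one, which matches the two-run behavior described in the logs above.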

owen

02/17/2023, 12:24 AM
ah @Charlie Bini, I looked into this and you're 100% correct on this. We do not factor in runs that are in the QUEUED state when determining the state of a given asset. There is a bit of risk here, as runs can sometimes get stuck in the QUEUED state, but I think we can filter out old runs manually, and this should be a fairly simple fix. Thanks for the report 🙂
just merged in a fix for this, it'll be in next week's release