# ask-community
Arun:
Hi team, I am thinking about the backfill implementations. Currently, all my sensors look for the latest asset key materialization (generated by upstream deps) for the current partition and create a RunRequest when they see a run key that differs from the last run key stored in the cursor. But it looks like, while running backfills on the upstream pipelines, these downstream sensors might not pick up the assets, and I might have to trigger backfills on the downstream pipelines separately. I am currently planning to remove the partition filter in the sensors when calling `events_for_asset_key`, but I am still not sure how to get the partition info from the asset key to pass into the run request. The returned `EventLogEntry` does not have the partition info. Is there another method I can use?
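Roughly, this is what each downstream sensor does today (a simplified sketch; `upstream_asset_key` and `downstream_pipeline` are placeholder names, and the partition filter is omitted here):

```python
from dagster import AssetKey, RunRequest, sensor

upstream_asset_key = AssetKey("my_upstream_asset")  # placeholder name

@sensor(pipeline_name="downstream_pipeline")  # placeholder name
def downstream_sensor(context):
    # Latest materialization event for the upstream asset
    # (the real sensors also filter this query to the current partition).
    events = context.instance.events_for_asset_key(upstream_asset_key, limit=1)
    if not events:
        return
    _record_id, event = events[0]
    run_key = event.run_id  # the run that produced the materialization
    if run_key == context.cursor:
        return  # nothing new since the last tick
    context.update_cursor(run_key)
    yield RunRequest(run_key=run_key, run_config={})
```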
Phil:
Hi Arun… the asset partition information is available on the `AssetMaterialization` itself. You should be able to access it like this:
```python
# `instance` is a DagsterInstance (e.g. context.instance in a sensor)
# and `my_asset_key` is the AssetKey of the upstream asset.
events = instance.events_for_asset_key(my_asset_key, limit=1)
if not events:
    return
record_id, event = events[0]
materialization = event.dagster_event.step_materialization_data.materialization
partition = materialization.partition
```
I should also note that as of 0.12.0, the instance method `events_for_asset_key` is deprecated in favor of `get_event_records`. Using the new API, the above code would look like this:
```python
from dagster import DagsterEventType, EventRecordsFilter

records = instance.get_event_records(
    EventRecordsFilter(
        event_type=DagsterEventType.ASSET_MATERIALIZATION,
        asset_key=my_asset_key,
    ),
    limit=1,
)
if not records:
    return
# Each record is an EventLogRecord; its event log entry carries the
# materialization (and the partition, if the asset is partitioned).
event = records[0].event_log_entry
materialization = event.dagster_event.step_materialization_data.materialization
partition = materialization.partition
```
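To connect this back to the original question, here is a rough sketch of how the partition could be threaded into the run request inside the downstream sensor. It is one possible approach, not the only one: the `downstream_pipeline` / `my_upstream_asset` names are placeholders, and using the record's `storage_id` as the cursor and the `dagster/partition` tag to carry the partition are choices you can swap out for your own run_config wiring.

```python
from dagster import (
    AssetKey,
    DagsterEventType,
    EventRecordsFilter,
    RunRequest,
    sensor,
)

upstream_asset_key = AssetKey("my_upstream_asset")  # placeholder name

@sensor(pipeline_name="downstream_pipeline")  # placeholder name
def downstream_sensor(context):
    # Resume from the last event log record we processed.
    last_storage_id = int(context.cursor) if context.cursor else None
    records = context.instance.get_event_records(
        EventRecordsFilter(
            event_type=DagsterEventType.ASSET_MATERIALIZATION,
            asset_key=upstream_asset_key,
            after_cursor=last_storage_id,
        ),
        limit=10,
        ascending=True,
    )
    for record in records:
        event = record.event_log_entry
        materialization = event.dagster_event.step_materialization_data.materialization
        partition = materialization.partition  # None for unpartitioned assets
        yield RunRequest(
            run_key=f"{event.run_id}:{partition}",  # unique per materialization
            run_config={},  # inject the partition into your run config here
            tags={"dagster/partition": partition} if partition else None,
        )
        context.update_cursor(str(record.storage_id))
```

Because the cursor advances by storage id rather than by run key, a backfill of the upstream pipeline produces one materialization record per partition, and each one yields its own run request for the downstream pipeline.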
Arun:
Thanks Phil. Yeah, I saw the deprecation note in the changelog and will migrate to `get_event_records`. Thanks again for sharing the code.