# ask-community
a
Hi y'all. Have a partitioned asset question (a couple, maybe). I just created my first daily partitioned asset that makes an API call with a start and end date (the same day, in the daily partition case) and writes the extracts both to CSV and to a raw-layer table in DuckDB. First question: a couple of days ago, 2 partitions failed, and it appears that this data is missing from the API (the data is shown on the source's site, but hitting the API with any tool for those two dates yields no results). Is there a way I can clear these failures from the Dagster UI and manually write those records to DuckDB + CSV instead? Second question: when materializing a backfill for a daily partitioned asset over, say, a year — if I "Launch as single run", will that make one API call for the full year range and then write each of the 365 partitions on the Dagster side? Or will it make 365 API calls? If the latter, is there a way to accomplish the former, to minimize the number of calls that need to be made? Thanks!
c
Don't have the answer to your first question, but for your second: to avoid making 365 API calls, your asset needs to implement one of `asset_partitions_time_window`, `asset_partition_key_range`, `asset_partition_keys`, `asset_partitions_time_window_for_output`, `asset_partition_key_range_output`, `asset_partitions_time_window_for_input`, or `asset_partition_key_range_for_input`. I had a similar question yesterday: https://dagster.slack.com/archives/C01U954MEER/p1692899863924449
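The idea behind those properties can be sketched without Dagster at all: instead of looping over every daily partition key (one API call each), collapse the run's key range into a single start/end window and make one call covering it. This is a minimal stdlib sketch of that idea — the function name, the daily cadence, and the half-open window convention are assumptions for illustration, not Dagster's API:

```python
from datetime import date, timedelta


def window_from_key_range(start_key: str, end_key: str) -> tuple[str, str]:
    """Collapse a daily partition key range into one [start, end) window.

    Mirrors what a time-window property gives you inside the asset body:
    read the range once, then make a single API call that covers it.
    """
    start = date.fromisoformat(start_key)
    # Daily partitions: the window ends the day after the last key.
    end = date.fromisoformat(end_key) + timedelta(days=1)
    return start.isoformat(), end.isoformat()


# One call for a year-long backfill instead of 365:
start, end = window_from_key_range("2022-01-01", "2022-12-31")
# records = api_fetch(start_date=start, end_date=end)  # hypothetical API call
```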
a
Thanks @Caelan Schneider. I'll have a look at those.
c
Hi Alex. There isn't a way to manually mark a partition as successful. As a workaround, you could create an op that yields an `AssetMaterialization` object for each asset partition you want to mark as successful.
a
Thanks @claire! I'll have a look.