Emilja Dankevičiūtė
07/12/2023, 12:10 PM
We run a dbt step as part of our pipeline, downstream of a few Airbyte syncs. Every once in a while, one Airbyte op fails while the others succeed. In such a case, the dbt op doesn't start at all, even though not all of the tables generated by dbt would be affected. Is there some simple way to mitigate this, or will we need to code something custom? We use define_asset_job with a custom schedule here.
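A toy sketch of the semantics being described (plain Python, not Dagster's actual API; the function and variable names here are made up for illustration): when dbt runs as a single op fanning in from every Airbyte sync, one failed sync blocks the entire dbt step, whereas a per-asset split would only skip the models whose own upstreams failed.

```python
# Toy model of the two behaviors. sync_results maps each Airbyte sync
# to whether it succeeded; dbt_models maps each dbt model to the syncs
# it depends on. (Illustrative only -- not Dagster code.)

def run_pipeline(sync_results, dbt_models):
    """Single-op behavior: the dbt step starts only if *all* upstream
    syncs succeeded, so one failure skips every model."""
    if all(sync_results.values()):
        return set(dbt_models)
    return set()

def run_pipeline_per_asset(sync_results, dbt_models):
    """Per-asset behavior (the split discussed in issue 12070): build
    each model whose own upstream syncs all succeeded."""
    return {
        model
        for model, deps in dbt_models.items()
        if all(sync_results[dep] for dep in deps)
    }
```

With one sync failed, the single-op version builds nothing, while the per-asset version still builds the unaffected models.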
I can see that discussions of splitting the dbt op into multiple ops per asset happened in the past (e.g. https://github.com/dagster-io/dagster/issues/12070), but it was agreed not to implement this.
Adam Bloom
07/12/2023, 12:48 PM