# integration-dbt
Hi all! We're currently running our `dbt` step as part of our pipeline, as a downstream step to a few `airbyte` syncs. Every once in a while we hit a situation where, say, one `airbyte` op fails but the others succeed. In that case the `dbt` op doesn't start at all, even though not all of the tables generated by `dbt` would be affected. Is there a simple way to mitigate this, or will we need to code something custom? We use `define_asset_job` with a custom schedule here. I can see that splitting the `dbt` op into multiple ops, one per asset, was discussed in the past (e.g. https://github.com/dagster-io/dagster/issues/12070), but it was agreed not to implement it.
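For context, our setup looks roughly like this (a minimal sketch; the connection ID, table names, project path, and cron string are placeholders, not our real config):

```python
from dagster import AssetSelection, Definitions, ScheduleDefinition, define_asset_job
from dagster_airbyte import build_airbyte_assets
from dagster_dbt import load_assets_from_dbt_project

# Tables landed by one Airbyte connection become assets
# (connection ID and table names are placeholders).
airbyte_assets = build_airbyte_assets(
    connection_id="<connection-id>",
    destination_tables=["orders", "customers"],
)

# dbt models become assets; they depend on the Airbyte tables as dbt sources.
dbt_assets = load_assets_from_dbt_project(project_dir="path/to/dbt_project")

# One job over everything: if any upstream Airbyte asset fails,
# the whole downstream dbt step is skipped for that run.
all_assets_job = define_asset_job("all_assets_job", selection=AssetSelection.all())

defs = Definitions(
    assets=[*airbyte_assets, *dbt_assets],
    jobs=[all_assets_job],
    schedules=[ScheduleDefinition(job=all_assets_job, cron_schedule="0 * * * *")],
)
```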
You can split dbt into multiple jobs/ops yourself if you need it.
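Roughly along these lines (a sketch, not tested against your project; the dbt select strings, job names, and project path are assumptions about your layout): load the dbt assets once, then carve them into per-source jobs with `build_dbt_asset_selection` from `dagster-dbt`, so a failed airbyte sync only blocks the models that are actually downstream of it.

```python
from dagster import define_asset_job
from dagster_dbt import build_dbt_asset_selection, load_assets_from_dbt_project

# Load all dbt models as assets once (path is a placeholder).
dbt_assets = load_assets_from_dbt_project(project_dir="path/to/dbt_project")

# dbt selection syntax: "source:raw.orders+" selects everything
# downstream of the raw.orders source. One job per upstream source,
# so each can be scheduled (or retried) independently.
orders_job = define_asset_job(
    "dbt_orders_job",
    selection=build_dbt_asset_selection(dbt_assets, dbt_select="source:raw.orders+"),
)
customers_job = define_asset_job(
    "dbt_customers_job",
    selection=build_dbt_asset_selection(dbt_assets, dbt_select="source:raw.customers+"),
)
```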