# integration-airbyte
Is there a best practice to migrate from loading assets using ? I tried to match all source/dest/connection settings, but I'm still seeing

```
ValueError: Airbyte connections are not in sync with provided configuration
```

after dagster-airbyte applies that reconciler.
Hi Dusty, that's odd - are you able to see a diff running `dagster-airbyte check`?
Yea, I can run through the process again and document it, as it’s just our dev environment. I reverted it but can pick this back up after lunch.
One thing I did notice is that when I turned off streams in Airbyte, the asset definitions weren't reloading in Dagster, and when I went to materialize those assets in Dagster it threw:

```
op 'airbyte_sync_63c3b' did not fire outputs

dagster._core.errors.DagsterStepOutputNotFoundError: Core compute for op "airbyte_sync_63c3b" did not return an output for non-optional output "foo"
```
A no-op diff for the method alleviates it.
Starting the process now of loading assets from the
for a connection that already existed.
Looks like the above was caused by the assets not being reloaded unless I did a no-op deploy, but once I aligned the stream_config with the existing Airbyte assets, I get this in the deployment:
```
TypeError: reduce() of empty sequence with no initial value
```

```
File "/usr/local/lib/python3.7/site-packages/dagster/_grpc/", line 245, in __init__
  File "/usr/local/lib/python3.7/site-packages/dagster/_grpc/", line 120, in __init__
    repo_def = recon_repo.get_definition()
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/definitions/", line 117, in get_definition
    return repository_def_from_pointer(self.pointer, self.repository_load_data)
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/definitions/", line 787, in repository_def_from_pointer
    repo_def = repository_def_from_target_def(target, repository_load_data)
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/definitions/", line 776, in repository_def_from_target_def
    return target.compute_repository_definition()
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/definitions/", line 1549, in compute_repository_definition
    return self._get_repository_definition(repository_load_data)
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/definitions/", line 1529, in _get_repository_definition
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/definitions/", line 857, in from_list
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/definitions/", line 159, in resolve
    asset_selection=self.selection.resolve([*assets, *source_assets]),
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/definitions/", line 157, in resolve
    return self.resolve_inner(asset_graph)
  File "/usr/local/lib/python3.7/site-packages/dagster/_core/definitions/", line 230, in resolve_inner
    for asset_key in selection
```
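As an aside on that `TypeError`: Python's `functools.reduce` raises exactly this error whenever it is handed an empty sequence and no initial value, which is consistent with the asset job resolving to zero assets. A standalone reproduction:

```python
from functools import reduce

# Reducing an empty sequence with no initializer raises the same TypeError
# seen in the deployment: there is nothing to fold and no starting value.
try:
    reduce(lambda acc, item: acc | item, [])
except TypeError as err:
    print(err)  # -> reduce() of empty sequence with no initial value
```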
Hmm, it looks like the asset job is resolving no assets. What’s the asset job that’s triggering this?
The definition is

```python
airbyte_config_assets = load_assets_from_connections(
```
Is there a job in your repository that relies on these assets? I'm wondering if maybe the asset names or prefix are mismatched and it's causing a job to resolve as empty
Good thought - let me check
We do define an asset job that has a group name value that matches this line of the previous

```python
#     connection_to_group_fn=lambda group_name: "payments_backend_replication",
```
It would seem that it's defining an asset job based on that group_name value, eh?
Yup, I think you’ll want
on the
call too? I think what's happening is that the job isn't finding the new assets bc they're missing the group name
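Ben's hypothesis can be illustrated without Dagster installed: the job selects assets by group name, so if the freshly loaded assets fall back to a different default group (an assumption about dagster-airbyte's fallback behavior; the `connection_to_group_fn` lambda and group name are copied from the snippet earlier in the thread), a group-based selection resolves to zero assets. A plain-Python stand-in:

```python
# Stand-in for dagster-airbyte's grouping: each loaded asset carries a group name.
# `connection_to_group_fn` and "payments_backend_replication" come from the thread;
# the "default" fallback group below is an assumption for illustration.
connection_to_group_fn = lambda connection_name: "payments_backend_replication"

# Old load path: assets are grouped via the custom fn; the job selects that group.
old_assets = {"foo": connection_to_group_fn("payments_backend")}
selected = [k for k, g in old_assets.items() if g == "payments_backend_replication"]
assert selected == ["foo"]  # the job resolves one asset

# New load path without connection_to_group_fn: assets land in a default group,
# so the same group-based selection resolves empty -- and an empty selection is
# what surfaces as "reduce() of empty sequence with no initial value" upstream.
new_assets = {"foo": "default"}
selected = [k for k, g in new_assets.items() if g == "payments_backend_replication"]
assert selected == []
```

Once the new load call produces the same group name as before, both paths agree and the job's selection is non-empty again.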
Good catch, let me add that in and take it to dev
Looks like that did the trick, thank you Ben.
I think my original issue had to do with the asset reloading, which was alleviated by no-op'ing the asset definition
Yeah, that's very odd; it's something another user encountered recently. We're hoping to figure out why that data seems to be cached even after a reload
Yea, probably my colleague, Guy. We’re just causing all sorts of problems. I’ll leave y’all alone for the rest of the year. Happy holidays!
thank you for the detailed write-up! this should make it easier for us to try to replicate on our end 🙏 happy holidays!
Of course. I'm wondering if it's related to our infrastructure/K8s, but I'll try not to think about it until after Xmas!
FWIW, our instance of Dagster is deployed on K8s via Helm Chart
@ben would it make sense for me to open an issue for the above?
That would be great, I will have some time to try to replicate this today+tomorrow
sounds good - should be able to write it up tonight
Opened the issue here. Happy to help provide any other info needed to reproduce.