# integration-dbt
q
Hello, I've gotten this error many times ever since I started using
load_assets_from_dbt_cloud_job,
and it has affected my schedules. My deployments fail because of this:
```
dagster._core.definitions.events.Failure: Exceeded max number of retries.
  File "/usr/local/lib/python3.8/site-packages/dagster/_grpc/server.py", line 242, in __init__
    self._loaded_repositories: Optional[LoadedRepositories] = LoadedRepositories(
  File "/usr/local/lib/python3.8/site-packages/dagster/_grpc/server.py", line 120, in __init__
    repo_def = recon_repo.get_definition()
  File "/usr/local/lib/python3.8/site-packages/dagster/_core/definitions/reconstruct.py", line 117, in get_definition
    return repository_def_from_pointer(self.pointer, self.repository_load_data)
  File "/usr/local/lib/python3.8/site-packages/dagster/_core/definitions/reconstruct.py", line 787, in repository_def_from_pointer
    repo_def = repository_def_from_target_def(target, repository_load_data)
  File "/usr/local/lib/python3.8/site-packages/dagster/_core/definitions/reconstruct.py", line 776, in repository_def_from_target_def
    return target.compute_repository_definition()
  File "/usr/local/lib/python3.8/site-packages/dagster/_core/definitions/repository_definition.py", line 1548, in compute_repository_definition
    repository_load_data = self._compute_repository_load_data()
  File "/usr/local/lib/python3.8/site-packages/dagster/_core/definitions/repository_definition.py", line 1496, in _compute_repository_load_data
    cached_data_by_key={
  File "/usr/local/lib/python3.8/site-packages/dagster/_core/definitions/repository_definition.py", line 1497, in <dictcomp>
    defn.unique_id: defn.compute_cacheable_data()
  File "/usr/local/lib/python3.8/site-packages/dagster/_core/definitions/cacheable_assets.py", line 171, in compute_cacheable_data
    return self._wrapped.compute_cacheable_data()
  File "/usr/local/lib/python3.8/site-packages/dagster_dbt/cloud/asset_defs.py", line 80, in compute_cacheable_data
    dbt_nodes, dbt_dependencies = self._get_dbt_nodes_and_dependencies()
  File "/usr/local/lib/python3.8/site-packages/dagster_dbt/cloud/asset_defs.py", line 179, in _get_dbt_nodes_and_dependencies
    compile_run_dbt_output = self._dbt_cloud.run_job_and_poll(
  File "/usr/local/lib/python3.8/site-packages/dagster_dbt/cloud/resources.py", line 490, in run_job_and_poll
    final_run_details = self.poll_run(
  File "/usr/local/lib/python3.8/site-packages/dagster_dbt/cloud/resources.py", line 451, in poll_run
    self.cancel_run(run_id)
  File "/usr/local/lib/python3.8/site-packages/dagster_dbt/cloud/resources.py", line 296, in cancel_run
    return self.make_request("POST", f"{self._account_id}/runs/{run_id}/cancel/")
  File "/usr/local/lib/python3.8/site-packages/dagster_dbt/cloud/resources.py", line 129, in make_request
    raise Failure("Exceeded max number of retries.")
```
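For context, here's roughly how I'm loading the assets (a minimal sketch; the account ID, job ID, and env var name are placeholders, and the retry settings are the resource options that I believe control the max-retries behavior, worth double-checking against the installed dagster_dbt version):
```python
from dagster_dbt import dbt_cloud_resource, load_assets_from_dbt_cloud_job

# Placeholder account ID and env var name.
dbt_cloud = dbt_cloud_resource.configured(
    {
        "auth_token": {"env": "DBT_CLOUD_API_TOKEN"},
        "account_id": 11111,
        # Assumed config names: these should control how many times API calls
        # are retried before the "Exceeded max number of retries." Failure.
        "request_max_retries": 3,
        "request_retry_delay": 0.25,
    }
)

# At definition/load time this kicks off a compile run of the dbt Cloud job
# and polls it to build the asset graph, which is where the error surfaces.
dbt_cloud_assets = load_assets_from_dbt_cloud_job(
    dbt_cloud=dbt_cloud,
    job_id=33333,  # placeholder job ID
)
```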
I see there's a change related to this in the new release. Does this fix these errors?
r
we attempt to cancel in-progress dbt Cloud jobs in the Dagster framework if the process that executes the run is prematurely killed, or if a timeout is reached and the dbt Cloud job is still in progress. Did one of these things happen to you?
Are you seeing this error consistently?
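For anyone else hitting this: the traceback shows the Failure being raised from the cancel path after polling gives up, so the retry error can hide the original timeout. A simplified sketch of that call path (illustrative only, not the actual dagster_dbt source; the endpoints, response shape, and status codes here are assumptions):
```python
import time

import requests


def _request(method, url, headers, max_retries):
    # Retry the HTTP call a bounded number of times; when the limit is hit,
    # this is where "Exceeded max number of retries." comes from.
    for attempt in range(max_retries + 1):
        try:
            resp = requests.request(method, url, headers=headers, timeout=30)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == max_retries:
                raise RuntimeError("Exceeded max number of retries.")
            time.sleep(1)


def poll_run_with_cancel(api_base, headers, account_id, run_id,
                         poll_interval=10, poll_timeout=600, max_retries=3):
    """Poll a dbt Cloud run and cancel it on timeout (sketch of the
    poll_run -> cancel_run -> make_request call path in the traceback)."""
    deadline = time.monotonic() + poll_timeout
    while time.monotonic() < deadline:
        run = _request(
            "GET", f"{api_base}/{account_id}/runs/{run_id}/", headers, max_retries
        )
        # Assumed terminal status codes: 10=Success, 20=Error, 30=Cancelled.
        if run["data"]["status"] in (10, 20, 30):
            return run
        time.sleep(poll_interval)

    # Timeout reached: try to cancel the in-progress run. If this cancel
    # request itself keeps failing, the retry error masks the real timeout.
    _request(
        "POST", f"{api_base}/{account_id}/runs/{run_id}/cancel/", headers, max_retries
    )
    raise TimeoutError(f"dbt Cloud run {run_id} did not finish in {poll_timeout}s")
```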
q
@rex, I see this so many times. When it happens, I have to redeploy before my jobs will run, so if any runs are scheduled within that period and I don't catch the error earlier, I miss them. I feel like even jobs that don't materialize dbt Cloud assets compile the dbt assets via the API before they run.
r
Yeah, sorry about that, that's an unacceptable state for your deployment. Are you only loading one dbt Cloud job in your Dagster deployment? I'm trying to figure out whether you're running into problems with scaling dbt Cloud and Dagster:
• If you have multiple dbt Cloud jobs running at the same time, dbt Cloud has a concurrency limit. So if an existing dbt Cloud job is running while you're trying to deploy Dagster, your dbt Cloud compilation may end up in the queue due to the concurrency limit, which may cause the deployment to time out. (There's a rough sketch below for checking whether that's happening.)
• The release that you mentioned has some experimental features that do dbt Cloud compilation when your dbt project changes. We cache this compilation using a GitHub Action that triggers only when your dbt Cloud project changes (the reason being that the compilation should only change if your model code changes).
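If you want to rule the concurrency limit in or out, you can list the account's recent runs right before a deploy and see whether anything is still queued or running. A rough sketch against the dbt Cloud Administrative API (the status codes and response shape here are my assumptions; verify against the dbt Cloud API docs):
```python
import os

import requests

ACCOUNT_ID = 11111  # placeholder
API_TOKEN = os.environ["DBT_CLOUD_API_TOKEN"]

# Assumed dbt Cloud run status codes: 1=Queued, 2=Starting, 3=Running.
IN_PROGRESS = {1, 2, 3}

resp = requests.get(
    f"https://cloud.getdbt.com/api/v2/accounts/{ACCOUNT_ID}/runs/",
    headers={"Authorization": f"Token {API_TOKEN}", "Accept": "application/json"},
    params={"order_by": "-id", "limit": 50},
    timeout=30,
)
resp.raise_for_status()

# Anything still in progress here can queue your deploy-time compile run.
active = [run for run in resp.json()["data"] if run["status"] in IN_PROGRESS]
for run in active:
    print(f"run {run['id']} for job {run['job_definition_id']} is still in progress")
```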
If you want to try our experimental feature to cache the dbt Cloud compilation, check out https://docs.dagster.io/master/integrations/dbt-cloud#step-4-cache-the-dbt-cloud-job-compilation
This way, we won’t compile during load time