# integration-dbt
q
Hi, is `run_job_and_poll` the best approach to trigger dbt Cloud jobs from Dagster? I am using this, but I notice that sometimes the status polls fail and the multiprocess executor gets this interruption error:
```
dagster._core.errors.DagsterExecutionInterruptedError
  File "/usr/local/lib/python3.11/site-packages/dagster/_core/execution/plan/execute_plan.py", line 273, in dagster_event_sequence_for_step
    for step_event in check.generator(step_events):
  File "/usr/local/lib/python3.11/site-packages/dagster/_core/execution/plan/execute_step.py", line 369, in core_dagster_event_sequence_for_step
    for user_event in check.generator(
  File "/usr/local/lib/python3.11/site-packages/dagster/_core/execution/plan/execute_step.py", line 90, in _step_output_error_checked_user_event_sequence
    for user_event in user_event_sequence:
  File "/usr/local/lib/python3.11/site-packages/dagster/_core/execution/plan/compute.py", line 192, in execute_core_compute
    for step_output in _yield_compute_results(step_context, inputs, compute_fn):
  File "/usr/local/lib/python3.11/site-packages/dagster/_core/execution/plan/compute.py", line 161, in _yield_compute_results
    for event in iterate_with_context(
  File "/usr/local/lib/python3.11/site-packages/dagster/_utils/__init__.py", line 445, in iterate_with_context
    next_output = next(iterator)
                  ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/dagster/_core/execution/plan/compute_generator.py", line 124, in _coerce_op_compute_fn_to_iterator
    result = invoke_compute_fn(
             ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/dagster/_core/execution/plan/compute_generator.py", line 118, in invoke_compute_fn
    return fn(context, **args_to_pass) if context_arg_provided else fn(**args_to_pass)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/dagster/app/analytics/assets/dbt_cloud_build_prod_data.py", line 68, in run_dbt_tests
    dbt_output = dbt.get_dbt_client().run_job_and_poll(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/dagster_dbt/cloud/resources.py", line 526, in run_job_and_poll
    final_run_details = self.poll_run(
                        ^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/dagster_dbt/cloud/resources.py", line 481, in poll_run
    time.sleep(poll_interval)
  File "/usr/local/lib/python3.11/site-packages/dagster/_utils/interrupts.py", line 82, in _new_signal_handler
    raise error_cls()
```
r
Yes, this is the best approach. It’s just calling the dbt Cloud API in the background. This seems more like an error due to insufficient resource allocation to the process that’s executing the Dagster job.
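For context on where the error surfaces: per the traceback, `run_job_and_poll` triggers the run and then sleeps between status checks, so an interrupt signal delivered to the process lands inside `time.sleep()`. A simplified, stdlib-only sketch of that trigger-then-poll pattern (the client class and method names here are hypothetical stand-ins, not the real `dagster_dbt` API, which makes HTTP calls to dbt Cloud):

```python
import time

class FakeDbtCloudClient:
    """Hypothetical stub standing in for the dbt Cloud API client."""

    def __init__(self, statuses):
        # Canned sequence of statuses the "API" will report, in order.
        self._statuses = iter(statuses)

    def trigger_job(self, job_id):
        # Real client: POST to the dbt Cloud API to start a run.
        return {"id": 12345}

    def get_run(self, run_id):
        # Real client: GET the run's current status from the API.
        return {"status": next(self._statuses)}

def run_job_and_poll(client, job_id, poll_interval=10.0):
    """Trigger a run, then poll until it reaches a terminal state.

    The sleep between polls is where an interrupt signal (e.g. the one
    behind DagsterExecutionInterruptedError) would surface.
    """
    run = client.trigger_job(job_id)
    while True:
        details = client.get_run(run["id"])
        if details["status"] in ("Success", "Error", "Cancelled"):
            return details
        time.sleep(poll_interval)

client = FakeDbtCloudClient(["Running", "Running", "Success"])
result = run_job_and_poll(client, job_id=42, poll_interval=0.01)
print(result["status"])  # Success
```

The step itself does almost no work between sleeps, which is why the error usually points at the process being signaled (OOM kill, pod eviction, termination) rather than at the polling logic.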
q
Interesting. I just increased the resource allocation this week. This is what I have on the job
```python
tags={
    "dagster-k8s/config": {
        "container_config": {
            "resources": {
                "limits": {"memory": "7Gi"},
                "requests": {"cpu": "1000m", "memory": "5Gi"},
            }
        }
    }
}
```
Shouldn't this be enough to run a dbt cloud job that finishes in about 10 mins?
r
Yeah, that’s definitely more than enough if you’re just polling dbt Cloud in your job. Are you doing any other processing in the job?
q
Nope. And usually that’s the only job running, because other processes wait on its successful completion as a trigger.
@rex Could the insufficient resource allocation be from the agent then? Because that only has 2 gigs I think
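If the agent is the process being starved, one option is to raise its requests/limits in the deployment's Helm values. A hedged sketch, assuming a Helm-deployed Dagster agent whose values expose a standard Kubernetes `resources` block (the top-level key name varies by chart and version, so check your chart's `values.yaml`):

```yaml
# Hypothetical values.yaml fragment -- verify the exact key path in your chart.
dagsterCloudAgent:
  resources:
    requests:
      cpu: "500m"
      memory: "2Gi"
    limits:
      memory: "4Gi"
```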