Charlie Bini (04/13/2023, 7:59 PM): [message not captured]
Charlie Bini (04/13/2023, 7:59 PM):
dagster._core.errors.DagsterUserCodeUnreachableError: dagster._core.errors.DagsterUserCodeUnreachableError: The sensor tick timed out due to taking longer than 60 seconds to execute the sensor function. One way to avoid this error is to break up the sensor work into chunks, using cursors to let subsequent sensor calls pick up where the previous call left off.
Stack Trace:
File "/dagster-cloud/dagster_cloud/agent/dagster_cloud_agent.py", line 807, in _process_api_request
api_result = self._handle_api_request(
File "/dagster-cloud/dagster_cloud/agent/dagster_cloud_agent.py", line 665, in _handle_api_request
serialized_sensor_data_or_error = client.external_sensor_execution(
File "/dagster/dagster/_grpc/client.py", line 388, in external_sensor_execution
chunks = list(
File "/dagster/dagster/_grpc/client.py", line 184, in _streaming_query
self._raise_grpc_exception(
File "/dagster/dagster/_grpc/client.py", line 135, in _raise_grpc_exception
raise DagsterUserCodeUnreachableError(
The above exception was caused by the following exception:
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "{"created":"@1681415679.819259821","description":"Deadline Exceeded","file":"src/core/ext/filters/deadline/deadline_filter.cc","file_line":81,"grpc_status":4}"
>
Stack Trace:
File "/dagster/dagster/_grpc/client.py", line 180, in _streaming_query
yield from self._get_streaming_response(
File "/dagster/dagster/_grpc/client.py", line 169, in _get_streaming_response
yield from getattr(stub, method)(request, metadata=self._metadata, timeout=timeout)
File "/usr/local/lib/python3.10/site-packages/grpc/_channel.py", line 426, in __next__
return self._next()
File "/usr/local/lib/python3.10/site-packages/grpc/_channel.py", line 826, in _next
raise self
File "/dagster/dagster/_daemon/sensor.py", line 512, in _process_tick_generator
yield from _evaluate_sensor(
File "/dagster/dagster/_daemon/sensor.py", line 575, in _evaluate_sensor
sensor_runtime_data = code_location.get_external_sensor_execution_data(
File "/dagster-cloud-backend/dagster_cloud_backend/user_code/workspace.py", line 647, in get_external_sensor_execution_data
result = self.api_call(
File "/dagster-cloud-backend/dagster_cloud_backend/user_code/workspace.py", line 382, in api_call
return dagster_cloud_api_call(
File "/dagster-cloud-backend/dagster_cloud_backend/user_code/workspace.py", line 131, in dagster_cloud_api_call
for result in gen_dagster_cloud_api_call(
File "/dagster-cloud-backend/dagster_cloud_backend/user_code/workspace.py", line 280, in gen_dagster_cloud_api_call
raise DagsterUserCodeUnreachableError(error_infos[0].to_string())
Charlie Bini (04/13/2023, 8:01 PM): [message not captured]
daniel (04/14/2023, 2:40 AM): [message not captured]
daniel (04/14/2023, 2:41 AM): [message not captured]
daniel (04/14/2023, 2:42 AM): [message not captured]
daniel (04/14/2023, 2:43 AM): [message not captured]
Charlie Bini (04/14/2023, 3:04 PM): [message not captured]
daniel (04/14/2023, 3:07 PM): [message not captured]
daniel (04/14/2023, 3:08 PM): [message not captured]
Charlie Bini (04/14/2023, 3:09 PM): [message not captured]
Charlie Bini (04/14/2023, 3:09 PM): [message not captured]
Charlie Bini (04/14/2023, 3:11 PM): run_request_for_partition to a direct RunRequest but it still ran fine for about a day before problems started
daniel (04/14/2023, 3:12 PM): [message not captured]
Charlie Bini (04/14/2023, 3:12 PM): [message not captured]
Charlie Bini (04/14/2023, 3:12 PM): [message not captured]
Charlie Bini (04/14/2023, 3:15 PM): [message not captured]
daniel (04/14/2023, 3:15 PM): [message not captured]
daniel (04/14/2023, 3:15 PM): [message not captured]
daniel (04/14/2023, 3:16 PM): [message not captured]
Charlie Bini (04/14/2023, 3:16 PM): [message not captured]
daniel (04/14/2023, 3:16 PM): [message not captured]
Charlie Bini (04/14/2023, 3:17 PM): [message not captured]
daniel (04/14/2023, 3:17 PM): [message not captured]
daniel (04/14/2023, 3:18 PM): [message not captured]
Charlie Bini (04/14/2023, 3:19 PM): [message not captured]
Charlie Bini (04/14/2023, 3:19 PM): [message not captured]
Charlie Bini (04/14/2023, 3:23 PM): [message not captured]