# ask-community
I'm getting the following error when running a sensor
2021-05-27 15:26:00 - SensorDaemon - ERROR - Sensor daemon caught an error for sensor ftps_sensor : grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
        status = StatusCode.DEADLINE_EXCEEDED
        details = "Deadline Exceeded"
        debug_error_string = "{"created":"@1622129159.300676954","description":"Error received from peer ipv4:","file":"src/core/lib/surface/call.cc","file_line":1066,"grpc_message":"Deadline Exceeded","grpc_status":4}"

Stack Trace:
  File "/usr/local/lib/python3.8/site-packages/dagster/daemon/sensor.py", line 224, in execute_sensor_iteration
    yield from _evaluate_sensor(
  File "/usr/local/lib/python3.8/site-packages/dagster/daemon/sensor.py", line 254, in _evaluate_sensor
    sensor_runtime_data = repo_location.get_external_sensor_execution_data(
  File "/usr/local/lib/python3.8/site-packages/dagster/core/host_representation/repository_location.py", line 699, in get_external_sensor_execution_data
    return sync_get_external_sensor_execution_data_grpc(
  File "/usr/local/lib/python3.8/site-packages/dagster/api/snapshot_sensor.py", line 40, in sync_get_external_sensor_execution_data_grpc
  File "/usr/local/lib/python3.8/site-packages/dagster/grpc/client.py", line 288, in external_sensor_execution
    chunks = list(
  File "/usr/local/lib/python3.8/site-packages/dagster/grpc/client.py", line 97, in _streaming_query
    yield from response_stream
  File "/usr/local/lib/python3.8/site-packages/grpc/_channel.py", line 426, in __next__
    return self._next()
  File "/usr/local/lib/python3.8/site-packages/grpc/_channel.py", line 826, in _next
    raise self
My sensor connects to an FTP server and iterates through the folders. Do you think the error could be that the scan takes too long to finish?
> too long to finish the scan
yep - I forget if we made the timeout adjustable yet @daniel
yeah, I think you're going to run into trouble right now if you have a sensor that takes more than 60 seconds to execute (and even if we increased the timeout, eventually the daemon itself would time out). In the short term, is it an option to have it break its work up into chunks (possibly with a short interval so that it can start right back up again shortly after it finishes)? We let you set cursors on sensors, which should make it easier to break the work up into pieces: https://docs.dagster.io/concepts/partitions-schedules-sensors/sensors#sensor-optimizations-using-cursors
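The chunking idea above can be sketched without any framework code. In a real Dagster sensor you would read `context.cursor` and call `context.update_cursor(...)` to persist progress between ticks; here the cursor is simply passed in and returned so the logic stands alone. All names (`next_chunk`, `folders`, `chunk_size`) are illustrative, not part of the Dagster API.

```python
# Sketch: break a long FTP folder scan into per-tick chunks using a
# persisted cursor. Each sensor tick scans at most `chunk_size` folders,
# keeping the evaluation well under the gRPC deadline; the returned
# cursor records where to resume on the next tick.

def next_chunk(folders, cursor, chunk_size=10):
    """Return (chunk_to_scan, new_cursor).

    `folders` is the sorted folder listing from the FTP server.
    `cursor` is the last folder processed on the previous tick
    (None on the very first tick).
    """
    if cursor is None or cursor not in folders:
        start = 0
    else:
        # Resume just after the last folder we processed.
        start = folders.index(cursor) + 1
    chunk = folders[start:start + chunk_size]
    # If there was nothing left to scan, keep the old cursor.
    new_cursor = chunk[-1] if chunk else cursor
    return chunk, new_cursor
```

Inside a sensor body this would look roughly like `chunk, new_cursor = next_chunk(listing, context.cursor)` followed by `context.update_cursor(new_cursor)` after yielding run requests for the chunk.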
cc @prha as well who may have thoughts
thanks!! i'll give it a try!
Yeah, I think trying to break up the work is the best bet. We might be able to do a better job with the error message and point people to the sensor cursor documentation.