# dagster-plus
**Simo:**
Hey 👋 We're getting a `sqlalchemy.exc.OperationalError: (psycopg2.errors.QueryCanceled) canceling statement due to statement timeout` error from one of our sensors, which iterates over a GCS bucket and generates a run request for each file. We have a serverless cloud deployment. Is there a way to increase the statement timeout in Dagster Cloud? Traceback:
```
File "/dagster/dagster/_daemon/sensor.py", line 489, in _process_tick_generator
    yield from _evaluate_sensor(
  File "/dagster/dagster/_daemon/sensor.py", line 626, in _evaluate_sensor
    existing_runs_by_key = _fetch_existing_runs(
  File "/dagster/dagster/_daemon/sensor.py", line 740, in _fetch_existing_runs
    runs_with_run_keys = instance.get_runs(filters=RunsFilter(tags={RUN_KEY_TAG: run_keys}))
  File "/dagster/dagster/_utils/__init__.py", line 697, in inner
    return func(*args, **kwargs)
  File "/dagster/dagster/_core/instance/__init__.py", line 1500, in get_runs
    return self._run_storage.get_runs(filters, cursor, limit, bucket_by)
  File "/dagster-cloud-backend/dagster_cloud_backend/storage/host_cloud/run_storage/storage.py", line 449, in get_runs
    rows = self._readall(query)
  File "/dagster-cloud-backend/dagster_cloud_backend/storage/host_cloud/cloud_storage/mixin.py", line 31, in _readall
    return conn.execute(query).fetchall()
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1380, in execute
    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
    return connection._execute_clauseelement(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement
    ret = self._execute_context(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 2124, in _handle_dbapi_exception
    util.raise_(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
    raise exception
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
    cursor.execute(statement, parameters)
  File "/usr/local/lib/python3.8/site-packages/ddtrace/contrib/dbapi/__init__.py", line 138, in execute
    return self._trace_method(
  File "/usr/local/lib/python3.8/site-packages/ddtrace/contrib/psycopg/patch.py", line 70, in _trace_method
    return super(Psycopg2TracedCursor, self)._trace_method(
  File "/usr/local/lib/python3.8/site-packages/ddtrace/contrib/dbapi/__init__.py", line 107, in _trace_method
    return method(*args, **kwargs)
```
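For context, here is a minimal sketch of the kind of sensor described above: one `RunRequest` per object in a GCS bucket, with no cursor and no per-tick cap. The job name, bucket name, and config shape are placeholders, not the poster's actual code. Because every tick yields a run key for every file, the daemon's run-key deduplication query against run storage grows with the total file count, which is the query that eventually hits the statement timeout.

```python
from dagster import RunRequest, job, op, sensor
from google.cloud import storage


@op(config_schema={"path": str})
def process_file(context):
    context.log.info(f"processing {context.op_config['path']}")


@job
def process_file_job():
    process_file()


@sensor(job=process_file_job)
def gcs_file_sensor(context):
    client = storage.Client()
    for blob in client.list_blobs("my-bucket"):
        # One RunRequest (and one run key) per object in the bucket; the
        # daemon's run-key lookup scales with the total number of files.
        yield RunRequest(
            run_key=blob.name,
            run_config={"ops": {"process_file": {"config": {"path": blob.name}}}},
        )
```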
**daniel:**
Hi Simo - is it possible to share how many RunRequests this sensor is currently returning? This may be a case where using a cursor and/or limiting the number of RunRequests processed in each tick would help: https://docs.dagster.io/concepts/partitions-schedules-sensors/sensors#sensor-optimizations-using-cursors
If the number of run requests yielded in each tick is growing unboundedly, increasing the timeout might help for a bit, but it would just be a band-aid. If you can share your sensor code, we can take a look and see if anything jumps out.
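A minimal sketch of the cursor plus per-tick-limit approach from the linked docs, reusing the placeholder `process_file_job` from the sketch above; the bucket name and the cap of 25 requests per tick are assumptions to illustrate the idea, not values from the thread.

```python
from dagster import RunRequest, sensor
from google.cloud import storage

MAX_RUN_REQUESTS_PER_TICK = 25  # assumed cap, tune for your workload


@sensor(job=process_file_job)
def gcs_file_sensor_with_cursor(context):
    client = storage.Client()
    last_seen = context.cursor  # None on the very first tick

    # Oldest first so the cursor can advance monotonically.
    blobs = sorted(client.list_blobs("my-bucket"), key=lambda b: b.time_created)

    emitted = 0
    for blob in blobs:
        created = blob.time_created.isoformat()
        if last_seen is not None and created <= last_seen:
            continue  # already requested in an earlier tick
        yield RunRequest(
            run_key=blob.name,
            run_config={"ops": {"process_file": {"config": {"path": blob.name}}}},
        )
        last_seen = created
        emitted += 1
        if emitted >= MAX_RUN_REQUESTS_PER_TICK:
            break  # pick up the remaining files on the next tick

    context.update_cursor(last_seen)
```

Capping the number of RunRequests per tick keeps the run-key deduplication query bounded, and the cursor ensures files already handled are not re-yielded on later ticks.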
**Simo:**
@daniel I forgot to get back to you. We refactored the job so that only one run request per sensor tick is generated, and that fixed the issue for us 🙂
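The thread doesn't include the refactored code; one way to end up with a single RunRequest per tick is to have one run process the whole batch of new files, keyed by the cursor value. `batch_job`, the bucket name, and the config shape below are assumptions, not Simo's actual implementation.

```python
from dagster import RunRequest, job, op, sensor
from google.cloud import storage


@op(config_schema={"paths": [str]})
def process_batch(context):
    for path in context.op_config["paths"]:
        context.log.info(f"processing {path}")


@job
def batch_job():
    process_batch()


@sensor(job=batch_job)
def gcs_batch_sensor(context):
    client = storage.Client()
    last_seen = context.cursor
    new_blobs = [
        b
        for b in client.list_blobs("my-bucket")
        if last_seen is None or b.time_created.isoformat() > last_seen
    ]
    if not new_blobs:
        return

    newest = max(b.time_created for b in new_blobs).isoformat()
    # A single RunRequest (and single run key) per tick keeps the daemon's
    # run-storage lookup small no matter how many files arrived.
    yield RunRequest(
        run_key=newest,
        run_config={
            "ops": {"process_batch": {"config": {"paths": [b.name for b in new_blobs]}}}
        },
    )
    context.update_cursor(newest)
```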