
George Pearse

08/17/2021, 8:27 PM
Hit this error
Operation name: JobMetadataQuery

Message: (psycopg2.errors.QueryCanceled) canceling statement due to statement timeout

[SQL: SELECT event_logs.id, event_logs.event 
FROM event_logs 
WHERE event_logs.run_id = %(run_id_1)s ORDER BY event_logs.id ASC
 LIMIT ALL OFFSET %(param_1)s]
[parameters: {'run_id_1': 'ec1ead9e-874a-4de0-b0a8-da5f0b544890', 'param_1': 0}]
(Background on this error at: <https://sqlalche.me/e/14/e3q8>)

Path: ["pipelineRunsOrError","results",0,"assets"]

Locations: [{"line":60,"column":3}]

Stack Trace:
  File "/usr/local/lib/python3.7/site-packages/graphql/execution/executor.py", line 452, in resolve_or_error
    return executor.execute(resolve_fn, source, info, **args)
  File "/usr/local/lib/python3.7/site-packages/graphql/execution/executors/sync.py", line 16, in execute
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/schema/pipelines/pipeline.py", line 270, in resolve_assets
    return get_assets_for_run_id(graphene_info, self.run_id)
  File "/usr/local/lib/python3.7/site-packages/dagster_graphql/implementation/fetch_assets.py", line 59, in get_assets_for_run_id
    records = graphene_info.context.instance.all_logs(run_id)
  File "/usr/local/lib/python3.7/site-packages/dagster/core/instance/__init__.py", line 1013, in all_logs
    return self._event_storage.get_logs_for_run(run_id, of_type=of_type)
  File "/usr/local/lib/python3.7/site-packages/dagster/core/storage/event_log/sql_event_log.py", line 234, in get_logs_for_run
    events_by_id = self.get_logs_for_run_by_log_id(run_id, cursor, of_type)
  File "/usr/local/lib/python3.7/site-packages/dagster/core/storage/event_log/sql_event_log.py", line 201, in get_logs_for_run_by_log_id
    results = conn.execute(query).fetchall()
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1263, in execute
    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 324, in _execute_on_connection
    self, multiparams, params, execution_options
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1462, in _execute_clauseelement
    cache_hit=cache_hit,
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1815, in _execute_context
    e, statement, parameters, cursor, context
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1996, in _handle_dbapi_exception
    sqlalchemy_exception, with_traceback=exc_info[2], from_=e
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
    raise exception
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1772, in _execute_context
    cursor, statement, parameters, context
  File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 717, in do_execute
    cursor.execute(statement, parameters)
But I found the screenshot below when I actually looked through the logs, and I think the error presented in the Dagit UI was just 'directly caused' by this one.

alex

08/17/2021, 8:44 PM
hm, that's not a particularly heavy query - not sure why it would be taking longer than 5 seconds. What Postgres DB are you using? Does it have a reasonable amount of resources?

George Pearse

08/24/2021, 9:23 AM
@alex good chance the pipeline code was competing for resources with the DB. I haven't isolated them from each other well enough.
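For reference, the "canceling statement due to statement timeout" error above comes from Postgres's `statement_timeout` setting, which alex's 5-second figure suggests is set low on this instance. A minimal sketch for inspecting and raising it, assuming the role Dagster connects as is named `dagster` (adjust to your deployment):

```sql
-- Show the timeout in effect for the current session
SHOW statement_timeout;

-- Raise the default for the role Dagster connects as
-- (role name "dagster" is an assumption about this deployment)
ALTER ROLE dagster SET statement_timeout = '30s';

-- Or change the server-wide default and reload the config
ALTER SYSTEM SET statement_timeout = '30s';
SELECT pg_reload_conf();
```

New sessions pick up the role- or system-level value; existing connections keep their old setting until they reconnect. Raising the timeout only masks the slow query, though; if the DB is starved because it shares resources with the pipeline workloads, isolating the two (as discussed in the thread) is the more durable fix.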