Marc Keeling
04/20/2022, 10:25 PM
Alec Ryan
04/21/2022, 12:50 AM
Stefan Adelbert
04/21/2022, 1:45 AM
RUN_SUCCESS, but these appear to be logged at the DEBUG log level. I could lower the log level of my custom logger to DEBUG, but I don't want to capture DEBUG logging in general.
Is there a way to elevate the log level of Dagster events from DEBUG to INFO?
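One possible workaround (a sketch using only the standard library; the logger name and the dagster_meta record attribute are assumptions, not a documented Dagster API): set the custom logger to DEBUG and attach a filter that re-levels Dagster's records to INFO while dropping other DEBUG records.

import logging


class ElevateDagsterEvents(logging.Filter):
    """Pass INFO+ records, bump Dagster event records from DEBUG to INFO,
    and drop all other DEBUG records."""

    def filter(self, record: logging.LogRecord) -> bool:
        if record.levelno >= logging.INFO:
            return True
        # "dagster_meta" is an assumption about how the framework tags records
        # via `extra=`; adjust to whatever attribute you actually see.
        if getattr(record, "dagster_meta", None) is not None:
            record.levelno = logging.INFO
            record.levelname = "INFO"
            return True
        return False


logger = logging.getLogger("my_custom_logger")  # placeholder logger name
logger.setLevel(logging.DEBUG)
logger.addFilter(ElevateDagsterEvents())

Huib Keemink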
04/21/2022, 7:41 AM
phương đinh
04/21/2022, 7:47 AM
Aman Saleem
04/21/2022, 8:12 AM
In Progress state at a time, which is taking a lot of time to process runs. Here is my configuration setting.
Any suggestion for making it faster would be helpful.
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    max_concurrent_runs: 50
    tag_concurrency_limits:
      - key: GET_MERCHANT_LISTINGS_ALL_DATA
        limit: 20
      - key: GET_MERCHANT_LISTINGS_ALL_DATA
        limit: 2
        value:
          applyLimitPerUniqueValue: true
      - key: GET_FBA_FULFILLMENT_REMOVAL_ORDER_DETAIL_DATA
        limit: 20
      - key: GET_FBA_FULFILLMENT_REMOVAL_ORDER_DETAIL_DATA
        limit: 2
        value:
          applyLimitPerUniqueValue: true
      - key: GET_FBA_FULFILLMENT_REMOVAL_SHIPMENT_DETAIL_DATA
        limit: 20
      - key: GET_FBA_FULFILLMENT_REMOVAL_SHIPMENT_DETAIL_DATA
        limit: 2
        value:
          applyLimitPerUniqueValue: true
      - key: GET_FBA_FULFILLMENT_INVENTORY_HEALTH_DATA
        limit: 20
      - key: GET_FBA_FULFILLMENT_INVENTORY_HEALTH_DATA
        limit: 2
        value:
          applyLimitPerUniqueValue: true
      - key: GET_FBA_MYI_UNSUPPRESSED_INVENTORY_DATA
        limit: 20
      - key: GET_FBA_MYI_UNSUPPRESSED_INVENTORY_DATA
        limit: 2
        value:
          applyLimitPerUniqueValue: true
      - key: GET_RESERVED_INVENTORY_DATA
        limit: 20
      - key: GET_RESERVED_INVENTORY_DATA
        limit: 2
        value:
          applyLimitPerUniqueValue: true
      - key: list_inventory_supply
        limit: 20
      - key: list_inventory_supply
        limit: 2
        value:
          applyLimitPerUniqueValue: true
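For reference, a rough sketch of how these limits get matched (the job and tag value below are hypothetical): the keys in tag_concurrency_limits are compared against run tags, so the limit of 20 applies across all runs carrying a key, while the entry with applyLimitPerUniqueValue: true caps concurrent runs at 2 per distinct tag value.

from dagster import job, op


@op
def pull_report():
    ...


# Hypothetical job: runs launched from it carry the tag below, so they count
# against both the overall limit of 20 for this key and the limit of 2 per
# unique tag value.
@job(tags={"GET_MERCHANT_LISTINGS_ALL_DATA": "merchant_123"})
def merchant_listings_job():
    pull_report()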
Sara
04/21/2022, 12:06 PM
Mark Fickett
04/21/2022, 3:00 PM
battery_data_job_local - c49da2c4-6a0f-41b2-a477-f9baa34afc1f - 1741477 - data_pipe_graph_sasquatch_anode.raw_data_graph._publish_raw_data_trace_context - STEP_FAILURE - Execution of step "data_pipe_graph_sasquatch_anode.raw_data_graph._publish_raw_data_trace_context" failed.
dagster.core.errors.DagsterExecutionLoadInputError: Error occurred while loading input "pipe" of step "data_pipe_graph_sasquatch_anode.raw_data_graph._publish_raw_data_trace_context"::
dagster.check.CheckError: Invariant failed.
Stack Trace:
File "/home/mfickett/Documents/mfickett/dev/data-pipeline/.direnv/python-3.9.5/lib/python3.9/site-packages/dagster/core/execution/plan/utils.py", line 47, in solid_execution_error_boundary
yield
File "/home/mfickett/Documents/mfickett/dev/data-pipeline/.direnv/python-3.9.5/lib/python3.9/site-packages/dagster/core/execution/plan/inputs.py", line 607, in _load_input_with_input_manager
value = input_manager.load_input(context)
File "/home/mfickett/Documents/mfickett/dev/data-pipeline/.direnv/python-3.9.5/lib/python3.9/site-packages/dagster/core/storage/fs_io_manager.py", line 152, in load_input
context.add_input_metadata({"path": MetadataValue.path(os.path.abspath(filepath))})
File "/home/mfickett/Documents/mfickett/dev/data-pipeline/.direnv/python-3.9.5/lib/python3.9/site-packages/dagster/core/execution/context/input.py", line 325, in add_input_metadata
if self.asset_key:
File "/home/mfickett/Documents/mfickett/dev/data-pipeline/.direnv/python-3.9.5/lib/python3.9/site-packages/dagster/core/execution/context/input.py", line 216, in asset_key
check.invariant(len(matching_input_defs) == 1)
File "/home/mfickett/Documents/mfickett/dev/data-pipeline/.direnv/python-3.9.5/lib/python3.9/site-packages/dagster/check/__init__.py", line 1167, in invariant
raise CheckError("Invariant failed.")
The op in question is:
@op(
    ins={"start": In(Nothing)},
)
def _publish_raw_data_trace_context(context):
    publish_current_trace_context(context)
And the pipe parameter it mentions is an enum value (produced as a constant from a different op and passed through a @graph).
I'm not really sure where to look next; any suggestions?
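For reference, a rough sketch (an assumption about the wiring, not a diagnosis) of what it looks like when the enum input is declared explicitly on the op, so that the "pipe" name the graph passes in matches exactly one input definition; the names below are hypothetical.

from dagster import In, Nothing, op


# Hypothetical variant for illustration only: "pipe" is declared as an explicit
# input so the graph wiring resolves it to exactly one input definition.
@op(ins={"start": In(Nothing), "pipe": In()})
def publish_raw_data_trace_context(context, pipe):
    context.log.info(f"publishing trace context for pipe={pipe}")

Mark Fickett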
04/21/2022, 3:37 PM
Multiprocess executor: child process for step data_pipe_graph_sasquatch_anode.sanitation_graph._publish_sanitation_trace_context unexpectedly exited with code 1
dagster.core.executor.child_process_executor.ChildProcessCrashException
instead of the actual stack trace from the child? I've seen it a few times today. I got one for a keyring library error (I think it was a secretstorage.exceptions.ItemNotFoundException, but I didn't grab it out of the console).
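One way to surface the real error (a sketch that assumes the child fails inside the op body rather than during interpreter startup; do_the_actual_work is a placeholder, not the real op body):

import traceback

from dagster import op


def do_the_actual_work():
    """Placeholder for the op's real body."""
    raise RuntimeError("simulated failure")


@op
def publish_sanitation_trace_context(context):
    try:
        do_the_actual_work()
    except BaseException:
        # Log the real traceback from inside the child process so it lands in
        # the step logs even when the parent only reports a
        # ChildProcessCrashException.
        context.log.error(traceback.format_exc())
        raise

Alec Ryan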
04/21/2022, 4:22 PM
Alec Ryan
04/21/2022, 4:22 PM
Charlie Bini
04/21/2022, 4:28 PM
Field in a config_schema? Sort of like Any, but restrict it to only one or two types.
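One pattern that can approximate this (a sketch, assuming Selector fits the use case; it is not a true union type) is to have the config author pick which of the allowed shapes they are providing:

from dagster import Field, Selector, op


# Sketch: the config author supplies exactly one of "as_int" or "as_str",
# which restricts the field to those two types without falling back to Any.
@op(config_schema={"value": Field(Selector({"as_int": int, "as_str": str}))})
def echo_value(context):
    context.log.info(str(context.op_config["value"]))

ScalarUnion may also be worth a look if one of the types is a scalar, though the exact API is worth double-checking.
Arun Kumar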
04/21/2022, 7:05 PM
SELECT job_ticks.id, job_ticks.tick_body
FROM job_ticks
WHERE job_ticks.job_origin_id = $1
ORDER BY job_ticks.id DESC LIMIT $2
Liezl Puzon
04/21/2022, 7:06 PM
cursor=cc5f4949-61b3-4269-a07e-fb49aac68e0e
Nicholas Buck
04/21/2022, 8:36 PM
Brooke Talcott
04/21/2022, 9:03 PM
jasono
04/21/2022, 9:21 PM
Exception while cleaning up compute log capture. Exception: Timed out waiting for tail process to start.
Austin Bailey
04/21/2022, 10:32 PM
Liezl Puzon
04/21/2022, 10:35 PM
Starting
with the GraphQL API but I’m getting this error. How do I force termination?
"message": "Run 54c52064-4bb7-49ef-a924-42375ddfd201 could not be terminated due to having status STARTING."
Bryce Baker
04/22/2022, 1:51 AM
dagit -f hello_cereal.py
from within the \jobs directory.
Error loading repository location hello_cereal.py:FileNotFoundError: [WinError 2] The system cannot find the file specified: 'hello_cereal.py'
Bryan Chavez
04/22/2022, 3:17 AM
Son Giang
04/22/2022, 4:07 AM
Sanidhya Singh
04/22/2022, 6:22 AM
geoHeil
04/22/2022, 11:05 AM
Mark Fickett
04/22/2022, 1:46 PM
dictConfig in an application I'm porting to Dagster. They make use of a logging filter as well as file handlers. Is there a way to make my logging.dictConfig call get along with the Dagster logging setup? I can use the python_loggers configuration in dagster.yaml to capture their output into Dagit, but that only works if I remove the dictConfig call. I see that I could add handlers and formatters via dagster_handler_config as described in the docs, but it's a little painful to port my current logging config dict setup to dagster.yaml, and I would also lose the filters.
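A minimal sketch of one thing worth checking (an assumption, not a confirmed fix): dictConfig disables existing loggers by default, which can wipe out handlers Dagster has already attached, so keeping them and layering the filter and file handler on top looks roughly like this (the module path and logger name are placeholders):

import logging.config

LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,  # keep loggers Dagster already configured
    "filters": {
        "my_filter": {"()": "my_app.logging_utils.MyFilter"},  # hypothetical path
    },
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "filename": "app.log",
            "filters": ["my_filter"],
        },
    },
    "loggers": {
        "my_app": {"handlers": ["file"], "level": "INFO"},  # placeholder logger name
    },
}

logging.config.dictConfig(LOGGING_CONFIG)

Aaron Bailey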
04/22/2022, 3:35 PM
geoHeil
04/22/2022, 4:03 PM
DockerRunLauncher.
For both dagit and dagster-daemon I have enabled docker-in-docker by mounting /var/run/docker.sock:/var/run/docker.sock (https://github.com/dagster-io/dagster/blob/master/examples/deploy_docker/docker-compose.yml#L61).
But I only get:
DockerException: Error while fetching server API version: ('Connection aborted.', PermissionError(13, 'Permission denied'))
File "/opt/conda/lib/python3.9/site-packages/dagster/core/instance/__init__.py", line 1698, in launch_run
self._run_launcher.launch_run(LaunchRunContext(pipeline_run=run, workspace=workspace))
File "/opt/conda/lib/python3.9/site-packages/dagster_docker/docker_run_launcher.py", line 152, in launch_run
self._launch_container_with_command(run, docker_image, command)
File "/opt/conda/lib/python3.9/site-packages/dagster_docker/docker_run_launcher.py", line 97, in _launch_container_with_command
client = self._get_client(container_context)
File "/opt/conda/lib/python3.9/site-packages/dagster_docker/docker_run_launcher.py", line 72, in _get_client
client = docker.client.from_env()
File "/opt/conda/lib/python3.9/site-packages/docker/client.py", line 96, in from_env
return cls(
File "/opt/conda/lib/python3.9/site-packages/docker/client.py", line 45, in __init__
self.api = APIClient(*args, **kwargs)
File "/opt/conda/lib/python3.9/site-packages/docker/api/client.py", line 197, in __init__
self._version = self._retrieve_server_version()
File "/opt/conda/lib/python3.9/site-packages/docker/api/client.py", line 221, in _retrieve_server_version
raise DockerException(
when executing:
docker compose --profile dagster up --build
I am running Docker for Mac. How can I get Dagster to work nicely in this setup?
Gabriel Montañola
04/22/2022, 4:16 PM
0.14.10 was set only on Helm deployment.
---
Hi there folks.
I'm trying to use this feature from 0.14.10, but I'm not sure I'm doing it right.
I added this to my deployments on the Helm chart:
includeConfigInLaunchedRuns:
  enabled: true
But jobs are still created without environment variables derived from env and envSecrets. Also, container_context from created jobs is null.
Do I need to change other settings so this can work as intended?
Liezl Puzon
04/22/2022, 5:55 PM
curl?
Arun Kumar
04/22/2022, 9:59 PM
dagster.core.errors.DagsterInstanceMigrationRequired: Instance is out of date and must be migrated (Postgres event log storage requires migration). Database is at revision 9c5f00e80ef2, head is f4b6a4885876. Please run `dagster instance migrate`.
Original exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1820, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.LockNotAvailable: canceling statement due to lock timeout
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/dagster/core/storage/sql.py", line 62, in handle_schema_errors
yield
File "/usr/local/lib/python3.7/site-packages/dagster_postgres/utils.py", line 166, in create_pg_connection
yield conn
File "/usr/local/lib/python3.7/site-packages/dagster_postgres/event_log/event_log.py", line 153, in store_event
(res[0] + "_" + str(res[1]),),
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1296, in execute
future=False,
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1602, in _exec_driver_sql
distilled_parameters,
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1863, in _execute_context
e, statement, parameters, cursor, context
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2044, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 207, in raise_
raise exception
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1820, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 732, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (psycopg2.errors.LockNotAvailable) canceling statement due to lock timeout
[SQL: NOTIFY run_events, %s; ]
[parameters: ('da8a1351-3190-4f25-8b26-825ac7ed42c5_14158049',)]
(Background on this error at: <https://sqlalche.me/e/14/e3q8>)
File "/usr/local/lib/python3.7/site-packages/dagster/core/execution/api.py", line 748, in pipeline_execution_iterator
for event in pipeline_context.executor.execute(pipeline_context, execution_plan):
File "/usr/local/lib/python3.7/site-packages/dagster/core/executor/multiprocess.py", line 244, in execute
event_specific_data=EngineEventData.multiprocess(os.getpid()),
File "/usr/local/lib/python3.7/site-packages/dagster/core/events/__init__.py", line 897, in engine_event
step_handle=step_handle,
File "/usr/local/lib/python3.7/site-packages/dagster/core/events/__init__.py", line 312, in from_pipeline
log_pipeline_event(pipeline_context, event)
File "/usr/local/lib/python3.7/site-packages/dagster/core/events/__init__.py", line 221, in log_pipeline_event
dagster_event=event,
File "/usr/local/lib/python3.7/site-packages/dagster/core/log_manager.py", line 336, in log_dagster_event
self.log(level=level, msg=msg, extra={DAGSTER_META_KEY: dagster_event})
File "/usr/local/lib/python3.7/site-packages/dagster/core/log_manager.py", line 351, in log
self._log(level, msg, args, **kwargs)
File "/usr/local/lib/python3.7/logging/__init__.py", line 1514, in _log
self.handle(record)
File "/usr/local/lib/python3.7/logging/__init__.py", line 1524, in handle
self.callHandlers(record)
File "/usr/local/lib/python3.7/logging/__init__.py", line 1586, in callHandlers
hdlr.handle(record)
File "/usr/local/lib/python3.7/logging/__init__.py", line 894, in handle
self.emit(record)
File "/usr/local/lib/python3.7/site-packages/dagster/core/log_manager.py", line 243, in emit
handler.handle(dagster_record)
File "/usr/local/lib/python3.7/logging/__init__.py", line 894, in handle
self.emit(record)
File "/usr/local/lib/python3.7/site-packages/dagster/core/instance/__init__.py", line 135, in emit
self._instance.handle_new_event(event)
File "/usr/local/lib/python3.7/site-packages/dagster/core/instance/__init__.py", line 1174, in handle_new_event
self._event_storage.store_event(event)
File "/usr/local/lib/python3.7/site-packages/dagster_postgres/event_log/event_log.py", line 153, in store_event
(res[0] + "_" + str(res[1]),),
File "/usr/local/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.7/site-packages/dagster_postgres/utils.py", line 166, in create_pg_connection
yield conn
File "/usr/local/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/local/lib/python3.7/site-packages/dagster/core/storage/sql.py", line 83, in handle_schema_errors
) from None
daniel
04/22/2022, 10:05 PM
dagster instance migrate call may not have actually finished? Not sure if that's the root cause of the bug, but "Database is at revision 9c5f00e80ef2, head is f4b6a4885876" implies to me that it didn't actually get all the way to the end of the schema migration.
Arun Kumar
04/22/2022, 10:34 PM
dagster instance migrate job again, which eventually succeeded. Not sure how that error still appears in the job. Is it possible that the DB went into an inconsistent state due to the first error and the second migration did not fix it?
prha
04/22/2022, 10:41 PM
select * from alembic_version;
Arun Kumar
04/22/2022, 10:44 PM
prha
04/22/2022, 10:45 PM
Arun Kumar
04/22/2022, 10:45 PM
version_num
--------------
9c5f00e80ef2
(1 row)
prha
04/22/2022, 10:47 PM
"Database is at revision 9c5f00e80ef2, head is f4b6a4885876" reflects that your DB is in the most recent known migration state as of 0.14.5, but that the code raising the exception believes that the last known migration is one from 0.12.11.
Arun Kumar
04/22/2022, 10:54 PM
prha
04/22/2022, 10:57 PM
head, which is f4b6a4885876. In our sequence of migrations, f4b6a4885876 is an earlier, older migration than the revision that your DB is currently marked at, which is 9c5f00e80ef2. This tells me that you did in fact successfully run the schema migration.
f4b6a4885876 tells me that the dagster version of the code raising the exception is older than the dagster version of the code that actually migrated the DB.
Hebo Yang
04/22/2022, 11:26 PM
daniel
04/22/2022, 11:29 PM
Hebo Yang
04/22/2022, 11:33 PM
prha
04/22/2022, 11:34 PM
Hebo Yang
04/22/2022, 11:34 PM
prha
04/22/2022, 11:37 PM
daniel
04/22/2022, 11:38 PM
Hebo Yang
04/22/2022, 11:42 PM
Arun Kumar
04/29/2022, 2:08 AM