# ask-community
Hi Dagster team! I have a question about logging. I have some Dagster code that I am attempting to run in a Kubernetes pod using the Dagster CLI, like so: `dagster job execute …`. However, the only logs I see in the Kubernetes pod are:
```
2022-03-01 03:07:12 +0000 - dagster - DEBUG - run_microdump - 900c13db-7ef0-4f9a-aa86-ca258e2e2ea9 - 7 - RUN_START - Started execution of run for "run_microdump".
2022-03-01 03:07:12 +0000 - dagster - DEBUG - run_microdump - 900c13db-7ef0-4f9a-aa86-ca258e2e2ea9 - 7 - ENGINE_EVENT - Executing steps using multiprocess executor: parent process (pid: 7)
2022-03-01 03:07:12 +0000 - dagster - DEBUG - run_microdump - 900c13db-7ef0-4f9a-aa86-ca258e2e2ea9 - 7 - get_full_copy_tables - ENGINE_EVENT - Launching subprocess for get_full_copy_tables
2022-03-01 03:07:13 +0000 - dagster - DEBUG - run_microdump - 900c13db-7ef0-4f9a-aa86-ca258e2e2ea9 - 10 - get_full_copy_tables - ENGINE_EVENT - Starting initialization of resources [devdb_tables_fetcher, io_manager].
2022-03-01 03:07:13 +0000 - dagster - DEBUG - run_microdump - 900c13db-7ef0-4f9a-aa86-ca258e2e2ea9 - 10 - get_full_copy_tables - ENGINE_EVENT - Finished initialization of resources [devdb_tables_fetcher, io_manager].
2022-03-01 03:07:13 +0000 - dagster - DEBUG - run_microdump - 900c13db-7ef0-4f9a-aa86-ca258e2e2ea9 - 7 - ENGINE_EVENT - Multiprocess executor: parent process exiting after 1.16s (pid: 7)
2022-03-01 03:07:13 +0000 - dagster - DEBUG - run_microdump - 900c13db-7ef0-4f9a-aa86-ca258e2e2ea9 - 7 - RUN_SUCCESS - Finished execution of run for "run_microdump".
```
and I don't see logs from my Python code itself. I've tried logging with both `context.log.info` and `get_dagster_logger`, and neither seems to show up. I was wondering if there are any other configurations I should be checking in this case?
Hi Eunice! I believe what's going on here (although I'm not 100% sure) is that those logs are just the ones produced in the main process on the kube pod. When you run `dagster job execute` with the default multiprocess executor, a new process is spun up for each step, so logs from those steps will not appear in the host process's stdout. A similar issue was documented by a user here: https://dagster.slack.com/archives/C01U954MEER/p1636394317019700?thread_ts=1636384926.017800&cid=C01U954MEER
In terms of mitigation, it's possible to update your logging configuration (discussed in those threads), or to pull those logs directly from the Dagster event database, since that is the centralized place where logs end up regardless of which process emitted them. I'm not sure what your current dagster/dagit setup looks like at the moment, so it's not clear whether that would be easy or feasible for you.
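As one hedged example of the logging-configuration route: recent Dagster versions support a `python_logs` section in `dagster.yaml` that can attach extra handlers to Dagster-managed loggers, which can be used to echo step logs to stdout. The exact schema may vary by Dagster version, so treat this as a sketch rather than a drop-in config:

```yaml
# dagster.yaml (instance config) - sketch, verify against your Dagster version
python_logs:
  # Capture records at INFO and above from Dagster-managed loggers
  python_log_level: INFO
  # Handlers here follow Python's logging.config dictConfig format
  # and are applied to loggers Dagster manages in every process
  dagster_handler_config:
    handlers:
      console:
        class: logging.StreamHandler
        level: INFO
```

With a handler like this attached, step subprocesses should also write their log records to their own stdout, which Kubernetes then surfaces via `kubectl logs`.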