paul.q02/11/2021, 9:47 PM
alex02/11/2021, 11:18 PM
max02/11/2021, 11:22 PM
onto the underlying
max02/12/2021, 1:42 AM
paul.q03/22/2021, 10:10 PM
alex07/21/2021, 2:48 PM
display of these messages also useful for you?
paul.q07/24/2021, 12:52 AM
to clean up and 'normalise' what's sent to
. Up until now, that effort is entirely contained in our logger implementation, where it's pure Python and easy enough to debug. A further option is to write another package on top of Python logging, which would give us control of everything that's written to disk - essentially transplanting the logic from our Dagster logger implementation. But we then wouldn't get the benefit of seeing user messages (via context.log) in the dagit
UI, would we? We've also built a REST API that gets pipeline run stats together with messages about pipeline/solid failures (via GraphQL, using the logs). I guess these would continue to work because the event logs would be unaffected? Let us know whether it's worth waiting or whether we should switch approaches. Thanks, Paul
object to it along with
. With the
dict passed to it, I munge it into a string and add it to the message, before calling the `context`'s `log.log`
method at the end - so after that it's over to our custom JSON logger. Inside that, I can unmunge the extra dict out of the message, add its elements into the log record, and also clean up the message (to remove the munged bit). It all works fine, except that the message that appears in the dagit console includes the munged portion as well. In our JSON logger, the message is cleaned up as desired. What I don't understand is: isn't the same log record being passed to all the handlers in the custom Dagster logger? If so, we would expect to see the same message in the console log as we see in the JSON log records?
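(For anyone following along: a minimal plain-Python sketch of the munge/unmunge approach described above. The `MARKER` delimiter, the helper names, and the `JsonFormatter` class are all hypothetical stand-ins, not Paul's actual code.)

```python
import json
import logging

MARKER = "::extra::"  # hypothetical delimiter; any substring unlikely to appear in messages works

def munge(message: str, extra: dict) -> str:
    """Append the extra dict to the message as a JSON suffix."""
    return f"{message}{MARKER}{json.dumps(extra)}"

def unmunge(message: str):
    """Split the munged payload back out of the message."""
    clean, sep, payload = message.partition(MARKER)
    return (clean, json.loads(payload)) if sep else (message, {})

class JsonFormatter(logging.Formatter):
    """Cleans the message and folds the extras into the emitted JSON record."""
    def format(self, record: logging.LogRecord) -> str:
        clean, extra = unmunge(record.getMessage())
        return json.dumps({"message": clean, **extra})
```

A handler using `JsonFormatter` would write the cleaned message plus the extras, while any handler formatting `record.getMessage()` directly would still show the munged suffix - which matches the console behaviour Paul describes.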
alex07/26/2021, 2:34 PM
> But we then wouldn't get the benefit of seeing user messages (via context.log) in the dagit UI, would we?
They would no longer be in the structured event stream, but the raw stdout/stderr logs should be visible via dagit, assuming you have your
set up correctly for your deployment. It's not as nice as the structured event stream entries, but there should still be a way to see it.
> I guess these would continue to work because event logs would be unaffected?
Yep, this should be right.
> I munge it into a string and add it to the message, before calling the context's log.log method at the end
> What I don't understand is: isn't the same log record being passed to all the handlers in the custom Dagster logger? If so, we would expect to get the same in the console log as we see in the json log records?
The loggers should all receive what's passed to the `context`'s log method, which sounds like what is happening? I could be misunderstanding the details.
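(A quick plain-Python illustration of the point Alex is making, outside Dagster entirely: handlers attached to the same `logging.Logger` are all handed the identical `LogRecord` object, so any difference in output must come from how each handler formats it - or from one handler mutating the record before the next runs. The `Capture` class here is just a hypothetical test handler.)

```python
import logging

seen = []

class Capture(logging.Handler):
    """Test handler that records every LogRecord it receives."""
    def emit(self, record):
        seen.append(record)

logger = logging.getLogger("demo_shared_record")
logger.setLevel(logging.INFO)
logger.addHandler(Capture())
logger.addHandler(Capture())
logger.info("hello")

# Both handlers received the very same LogRecord object, so a mutation
# made inside the first handler's emit() is visible to the second.
assert seen[0] is seen[1]
```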