# ask-community
s
Trying to figure out the best way to add logging for code that doesn't have easy access to a context variable. I noticed some weird behavior when I tested get_dagster_logger() in a graph, though. When I was running dagit locally, debug/info-level messages failed to show up at all, while critical messages printed to the terminal as soon as dagit started (i.e., before I had selected any job to run from repo.py in the dagit UI). Running the graph via a job did not print the messages into the dagit logging panel, nor were they repeated for each run. When I do the same thing inside op code (or Python code that the ops call), the logging behaves exactly as I'd expect: all log levels show up in the dagit UI (provided the toggles are set) and print for each run of the op. Can someone help me figure out why the behavior differs between the two?
z
by "tested get_dagster_logger() in a graph" do you mean something like
from dagster import graph, get_dagster_logger

@graph
def do_graph():
    logger = get_dagster_logger()
    logger.info("print something")
    op()
?
s
yes
and then making that graph into a job via to_job
then running the job via the repo / dagit
z
yeah that's not going to work the way you think it will. a graph definition is only executed once, when your code is imported / compiled, in order for dagster to determine the dependency ordering for the graph. at that point the machinery for capturing logs to the dagster console probably doesn't exist yet. all your logging should happen within ops, as that's what actually gets executed when you run a job
s
Okay that makes sense (I kinda suspected that the graph code was run once at start to transform the ops in a way that dagster could better manage / understand)
Appreciate the info and help 🙂
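The "run once at start to transform the ops" intuition can be illustrated with a toy stand-in for @graph — purely illustrative, not Dagster's actual implementation:

```python
import logging

def toy_graph(fn):
    """Toy stand-in for @graph: invoke the body ONCE, at decoration
    (i.e., import) time, just to record which ops it wires together."""
    recorded = []

    def record_op(name):
        recorded.append(name)

    # The body executes here, when the module is imported -- so any
    # logging inside it happens now, not when the job later runs.
    fn(record_op)
    return recorded

@toy_graph
def my_graph(add_op):
    logging.getLogger(__name__).info("runs once, at import time")
    add_op("extract")
    add_op("transform")

print(my_graph)  # ['extract', 'transform']
```

The decorated name ends up bound to the recorded dependency structure, not to runnable code, which is why log calls in a graph body never reappear on later runs.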
z
no problem!