# ask-community
Hi there, loving the product and pushing it to the team at the moment. I have a question about capturing Python logs from a code-location perspective. It looks like the only way to capture logs from imported libraries is to update the `dagster.yaml` file. This is problematic because each code location has its own logs to capture, yet they can't communicate that through the code location. Even if we allow users to modify `dagster.yaml`, we would still need to restart the services for the changes to take effect, incurring downtime. Is there a better workflow for managing this?
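For reference, the instance-level setting I mean is the `python_logs` block in `dagster.yaml` (the logger names below are just examples):

```yaml
# dagster.yaml -- instance-level, so it applies to every code location
python_logs:
  managed_python_loggers:
    - omega-star   # example logger names from imported libraries
    - galactus
  python_log_level: INFO
```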
Hi Sam, thanks for the question. I'm not quite understanding this:

> This is problematic as each code location will have their own logs to capture, yet they can't communicate that through code location… Even if we allow users to modify dagster.yaml, we will still need to restart the services to validate the changes, incurring downtime.

• Are you seeking `managed_python_loggers` scoped to individual code locations?
• It seems like needing to modify/reload `dagster.yaml` is a separate issue from the code-location scoping?
Hello Sean, I am trying to figure out the workflow for separate teams to append new `managed_python_loggers`.

> Are you seeking managed_python_loggers scoped to individual code locations? It seems like needing to modify/reload dagster.yaml is a separate issue to the code location scoping?

Although scoping would be nice to have, what I am after is the ability to append new `managed_python_loggers` from a code location.

Scene: there are 4 teams:
• Infra
• Bronze code location owner
• Silver code location owner
• Gold code location owner

Infra is the only team with control over `dagster.yaml` and the ability to restart services so Dagster picks up changes.

Scenario: the Silver code location imports a new package and wants those logs to appear in Dagster.

Ideal workflow: the Silver code location updates its `Definitions`, or declares it somewhere else within the code location:
```python
defs = Definitions(
    assets=[*source_assets, *all_assets],
    schedules=[daily_refresh_schedule],
    resources=_get_inferred_resources(),
    managed_python_loggers=["omega-star", "galactus"],  # wished-for parameter
)
```
Current workflow, to my understanding:
1. The Silver team makes a PR to our Helm values.yaml
2. We merge it
3. We find a window without scheduled runs and do a `helm upgrade`
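For anyone reading later, step 1 amounts to a change along these lines in the chart's values.yaml (assuming the Dagster Helm chart's `pythonLogs` block; key names are from memory and may differ by chart version):

```yaml
# values.yaml for the Dagster Helm chart (pythonLogs block is assumed)
pythonLogs:
  pythonLogLevel: INFO
  managedPythonLoggers:
    - omega-star
    - galactus
```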
Thanks for the thorough explanation, Sam. I'm pretty sure we only offer a way to configure `managed_python_loggers` across the entire instance at this time. However, I can definitely see the value in being able to control this at the code-location level. If you open an issue, we can see what we can do: https://github.com/dagster-io/dagster/issues