# dagster-feedback
r
sometimes the docs hint at workflows with examples that are simplified to the point where it's hard to see how to translate them to a real-world codebase, e.g. for asset observations you give this example (https://docs.dagster.io/concepts/assets/asset-observations):
```python
from dagster import AssetObservation, op


@op
def observation_op(context):
    df = read_df()
    context.log_event(
        AssetObservation(asset_key="observation_asset", metadata={"num_rows": len(df)})
    )
    return 5
```
where the asset is loaded inside the op, but this goes against most of your own recommendations about abstracting loading logic away into IO managers, which are then not known/instantiated until `Definitions` is instantiated.
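for reference, the kind of example I'd find easier to map onto a real codebase is one where loading stays with the IO manager and the observation is logged against an upstream input. a rough sketch of what I mean (the asset names `observation_source`/`observed_rows` are made up here, and it assumes a recent Dagster version with the default IO manager):

```python
import pandas as pd
from dagster import AssetExecutionContext, AssetObservation, Definitions, asset


@asset
def observation_source() -> pd.DataFrame:
    # stand-in upstream asset; its return value is persisted by the IO manager
    return pd.DataFrame({"value": [1, 2, 3]})


@asset
def observed_rows(context: AssetExecutionContext, observation_source: pd.DataFrame) -> int:
    # the IO manager (not this function body) loads observation_source
    context.log_event(
        AssetObservation(
            asset_key="observation_source",
            metadata={"num_rows": len(observation_source)},
        )
    )
    return len(observation_source)


defs = Definitions(assets=[observation_source, observed_rows])
```

that way the example still shows `log_event`/`AssetObservation`, but it doesn't bake a hard-coded loader into the op body.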
another example is `materialize_to_memory`, which starts with a `data_source` asset with hard-coded loading, side-stepping cases involving e.g. `SourceAsset` where you are reliant on an IO manager: https://docs.dagster.io/concepts/testing#testing-multiple-software-defined-assets-together
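to make the gap concrete, what I'd want that testing page to show is how to stand in for the IO manager behind a `SourceAsset` in a test. something roughly like this is what I'd expect — the `StubIOManager` class, the `source_io` key, and the asset names are invented for illustration, and I've used `materialize` with a stub IO manager rather than `materialize_to_memory` since I'm not sure how the latter interacts with a `SourceAsset`'s IO manager:

```python
import pandas as pd
from dagster import IOManager, SourceAsset, asset, io_manager, materialize


class StubIOManager(IOManager):
    """Hypothetical in-memory stand-in for the production IO manager."""

    def handle_output(self, context, obj):
        pass  # nothing to persist in the test

    def load_input(self, context):
        # pretend this is what the real IO manager would load for data_source
        return pd.DataFrame({"value": [1, 2, 3]})


@io_manager
def stub_io_manager():
    return StubIOManager()


# source asset whose loading is delegated to an IO manager, not hard-coded
data_source = SourceAsset(key="data_source", io_manager_key="source_io")


@asset
def row_count(data_source: pd.DataFrame) -> int:
    return len(data_source)


def test_row_count():
    result = materialize(
        [data_source, row_count],
        resources={"source_io": stub_io_manager},
    )
    assert result.success
    assert result.output_for_node("row_count") == 3
```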
t
Thanks for sharing this feedback! I'm taking away two things from this:
1. Our docs' examples are too simple and difficult to apply to real life.
   a. Resolution: I'll review our docs, highlight, and triage these guidances to see which ones our team should redo.
2. Our docs often conflict with our recommendations and patterns. Sometimes this is an artifact of us making new features/best practices or refining old ones and not updating the docs thoroughly.
   a. Resolution: Let me take this example and run it back to the team. We're strongly opinionated on what we believe are patterns and anti-patterns, so it's important we keep these stances consistent.
Thank you again for bringing this up! If you see any other issues like these, please continue to voice them because your feedback is valuable.
r
Thanks for reacting Tim 🙂 Ideas for resolution sound great, I'll keep an eye out for other examples.