# dagster-feedback
s
Love the new Sensor testing -- just :chefkiss:. I've noticed an issue, though, when I create a sensor with `asset_selection`. The basic gist is: if at least one asset in the graph requires manual config, then the evaluated sensor job will never be launchable from the launchpad. However, it will run as expected when the sensor triggers it. Example in 🧵
```python
from dagster import asset, sensor, Definitions, AssetSelection, RunRequest

# one asset requires config, the other does not
@asset(config_schema={"foo": str})
def foo(context):
    return context.op_config["foo"]

@asset
def bar(context):
    return "bar"

# the sensor should yield a run of just bar, which requires no config
@sensor(asset_selection=AssetSelection.assets(bar))
def baz():
    yield RunRequest(run_key="foo")

defs = Definitions(
    assets=[foo, bar],
    sensors=[baz],
)
```
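A quick way to sanity-check the sensor itself in a test (a minimal sketch, using Dagster's direct sensor invocation; a sensor that doesn't declare a context parameter can be called with no arguments):

```python
# Directly invoking the sensor yields its RunRequests; listing them
# confirms it requests exactly one run, targeting only bar.
run_requests = list(baz())
assert len(run_requests) == 1
assert run_requests[0].run_key == "foo"  # the run_key from the sensor above
```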
If you run the `baz` sensor normally, it will yield a run of `bar` correctly. But if you evaluate it in the launchpad, the resulting run will not be launchable, because `foo` is missing config.
Not a big deal, but something to be aware of. Ideally I could actually launch the test run, not just check whether the config is being populated.
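For what it's worth, one way to exercise that run in a test (a minimal sketch, assuming the assets from the snippet above are importable; `materialize` is Dagster's in-process execution helper):

```python
from dagster import materialize

# Materializing only bar (the sensor's selection) needs no run config,
# matching what happens when the sensor actually fires.
result = materialize([bar])
assert result.success
```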
v
Maybe it’s related to the issue I raised in https://dagster.slack.com/archives/C01U5LFUZJS/p1677252353705589 ?
s
I think the issue here might be that the asset selection isn't making it to the launchpad. @chris - thoughts on this?
c
Hey y'all - I think this is similarly an issue of properly routing the selection through to the launchpad. Will investigate soon.