# dagster-support

geoHeil

05/15/2022, 12:45 PM
AssetGroup.from_package_module
potentially in combination with schedule_from_partitions or build_job, is fairly convenient. How can I keep this convenience but a) instantiate one job for each individual asset (part of the group) and b) allow for potential deviations in the job launch configuration, i.e. some might be partitioned, some not, and some have specialties, i.e. resources (pyspark, ...) might be required (which are not needed by default)?

jamie

05/16/2022, 5:03 PM
Hey @geoHeil, this isn't currently supported, but I will make a GH issue to track it and get it in our backlog!
@Dagster Bot issue make a job per asset with AssetGroup.from_package_module

Dagster Bot

05/16/2022, 5:04 PM

sandy

05/16/2022, 8:52 PM
What's the reason that you're looking to instantiate one job per asset?

geoHeil

05/16/2022, 9:18 PM
Perhaps one job per asset is too fine-grained; per domain would be enough. But my reason is as follows (it is also part of one of my other questions: https://dagster.slack.com/archives/C01U954MEER/p1652600973766459): I need different schedules for several (unique) datasets/assets.
Is this the wrong assumption? Or would this not even be required?

sandy

05/16/2022, 10:43 PM
I see - that makes sense. Do you have an idea of what your dream API would look like?

geoHeil

05/17/2022, 11:02 AM
My rationale was: so far I have needed one job per sensor, and since these assets might be sensor-triggered, I was thinking I would also need one job for each of them. Regarding the API: perhaps a simple for-loop along the lines of `for asset in my_asset_group.asset_items: make_job(asset)` would already do the trick, provided the asset metadata contained any deviations from the base conditions.
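A minimal sketch of that loop in plain Python, assuming a hypothetical `make_job` helper and a dict standing in for `my_asset_group.asset_items`. None of these names (`asset_items`, `make_job`, the metadata keys) are real Dagster APIs; the point is only the shape: one job per asset, with per-asset deviations pulled from metadata and everything else falling back to base defaults.

```python
# Hypothetical sketch: one job per asset, with per-asset deviations
# (partitioning, extra resources such as pyspark) read from asset
# metadata. These names are illustrative, not real Dagster APIs.

def make_job(asset_name, metadata):
    """Build a job description for a single asset.

    Deviations from the base launch configuration come from the
    asset's metadata; anything not specified uses the defaults.
    """
    return {
        "name": f"{asset_name}_job",
        "partitioned": metadata.get("partitioned", False),
        "resources": ["io_manager"] + metadata.get("extra_resources", []),
    }

# Stand-in for my_asset_group.asset_items: asset name -> metadata.
asset_items = {
    "daily_events": {"partitioned": True},
    "reference_data": {},
    "spark_features": {"extra_resources": ["pyspark"]},
}

# The "dream API" loop: one job per asset in the group.
jobs = [make_job(name, meta) for name, meta in asset_items.items()]
for job in jobs:
    print(job["name"], job["partitioned"], job["resources"])
```

The same loop could group by a domain prefix instead of one job per asset, if per-asset granularity turns out to be too fine.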