# announcements
n
Something like that?
f
from dagster import ModeDefinition, pipeline

@pipeline(
    mode_defs=[
        # mysql_database / csv_file_path are @resource definitions from elsewhere in the project
        ModeDefinition(name="local_dev", resource_defs={"database": mysql_database}),
        ModeDefinition(name="unittest", resource_defs={"database": csv_file_path}),
    ],
)
def my_pipeline():  # placeholder name and body
    ...
n
This isn't something that needs environment-level replacement though
It's just a general purpose parameter
Hence asking about the solid config approach
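(For illustration, a minimal sketch of what the solid config approach could look like on the legacy @solid/@pipeline API; the solid name process_file and the file_path config key are made-up examples, not from the thread.)

from dagster import execute_pipeline, pipeline, solid

@solid(config_schema={"file_path": str})
def process_file(context):
    # The general-purpose parameter arrives through solid config rather than a resource
    context.log.info(f"processing {context.solid_config['file_path']}")

@pipeline
def file_pipeline():
    process_file()

# Each run can supply a different value in run_config
execute_pipeline(
    file_pipeline,
    run_config={"solids": {"process_file": {"config": {"file_path": "/data/file_001.csv"}}}},
)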
f
I also thought it would be great to have pipeline parameters, but I think the key concept for inputting data into a pipeline is based on the @resource component
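(For comparison, a sketch of the @resource-based way to feed a value in; the file_source name and its config key are assumptions for illustration.)

from dagster import resource, solid

@resource(config_schema={"path": str})
def file_source(init_context):
    # Resources are configured per run/mode and handed to solids that require them
    return init_context.resource_config["path"]

@solid(required_resource_keys={"file_source"})
def load(context):
    context.log.info(f"reading {context.resources.file_source}")

Such a resource would then be attached to the pipeline through ModeDefinition(resource_defs={"file_source": file_source}), as in the earlier snippet.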
n
I mean this is not optional, the whole point of this is to be parameterized
That seems like a very common use case
I have N files, they need processing, launch N pipeline runs, one for each.
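(One possible shape for that fan-out, sketched as a plain loop over execute_pipeline; the glob pattern is invented and file_pipeline refers to the sketch above.)

from glob import glob
from dagster import execute_pipeline

# Launch one pipeline run per discovered file, each parameterized through run_config
for path in glob("/incoming/*.csv"):
    execute_pipeline(
        file_pipeline,  # the pipeline sketched earlier in the thread
        run_config={"solids": {"process_file": {"config": {"file_path": path}}}},
    )

In a deployed setting the same per-file run_config could be submitted by whatever process discovers the new files.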
f
Your resource could be a list of file paths. Then you merge all the data as one dataset and process it. Would that work?
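(That suggestion as a sketch: a list-of-paths resource feeding a single merged run; the names input_files and merge_and_process are hypothetical.)

from dagster import Array, resource, solid

@resource(config_schema={"paths": Array(str)})
def input_files(init_context):
    return init_context.resource_config["paths"]

@solid(required_resource_keys={"input_files"})
def merge_and_process(context):
    # All configured files are handled inside one run
    for path in context.resources.input_files:
        context.log.info(f"merging {path}")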
n
No, again the whole point of this is one pipeline run per file, because there's an unknown number of them and they have to be updated at different times
Is this not a very common use case?
Pretty standard ETL processing.
f
Not sure then how this could be accomplished. I’m pretty new to Dagster. Sorry
n
I mean I laid out 3 different approaches, just trying to work out what the usual one is for the Dagster community.