# ask-community
Dmitry
I have a somewhat convoluted question, apologies. I am currently writing a job that does a series of transformations on a pandas dataframe. At the end of the pipeline, I'd like to persist the dataframe to a MySQL database. I currently have this implemented as just a standalone `op`, but based on the documentation I've read so far, I have a feeling that there's a way to do this using defined assets and IO managers. Is there a concrete example of this somewhere? Thanks, and sorry for the half-baked question.
Sashikanth
I'm new to Dagster, but check out the docs example "A custom IO manager that stores Pandas DataFrames in tables" and the SnowflakeIOManager; you might need something like that for MySQL
Dmitry
thanks!
c
Yep, Sashikanth pointed out a good example. We've just updated our documentation to contain an example using assets and IO managers as well: https://docs.dagster.io/concepts/io-management/io-managers#applying-io-managers-to-assets
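For reference, something along these lines could work for MySQL (a minimal sketch, assuming SQLAlchemy with a MySQL driver and a table-per-asset naming convention; `MySQLDataFrameIOManager`, the connection-string config, and the `if_exists="replace"` policy are illustrative assumptions, not an official Dagster integration):
```python
import pandas as pd
from sqlalchemy import create_engine

from dagster import IOManager, io_manager


class MySQLDataFrameIOManager(IOManager):
    """Stores each asset's DataFrame in a MySQL table named after the asset."""

    def __init__(self, con_string: str):
        self._engine = create_engine(con_string)

    def handle_output(self, context, obj: pd.DataFrame):
        # Write the DataFrame to a table named after the asset.
        table = context.asset_key.path[-1]
        obj.to_sql(table, self._engine, if_exists="replace", index=False)

    def load_input(self, context) -> pd.DataFrame:
        # Read the upstream asset's table back into a DataFrame.
        table = context.upstream_output.asset_key.path[-1]
        return pd.read_sql_table(table, self._engine)


@io_manager(config_schema={"con_string": str})
def mysql_io_manager(init_context):
    # con_string would look like "mysql+pymysql://user:password@host:3306/dbname"
    return MySQLDataFrameIOManager(init_context.resource_config["con_string"])
```
You would then bind this to the `io_manager` resource key for your assets; the exact wiring depends on your Dagster version.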
Dmitry
sorry, another quick question. Does this no longer work in 0.14.20?
```python
@repository
def my_repository():
    return [
        asset1,
        asset2,
        asset3,
        job1_schedule,
        job2_sensor,
        job3,
    ]
```
I'm getting the following error:
```
dagster.core.errors.DagsterInvalidDefinitionError: Bad return value from repository construction function: all elements of list must be of type JobDefinition, GraphDefinition, PipelineDefinition, PartitionSetDefinition, ScheduleDefinition, or SensorDefinition. Got value of type <class 'dagster.core.asset_defs.assets.AssetsDefinition'> at index 1.
```
c
Hi Dmitry. This will not work in 0.14.20; returning assets directly from a repository is functionality we enabled only starting from 0.15.0 (released today)
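For anyone stuck on 0.14.x, one possible workaround (an untested sketch; `AssetGroup` was the pre-0.15 way to bundle asset definitions into a repository, and `asset1` through `asset3` refer to the snippet above):
```python
from dagster import AssetGroup, repository


@repository
def my_repository():
    # Wrap the assets in an AssetGroup so the repository list only
    # contains types that 0.14.x accepts.
    return [
        AssetGroup([asset1, asset2, asset3]),
        job1_schedule,
        job2_sensor,
        job3,
    ]
```
Upgrading to 0.15.0 and returning the assets directly, as in the original snippet, is the simpler path.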
Dmitry
👍