paul.q

06/08/2021, 7:52 AM
Hi, we're attempting an upgrade to 0.11.11 from 0.10.9. We've done
dagster instance migrate
and it reported no errors. However, the dagster-daemon startup reports:
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "bulk_actions" does not exist ...
Looking at our postgres DB I can see that there is no 'bulk_actions' table created. How can we get this table created properly? Our dagster.yaml looks like so:
run_storage:
  module: dagster_postgres.run_storage
  class: PostgresRunStorage
  config:
    postgres_db:
      <usual postgres config>  
event_log_storage:
  module: dagster_postgres.event_log
  class: PostgresEventLogStorage
  config:
    postgres_db:
      <usual postgres config>     
scheduler:
  module: dagster.core.scheduler
  class: DagsterDaemonScheduler
schedule_storage:
  module: dagster_postgres.schedule_storage
  class: PostgresScheduleStorage
  config:
    postgres_db:
      <usual postgres config>
daniel

06/08/2021, 2:00 PM
Hi paul - do you have the ability to run psql on your postgres DB, and if so would it be possible to paste the result of "SELECT * FROM alembic_version"?
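For reference, the check daniel is asking for can be run as a one-liner from the shell (the database name below is a placeholder - substitute your own connection details):

```shell
# Query the alembic_version table, which records the current schema
# migration revision. "dagster_db" is a placeholder database name.
psql -d dagster_db -c "SELECT * FROM alembic_version;"
```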
paul.q

06/08/2021, 10:09 PM
Sure, it's 7cba9eeaaf1d
daniel

06/09/2021, 1:56 AM
got it - and just to confirm, you didn't change anything else about your postgres setup or config other than the version upgrade?
follow-up question: is there any chance you know what your alembic revision was prior to the upgrade? e.g. from a backup taken before the upgrade?
assuming not - here are the steps I'd recommend to complete the migration. This should be safe, but out of an abundance of caution I'd recommend backing up your DB first if you want to be sure of keeping your data.
• Run "DROP TABLE alembic_version;" on your postgres DB (this tells our migrate script to start from the beginning)
• Run "dagster instance migrate" again. It should complete successfully now and create the table.
If you do happen to have a dump or backup of your DB from before you first migrated and it didn't create the table, that would be very useful.
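The steps above can be sketched as shell commands (the database name and file paths are placeholders - adjust them to your own setup):

```shell
# 1. Back up the DB first, out of an abundance of caution.
#    "dagster_db" and the dump filename are placeholder names.
pg_dump dagster_db > dagster_db_backup.sql

# 2. Drop the alembic_version table so the migrate script
#    starts from the beginning.
psql -d dagster_db -c "DROP TABLE alembic_version;"

# 3. Re-run the Dagster schema migration; it should now
#    create the missing bulk_actions table.
dagster instance migrate
```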
paul.q

06/09/2021, 3:35 AM
Well, I do have the old alembic version: 4ea2b1f6f67b. This was fetched from the postgres DB we use for another instance that's still at 0.10.9. I'll wait for your answer on this before I go the nuclear DROP TABLE option. What other information would you like about the old DB, Daniel?
daniel

06/09/2021, 1:55 PM
If it's possible to run
pg_dump --schema-only <your db name> > schema.txt
on a DB that's in the state it was before you migrated and send me the resulting text file, we could use that to try to reproduce the problem on our end
paul.q

06/09/2021, 10:26 PM
Hope this is what you're after Daniel
daniel

06/10/2021, 1:52 AM
thanks paul, this is perfect, I'll try upgrading from 0.10.9 with that schema and see if I can reproduce the problem
OK, that was extremely instructive. Question for you - did you possibly add a PostgresScheduleStorage to your dagster.yaml at a later time than the other two storages? (It's fine if you did, I think it just uncovered a bug)
Asking because the dump you gave me didn't include any of the schedule tables that are usually created with PostgresScheduleStorage - the first time you run dagster with that set in your instance, it automatically creates them
(Also i'm pretty confident that the 'nuclear' option we discussed will work, although I'd still save the data first just in case - I tried it using your dump and it created all the right tables)
paul.q

06/10/2021, 4:54 AM
Thanks for investigating Daniel. Yes we did add PostgresScheduleStorage at a later date than the other two. That was simply because I think the other two pre-dated 0.10.x (when dagster-daemon was introduced) and we upgraded to 0.10.x without initially noticing the ScheduleStorage option.
daniel

06/10/2021, 12:31 PM
Great, thanks for your help investigating, I filed an issue here for us to address this going forward: https://github.com/dagster-io/dagster/issues/4275
(But the mitigation in the thread here should work for you for now)