# ask-community
j
upgrading from an older version of dagster to a newer one (1.2.1)... seems like dagster is having an issue with pre-existing schedules after the update?
```
dagster._core.errors.DagsterInvariantViolationError: InstigatorState 4bac3b8c656c8a19dde9d79e81c105e115a1fc26 is already present in storage
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/site-packages/dagster_graphql/implementation/utils.py", line 130, in _fn
    return fn(*args, **kwargs)
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/site-packages/dagster_graphql/implementation/fetch_schedules.py", line 31, in start_schedule
    schedule_state = instance.start_schedule(
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/site-packages/dagster/_core/instance/__init__.py", line 2154, in start_schedule
    return self._scheduler.start_schedule(self, external_schedule)
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/site-packages/dagster/_core/scheduler/scheduler.py", line 98, in start_schedule
    instance.add_instigator_state(started_state)
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/site-packages/dagster/_core/instance/__init__.py", line 2272, in add_instigator_state
    return self._schedule_storage.add_instigator_state(state)
  File "/root/.pyenv/versions/3.9.0/lib/python3.9/site-packages/dagster/_core/storage/schedules/sql_schedule_storage.py", line 157, in add_instigator_state
    raise DagsterInvariantViolationError(
The above exception was caused by the following exception:
sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "jobs_job_origin_id_key"
DETAIL: Key (job_origin_id)=(4bac3b8c656c8a19dde9d79e81c105e115a1fc26) already exists.
```
Is there a standard remediation procedure for such things?
p
Hi Jose. What version of dagster were you upgrading from? Did you run `dagster instance migrate` after upgrading?
a
Could be related to https://github.com/dagster-io/dagster/issues/11053 - I've seen that error before when converting schedules to sensors
j
from 0.14.2 to 1.2.1
Hm, I see, so this `dagster instance migrate` command must be run. We're currently using an ECS cluster, so I'm sure there are a variety of ways of achieving this. Sounds like the procedure would be:
1. Ensure no dagster services/runs are ongoing
2. Run `dagster instance migrate` to perform the migration (rough sketch of how I'm picturing step 2 below)
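For step 2, I'm picturing a one-off ECS task (or an exec into a container that has the same `DAGSTER_HOME` / `dagster.yaml`) that runs the CLI, or roughly the Python equivalent below. This is just a sketch: I'm assuming `DagsterInstance.upgrade()` is what the CLI calls under the hood, so it's worth double-checking against the version we're on.
```python
# Rough sketch of the migration step, meant to run as a one-off task in a
# container that already has DAGSTER_HOME set and dagster.yaml pointing at
# the production Postgres storage. Assumes DagsterInstance.upgrade() is the
# programmatic equivalent of `dagster instance migrate`.
from dagster import DagsterInstance

if __name__ == "__main__":
    instance = DagsterInstance.get()   # loads $DAGSTER_HOME/dagster.yaml
    instance.upgrade(print_fn=print)   # apply any pending storage migrations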
p
We should be able to support the version upgrade without the migration, but you are hitting a bug which is causing the constraint violation (maybe due to a sensor/schedule rename that Adam pointed out).
If running the migration is a big hassle, I can try to dig into another code-based workaround if you can include a little more about the schedules/sensors you have running and maybe a snapshot of the `jobs` table you have in your DB.
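Something like the sketch below would be enough to grab that snapshot. The connection string is a placeholder and the exact column set of the `jobs` table can vary by version, but `job_origin_id` is the value showing up in your error.
```python
# Sketch: dump the rows that back schedule/sensor state from the Dagster
# schedule storage. The table name `jobs` and the `job_origin_id` column come
# from the error above; the connection string below is a placeholder.
from sqlalchemy import create_engine, text

PG_URL = "postgresql://user:password@your-db-host:5432/dagster"  # placeholder

engine = create_engine(PG_URL)
with engine.connect() as conn:
    for row in conn.execute(text("SELECT * FROM jobs")):
        print(row)
```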