# ask-community
d
My schedules don't seem to run, even though the underlying job does run, and sensors run. There are potentially two relevant messages in the logs:
Schedule schedule was started from a location foo.py that can no longer be found in the workspace, or has metadata that has changed since the schedule was started.
Fork support is only compatible with the epoll1 and poll polling strategies
a
did you run dagster-daemon?
d
Yes. Those are the logs from dagster-daemon
a
did you set same dagster home directory for daemon and dagit?
d
yes
a
export DAGSTER_HOME=~/dagster_home
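A quick way to double-check that, as a sketch (not something from the thread): run the lines below once with the interpreter that launches dagit and once with the one that launches dagster-daemon, and compare the output.

import os
import sys

# Both runs should print the same DAGSTER_HOME; the interpreter path on the
# second line turns out to matter later in this thread.
print("DAGSTER_HOME:", os.environ.get("DAGSTER_HOME"))
print("executable:", sys.executable)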
d
and the daemon status in dagit is showing green and my sensors are running
just not my schedules
a
sometimes, deleting everything from the dagster home directory and restarting helps
delete everything except dagster.yaml
d
I am using the postgres storage for pretty much everything. Does that have to go as well?
a
not sure, I never use it
but you can try
for development for sure
p
Hmm… @Daniel Mosesson what version of dagster are you using? Are all the schedules turned on the way that you would expect? Are you starting the daemon the same way you’re starting dagit?
It’s odd that sensors are running but that schedules are not…
d
0.14.5 and yes
I looked at the grpcio repo, and there is an issue that seems related to version 1.44.0; going to try downgrading to 1.43.0 and see if that changes anything
@prha Since downgrading to the 1.43.0 version, I no longer get the note about fork support, but no change in the schedules running. Looking at the sensor runs closely, I see that the sensor is running in the logs (I output some log messages), but in the UI it says that the sensor never ran
Log entries contain things like "Checking for new runs for sensor: <...>", "Skipping run for <run key here>", etc.
Anyone have any other thoughts on what it could be?
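For context: the "Skipping run" lines are what the daemon logs for ordinary run-key deduplication, not an error. A minimal sketch of a sensor that would produce them; every name here is hypothetical, since foo.py itself never appears in the thread:

from dagster import RunRequest, job, op, sensor

@op
def noop():
    ...

@job
def foo_job():
    noop()

@sensor(job=foo_job)
def foo_sensor(context):
    # A RunRequest whose run_key matches one from an earlier tick is
    # deduplicated by the daemon, which logs "Skipping run for <run key>".
    yield RunRequest(run_key="some-stable-key", run_config={})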
d
Hi Daniel - would you mind sharing the contents of your workspace.yaml file? If you go to Dagit and click on Status => Schedules, is anything listed under "Unloadable Schedules"?
d
I don't see that section
I just see my two repositories, one schedule in each
I do see the next tick filled out correctly as well
# workspace.yaml
load_from:
  - python_file:
      relative_path: <path in dagster home>
  - python_file:
      relative_path: <path in dagster home>
d
are you running dagster-daemon from the same folder as dagit?
And does it also have access to foo.py?
d
yes
the job in foo.py runs when I run it from the launchpad
(and when it gets kicked off from the sensor as well)
all dagster* processes are running as the same user, which also owns dagster home
d
Is there any chance your dagit and daemon are running in different python environments? Or that your python environment has changed since you first started the schedule?
Did you turn the schedules and sensors on at the same time originally?
d
they are running in the same environment, and I have restarted them since the last update. Initially, somehow one was version 0.14.5 and one was 0.14.0 or something
d
One thing you could do is DM me the contents of your schedules.db file in your DAGSTER_HOME directory, it's possible there is a clue in there
d
both sensors and schedules are set to be on by default in the code
I am using the postgres storage engine
I don't see any schedule related tables there though
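Presumably "on by default in the code" means the default_status arguments (an assumption, since the thread never shows the code; these arguments exist in the 0.14 line). A minimal sketch with hypothetical names:

from dagster import (
    DefaultScheduleStatus,
    DefaultSensorStatus,
    RunRequest,
    ScheduleDefinition,
    job,
    op,
    sensor,
)

@op
def noop():
    ...

@job
def foo_job():
    noop()

# Schedule that is on without a manual toggle in dagit
foo_schedule = ScheduleDefinition(
    job=foo_job,
    cron_schedule="0 * * * *",
    default_status=DefaultScheduleStatus.RUNNING,
)

# Sensor that defaults to on as well
@sensor(job=foo_job, default_status=DefaultSensorStatus.RUNNING)
def foo_sensor(context):
    yield RunRequest(run_key=None, run_config={})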
d
Any chance you could DM the output of "SELECT * FROM jobs"?
in postgres
d
I can't DM because that is on a different network, but:
• I see four jobs, three have a job type of schedule, one has a job type of sensor
• two of the schedules have a status of running
• the job body seems like everything is pointing in the right places (right python file, right working dir)
• the origin is of type ExternalJobOrigin, and the status listed in the job_body is running
d
the contents of the ExternalJobOrigin are what I was hoping to see
and whether it differs between a sensor that is working, and a schedule that is not
d
I can retype, give me a few
d
even better if it's from the same repository
d
I have that
# not working schedule
"origin": {
  "__class__": "ExternalJobOrigin",
  "external_repository_origin": {
    "__class__": "ExternalRepositoryOrigin",
    "repository_location_origin": {
      "__class__": "ManagedGrpcPythonEnvRepositoryLocationOrigin",
      "loadable_target_origin": {
        "__class__": "LoadableTargetOrigin",
        "attribute": null,
        "executable_path": "/usr/local/bin/python3",  # running python 3.9 built from source
        "module_name": null,
        "package_name": null,
        "python_file": "<dagsterhome>/foo.py",
        "working_directory": "<dagsterhome>"
      },
      "location_name": "foo.py"
    },
    "repository_name": "foo_repository"
  },
  "job_name": "schedule"
}
the sensor is the same except for a different executable path (/usr/local/bin/python3.9)
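A field-by-field diff makes this kind of mismatch easy to spot. A sketch, assuming you paste the two job_body JSON payloads into files named schedule_job_body.json and sensor_job_body.json (hypothetical names):

import json

def flatten(d, prefix=""):
    # Flatten nested dicts into dotted keys for an easy line-by-line diff.
    out = {}
    for k, v in d.items():
        key = f"{prefix}.{k}" if prefix else k
        if isinstance(v, dict):
            out.update(flatten(v, key))
        else:
            out[key] = v
    return out

with open("schedule_job_body.json") as f:
    a = flatten(json.load(f))
with open("sensor_job_body.json") as f:
    b = flatten(json.load(f))

for key in sorted(set(a) | set(b)):
    if a.get(key) != b.get(key):
        print(f"{key}: {a.get(key)!r} != {b.get(key)!r}")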
d
OK, that shouldn't matter (we are actively working on making it not matter), but right now it does, and I suspect that's your problem. Did the executable path somehow change between when you started the sensor and when you started the schedule?
d
it could have
even though those are symlinks to the same file?
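Symlinks don't help here: the stored origin apparently keeps the literal path string, so a symlink and its target read as two different executables. A tiny illustration with the two paths from this thread:

from pathlib import Path

p1 = Path("/usr/local/bin/python3")
p2 = Path("/usr/local/bin/python3.9")

# As stored strings the two executables differ...
print(str(p1) == str(p2))  # False
# ...even though resolving the symlink shows the same interpreter
# (True on a machine where python3 is a symlink to python3.9).
print(p1.resolve() == p2.resolve())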
d
If you turn the schedule off and on again in dagit, and you're sure that dagit and the daemon are using the same executable path, that should work
d
trying now
d
yeah, this is a big rough edge that we are fixing in the next couple of weeks, apologies that you ran into it
That "or has metadata that has changed since the schedule was started" message you ran into originally will be gone soon, at which point minor differences like this will no longer matter
d
that made a change, I now see copies of the schedules under unloadable
waiting for it to fire
that did it
thank you thank you
d
no problem, we'll get this fixed. To confirm, you have running schedules in dagit, and also unloadable schedules in dagit? You should be able to turn off the unloadable ones
what you want is for dagit and the daemon to be using the exact same executable path, and for there to be no unloadable schedules in dagit
d
I think it all would have been avoided had I used the "packaged" version of dagit as opposed to
python -m dagit ...
got it