# ask-community

Johannes Müller

07/13/2023, 1:46 PM
Hi, it seems like my job runs immediately when I reload the code location:
```
# sudo docker-compose up docker_pipeline_code_location
docker_pipeline_code_location is up-to-date
Attaching to docker_pipeline_code_location
docker_pipeline_code_location          | generating table # debug output from the running pipeline
...
```
This happens even though the schedule is explicitly configured with `default_status=DefaultScheduleStatus.STOPPED`. For the first 10 minutes the code location fails to reload in dagit with an error that it cannot be reached while the pipeline is running; afterwards it works as expected. Can I prevent the job from running when I reload the code location? The Dockerfile looks like this:
```
...
# Run dagster gRPC server on port 4001

EXPOSE 4001
CMD ["dagster", "api", "grpc", "-h", "0.0.0.0", "-p", "4001", "-m", "customer_io"]
```

Zach

07/13/2023, 3:16 PM
How is your job being defined / referenced? It feels like you're running `.execute_in_process()` or something similar at the global level

chris

07/13/2023, 5:02 PM
What Zach said. Another possibility is that the daemon has leftover ticks to get through and is just reading them off the queue even though the schedule is stopped
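The leftover-ticks behavior can be illustrated with a toy model (this is not dagster's internals, just the queueing idea): stopping a schedule gates the *enqueueing* of new ticks, but ticks already queued before the stop are still drained and launched by the daemon.

```python
from collections import deque

# Ticks that were queued while the schedule was still running.
tick_queue = deque(["tick-1", "tick-2", "tick-3"])

schedule_running = False  # the schedule has since been stopped
launched = []

# Stopping the schedule only prevents NEW ticks from being added...
if schedule_running:
    tick_queue.append("tick-4")

# ...but the daemon still drains whatever is already on the queue.
while tick_queue:
    launched.append(tick_queue.popleft())

print(launched)  # the leftover ticks still launch runs
```

Under this model, runs triggered right after a reload would correspond to ticks enqueued before the schedule was stopped, which matches the symptom described above.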

Johannes Müller

07/14/2023, 6:07 AM
@Zach Oh, that is great news! I thought that was the default behaviour. I just define a schedule like this; I don't call any execute functionality at all.
@chris Leftover ticks sound like a distinct possibility, since the job runs every 10 minutes. I'll investigate that. Is there a general way to prevent that from happening?
I can confirm that I can see these ticks in the scheduler overview.
Running the pipelines before reloading the code location takes a significant amount of time. Is there some way to clean those leftover ticks out?