# ask-community
k
Hey guys, I'm restarting the dagster user code container at the start of every pipeline
c
hey! checking in with the team on this.
k
thanks
I'm kinda implementing this refresh logic in prod, so pls let me know if it's a good idea or not xd
hey @chris, any update on this?
hey @johann, can you pls advise me on it?
j
What run launcher are you using?
If the DefaultRunLauncher, restarting the gRPC server would kill in-progress runs.
If a different run launcher (e.g. Docker, ECS, K8s), the runs should continue unaffected. It's possible you'd hit timeouts on requests to the gRPC server, e.g. sensor ticks.
k
No, I'm using the DockerRunLauncher. I have multiple pipelines that need to update some variables every time they run
So before running any pipeline, I'm restarting the gRPC server using the Python Dagster client:

```python
client.shutdown_repository_location(REPO_NAME)
```
Since there are multiple pipelines whose schedules are independent, each of them will restart the user code Docker container on its own run to fetch the updated variable values before running
Can this cause any problems in the long run due to this race condition?
j
> fetch the updated var values before running
I don't really understand this: restarting the gRPC server will make it pick up changes to jobs etc., but your job is already launched, right?
k
yeah, the changes don't need to be reflected in that very same run
but they should be picked up by future runs. I would've liked a way to update the variables before the current run as well
but that doesn't seem to be possible, pls correct me if I'm wrong
j
env vars?
k
some variables that I need to pass to the pipeline; they're stored in a key-value store
and they can be updated by anyone at any time
So I was hoping to find a way to get the updated values and pass them to the pipeline before the Dagster run
j
Could you just reach out to the KVS from the run itself
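The "reach out to the KVS from the run itself" idea can be sketched without any Dagster specifics; here a plain dict stands in for the hypothetical KVS client:

```python
# Sketch of fetching config inside the run rather than at server start.
# FAKE_KVS is a stand-in for a real key-value store client (hypothetical).
FAKE_KVS = {"batch_size": "500"}

def run_pipeline_step() -> str:
    # Looked up at execution time, so edits to the store are picked up
    # by the next run without restarting the gRPC server.
    batch_size = int(FAKE_KVS["batch_size"])
    return f"processing {batch_size} rows"

print(run_pipeline_step())          # -> processing 500 rows
FAKE_KVS["batch_size"] = "1000"     # someone updates the store
print(run_pipeline_step())          # -> processing 1000 rows
```

Because the lookup happens per run, no restart is needed for these values; only definitions loaded at server start (jobs, schedules) still require reloading the location.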
k
not an exact solution, since even the Dagster cron schedules are stored in the KVS
and those need to be updated as well
j
hmm interesting. Well, once the run is started, I think that would be the only way to get new var values for that run. Maybe you could combine that approach with periodically restarting the gRPC server
It also seems like it may be better to just restart the gRPC server at a regular interval, on a separate schedule, so it picks up new values
Killing it at the start of every run seems messy; e.g. you could have a bunch of runs start around the same time and restart it unnecessarily
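The periodic restart could live in a cron entry. A sketch, assuming a hypothetical helper script `restart_user_code.py` that wraps `client.shutdown_repository_location`:

```
# crontab entry: restart the user code location every 15 minutes
*/15 * * * * /usr/bin/python3 /opt/dagster/restart_user_code.py
```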
k
yeah, that seems like a safe solution. Just to be sure: restarting it periodically won't cause any problems with pipeline execution, right? Given that I'm using the DockerRunLauncher
j
It won’t interrupt running pipelines, no
it may stop a schedule or sensor tick, but they retry
👍 1