# ask-community
Hi team, I launched a run in dagit and got this error:
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{"created":"@1653050771.622000000","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3129,"referenced_errors":[{"created":"@1653050771.622000000","description":"failed to connect to all addresses","file":"src/core/lib/transport/error_utils.cc","file_line":163,"grpc_status":14}]}"

  File "c:\users\henry\anaconda3\lib\site-packages\dagster\grpc\client.py", line 107, in _query
    response = getattr(stub, method)(request_type(**kwargs), timeout=timeout)
  File "c:\users\henry\anaconda3\lib\site-packages\grpc\_channel.py", line 946, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "c:\users\henry\anaconda3\lib\site-packages\grpc\_channel.py", line 849, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
Then I went to the dagit workspace and reloaded the job (the last reload was 3 days ago), re-ran it, and the error was magically gone; the job executed successfully. What might be causing the error above? And is there a way to set up dagit to auto-reload the workspace periodically? Thank you!
cc @daniel any ideas on this?
Hi Henry, one way to do this is to run your own gRPC server in a Docker container, as in the example here: https://docs.dagster.io/deployment/guides/docker. That way, Docker will automatically bring the server back up when it goes down.
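A minimal sketch of that setup (the service name, port, and repository file below are placeholders, not from this thread): a docker-compose service starts the user-code gRPC server with `dagster api grpc`, and Docker's restart policy brings it back if it crashes.

```yaml
# docker-compose.yml (sketch; names and ports are hypothetical)
version: "3.7"
services:
  user_code:
    build: .
    # Serve the repository over gRPC; repo.py stands in for your repository file
    command: dagster api grpc -h 0.0.0.0 -p 4266 -f repo.py
    restart: always   # Docker restarts the container whenever the server goes down
```

Dagit would then point at that server via a `grpc_server` entry in workspace.yaml (matching host and port), so a crashed server is restarted by Docker instead of requiring a manual workspace reload.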
making dagit auto-reload a server when it goes down is a good idea too, even when it's local; i'll cut an issue for that
@Dagster Bot issue If a local gRPC server goes down, Dagit should be able to reload / restart it without any action needed by a user