How do the gRPC code repositories work on k8s when...
# ask-community
How do the gRPC code repositories work on k8s when job pods are spawned? Is the code being executed on the remote pod, i.e. does resource allocation have to be devoted to the code repository? Will I start running into issues when I have, let's say, 100 jobs running from one code repo?
The `DefaultRunLauncher` will launch runs as subprocesses on the code server, but the standard Helm deploy for k8s uses the `K8sRunLauncher`, which launches each run in its own Kubernetes Job pod — so the run's compute happens there, not on the code server.
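As an illustration of where that choice lives, here is the shape of the run launcher section of the instance config (what the Helm chart renders into `dagster.yaml`), written out as a Python dict so it can be inspected. The `job_namespace` value is a placeholder; check your chart version for the authoritative schema.

```python
# Illustrative sketch of the instance-level run launcher config,
# not a literal dagster.yaml file. With K8sRunLauncher, each run
# is its own Kubernetes Job, so run resources are allocated per
# run pod rather than on the code server.
instance_config = {
    "run_launcher": {
        "module": "dagster_k8s",
        "class": "K8sRunLauncher",
        "config": {
            # Placeholder namespace where run Jobs are created.
            "job_namespace": "dagster",
        },
    }
}

print(instance_config["run_launcher"]["class"])
```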
There are calls made to the code server before a run starts, to verify that the arguments to the run will result in a valid job definition and to create the execution plan. These calls should be fast, though, so I would only expect them to be problematic under very high load, very low resources, or very high complexity in the definitions.
Sensor and schedule evaluation is another thing that happens on the code server, and if it is computationally expensive it can impact other calls. The gRPC code server uses a pool of threads to handle requests and serve responses; the number of workers can be controlled via the CLI arg `-n, --max-workers`.
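To see why the worker count matters, here is a toy model of that thread pool using Python's `concurrent.futures` — this is not Dagster's actual server code, just a sketch of the queueing behavior. One slow sensor evaluation ties up a worker, while fast validation calls share the remaining one; with too few workers, everything queues behind the slow call.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(name, duration):
    # Stand-in for the code server doing work for one gRPC call.
    time.sleep(duration)
    return name

# Model a code server started with `-n 2` (two worker threads).
with ThreadPoolExecutor(max_workers=2) as pool:
    start = time.monotonic()
    # One slow sensor evaluation plus several fast pre-run validation calls.
    futures = [pool.submit(handle_request, "sensor_eval", 0.5)]
    futures += [pool.submit(handle_request, f"validate_{i}", 0.05) for i in range(4)]
    results = [f.result() for f in futures]
    elapsed = time.monotonic() - start

# The slow call occupies one worker while the four fast calls run
# sequentially on the other, so total time is roughly
# max(0.5, 4 * 0.05) rather than the sum of all durations.
print(results)
```

If all workers were busy with expensive sensor evaluations, even the fast validation calls would sit in the queue — which is the load scenario described above where raising `--max-workers` helps.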