# ask-community
s
Hi folks, I've gotten a Helm deployment of Dagster running, with my own code deployment in a Docker image, so that's all good. A couple of questions:
• When I fire up my Helm deployment, I see the code deployment image running. When I run a job, it spawns a new instance of that container with the args to execute that run.
  ◦ Is it possible to have jobs run on the pre-existing code deployment container?
• When the container is fired up it starts the gRPC API server like `dagster api grpc -h 0.0.0.0 -p "3031" --module-name mymodule`. I'd like to learn more about gRPC with this as a practical example. I've tried using `grpcurl` to get a list of the available endpoints, though that doesn't work. Where could I find the available endpoints? (A sketch of one workaround follows below.)
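For anyone landing here later: the service that `dagster api grpc` exposes is defined in a protobuf file (`api.proto`, under `python_modules/dagster/dagster/_grpc/protos/` in the open-source repo), and as far as I can tell the server doesn't register gRPC reflection, which would explain why a plain `grpcurl ... list` fails. Here's a rough sketch of pointing `grpcurl` at that proto instead (the file path, the service name `api.DagsterApi`, and the `echo` field are read off the open-source repo, so double-check them against your Dagster version):

```sh
# From outside the cluster, port-forward the code server first.
# (Service name is a placeholder for whatever your user-code deployment is called.)
kubectl port-forward svc/<your-user-code-deployment> 3031:3031 &

# Grab the service definition from the Dagster source tree.
# (Repo path is an assumption based on current master; older releases kept it
#  under dagster/grpc/protos rather than dagster/_grpc/protos.)
curl -sO https://raw.githubusercontent.com/dagster-io/dagster/master/python_modules/dagster/dagster/_grpc/protos/api.proto

# List the services/methods declared in the proto (no reflection needed),
# then describe the service to see its full method signatures.
grpcurl -import-path . -proto api.proto list
grpcurl -import-path . -proto api.proto describe api.DagsterApi

# Call a simple RPC against the code server to confirm it responds.
# The request field name ("echo") comes from the proto; adjust if yours differs.
grpcurl -plaintext -import-path . -proto api.proto \
  -d '{"echo": "hello"}' localhost:3031 api.DagsterApi/Ping
```

As far as I understand, that same proto is what Dagster's own Python gRPC client is generated from, so its method list is effectively the full set of endpoints the webserver and daemon call on your code server.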
c
Regarding your first question, it's not possible to have jobs run on the code server; Dagster will always spin up a new container. You could potentially circumvent this by writing your own RunLauncher, but that’s honestly quite a bit of hassle. Checking with the team regarding your second question.
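If you ever do want to go down that road, the Helm chart is where the launcher gets selected; if I recall correctly it exposes a `runLauncher` block with a custom-launcher option that points at your own module/class. A quick way to see the shape of that config (repo URL and chart name are the ones from the Dagster install docs; double-check against your chart version):

```sh
# Inspect the chart's default values to see how the run launcher is wired up.
helm repo add dagster https://dagster-io.github.io/helm
helm repo update
helm show values dagster/dagster | grep -n -A 15 "runLauncher:"
```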
s
That's fine, thank you. The rationale behind the question was that I can run jobs with `dagster dev` locally quite quickly, but when I deploy to k8s it takes quite a while for the pod to spin up. It's possible this is down to k8s itself, the cluster being lower-powered since it's non-production, though I haven't looked into it deeply yet.
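One quick way to see where that spin-up time goes is to watch the run pod's events; on a small non-production cluster, image pulls are often the slow step. A rough sketch (namespace and pod names below are placeholders for whatever your Helm release uses):

```sh
# Watch run pods appear in the namespace your Helm release deploys into.
kubectl get pods -n dagster -w

# Events for a specific run pod: scheduling delays and image pulls show up
# here with timestamps, which usually points at the bottleneck.
kubectl describe pod <run-pod-name> -n dagster

# Or scan recent events for the whole namespace, oldest first.
kubectl get events -n dagster --sort-by=.metadata.creationTimestamp
```

If image pulls turn out to dominate, pre-pulling the image onto the nodes or revisiting the image pull policy in the chart values usually helps.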