# ask-community
j
Hey y'all. I'm running Dagster in Kubernetes using the Helm chart. However, I don't like how it spins up a new job (and new pod) every time that I then have to clean up. It also requires me to allocate resources (memory and CPU) for the "user code deployment" workload matching what I need for each job's pod - wasteful. Is there a way to use the DefaultRunLauncher? I tried configuring it in the runLauncher section of the values.yaml for my Helm package, but I get an error that only the K8s, Celery, or a custom run launcher is supported. Basically, I'd like all my jobs to run on a dedicated pod that's pre-spun up with the appropriate resources. And ideally not waste CPU and memory with the user code deployment pod sitting there if it's only being used to clone...
d
Hey jayme - it’s possible to configure the run pods to have different resource limits than the user code deployment pods - would that help? That said, I do think you could configure the default run launcher as a “custom” run launcher. The main downsides are that, because every run would now execute in the same pod, it’s easier for runs to disrupt each other, and they’ll also stop running if you roll the user code deployment in the middle of a run.
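A sketch of the first suggestion above - giving run pods their own resource limits via the Helm chart's `runK8sConfig` - assuming a chart version that supports `containerConfig` under `runLauncher.config.k8sRunLauncher`; the resource values themselves are illustrative:

```yaml
# values.yaml (sketch; CPU/memory figures are placeholders)
runLauncher:
  type: K8sRunLauncher
  config:
    k8sRunLauncher:
      runK8sConfig:
        containerConfig:
          resources:
            requests:
              cpu: "500m"
              memory: "1Gi"
            limits:
              cpu: "1"
              memory: "2Gi"
```

With this in place, run pods get the resources above while the user code deployment keeps whatever smaller requests are set on it separately.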
j
Thanks Daniel. The solution I’m using this for is a small business owner with about 20ish employees. They will only be running one job at a time and doing updates during off hours. How would I go about setting up the default run launcher as a custom one? And would the runs then be using the daemon, or user code pod for resources? Thanks for your help Daniel!
d
The runs would be using the user code pod in that case
j
Excellent
d
I think this would work:
Copy code
runLauncher:
  type: CustomRunLauncher
  config:
    customRunLauncher:
      module: dagster
      class: DefaultRunLauncher
      config: {}
j
Sweet! I’ll give it a try and get back to you. Dagster is incredible. I would’ve killed to have technology like this earlier in my career!
Works beautifully @daniel thanks again man!
d
I know you seem to be moving away from the K8sRunLauncher, but you can avoid manually cleaning up/deleting completed pods by setting these values in your Helm template
Copy code
runLauncher:
  config:
    k8sRunLauncher:
      runK8sConfig:
        jobSpecConfig:
          ttlSecondsAfterFinished: 7200
Credit on this one to @daniel (as usual)