#deployment-kubernetes

Guillaume Onfroy

07/14/2023, 2:59 PM
Is there a way to launch runs within the code location pod and not in a separate pod (e.g. from the `K8sRunLauncher`)? I was hoping to run steps directly within the initial user deployment container to avoid the really long cold starts we get with separate jobs. It feels like the `runLauncher` config should allow something like that:
```yaml
runLauncher:
  type: DefaultRunLauncher
```
But it doesn't...

rex

07/14/2023, 3:16 PM
It supports this. You just have to use `type: CustomRunLauncher`.
```yaml
runLauncher:
  type: CustomRunLauncher
  customRunLauncher:
    module: dagster
    class: DefaultRunLauncher
    config: {}
```
See this previous discussion.

Guillaume Onfroy

07/14/2023, 3:17 PM
Ah great! Thanks a lot ❤️
@rex I'm still seeing really long run cold starts, even then. Is there a way to mitigate that?
My assumption was that the cold start was coming from all the assets being freshly imported and parsed in the new container.
But apparently not

rex

07/14/2023, 3:50 PM
Are you getting any improvement at all? There's the container startup time, Kubernetes pod allocation, etc. that is required if you want isolation for your jobs. Since you don't want isolation, the `DefaultRunLauncher` will run the job directly in your long-standing code server. But if you are running multiple jobs in this code server, execution will get bogged down, since your code server may not have enough resources to handle the simultaneous execution. I would start by beefing up your code server resources.

Guillaume Onfroy

07/14/2023, 4:03 PM
I'm only seeing some really minor improvement
Here are the numbers:
```
K8sRunLauncher
RUN_ENQUEUED >> 133 sec >> RUN_START >> 107 sec >> STEP_START

DefaultRunLauncher
RUN_ENQUEUED >> 103 sec >> RUN_START >> 98 sec >> STEP_START
```
I will do a quick test with much beefier resources but, I dunno, I don't feel like it's coming from that.
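One thing worth checking for the large RUN_ENQUEUED-to-RUN_START gap above: with the queued run coordinator, the daemon only dequeues runs on a polling interval, which adds latency before a run even starts. A hedged sketch of the relevant Helm values, assuming the standard `dagsterDaemon.runCoordinator` layout (the numbers are illustrative):

```yaml
# Hypothetical values.yaml snippet -- intervals and limits are illustrative.
dagsterDaemon:
  runCoordinator:
    enabled: true
    type: QueuedRunCoordinator
    config:
      queuedRunCoordinator:
        dequeueIntervalSeconds: 5     # how often queued runs are picked up
        maxConcurrentRuns: 10         # cap on simultaneous runs
```

This would not explain the full delay, but tightening the dequeue interval is a cheap experiment alongside the resource bump.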