Hi, is there any information comparing execution overheads for the various executors?
I have used the k8s executor before and I know it had lots of latency introduced by pod creation overheads, so I'm thinking maybe I can shave some time there by using dask or celery. If nothing empirical, anecdotal evidence will do 😄
sandy
06/22/2022, 2:50 PM
@johann - do you have thoughts here?
johann
06/22/2022, 2:54 PM
Lowest overhead would of course be in_process or multi_process: each run will take place inside a single K8s Job, so you'll only pay the startup latency once.
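For reference, a minimal run-config sketch for selecting the multiprocess executor. This is an assumption-laden example: the exact config schema varies across Dagster versions, so treat the keys below as illustrative rather than authoritative.

```yaml
# Hypothetical run config sketch: choose the multiprocess executor
# and cap how many ops run concurrently within the single K8s Job.
# Key names may differ depending on your Dagster version.
execution:
  config:
    multiprocess:
      max_concurrent: 4
```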
Celery or Dask: it's probably possible to achieve lower latency than k8s_job_executor if you deploy the workers with your job code. We don't provide this option out of the box in the Helm chart.