# deployment-kubernetes
Let’s say I have a single job in which half the ops can run in the main k8s job, and the other half need to be spun out as their own k8s jobs. What’s the best way to do this? Is it to use the default k8s executor and manually create an `execute_k8s_job` for every desired isolated container? Is there a way to create a container per op by default? And is there a convenient way to spin up containers that by default use the same image as the main dagster job?
using https://docs.dagster.io/_apidocs/libraries/dagster-k8s#dagster_k8s.k8s_job_executor in combination with the https://docs.dagster.io/_apidocs/libraries/dagster-k8s#dagster_k8s.K8sRunLauncher might be what you're looking for. the RunLauncher will create a k8s job for each dagster JobRun; then, within that job, the k8s_job_executor will create an additional k8s job for every step (op)
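A minimal sketch of that combination, assuming the `dagster` and `dagster-k8s` packages are installed and the `K8sRunLauncher` is configured in the instance's `dagster.yaml` (the op and job names here are hypothetical):

```python
from dagster import Definitions, job, op
from dagster_k8s import k8s_job_executor


@op
def return_five():
    return 5


@op
def add_one(arg):
    return arg + 1


# With k8s_job_executor, each op in this job runs as its own k8s job;
# the K8sRunLauncher (configured at the instance level) launches the
# run worker itself as a k8s job.
@job(executor_def=k8s_job_executor)
def isolated_job():
    add_one(return_five())


defs = Definitions(jobs=[isolated_job])
```

Setting `executor_def` per job like this lets other jobs in the same `Definitions` keep the default in-process/multiprocess execution.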
i think what i was hoping for was something along the lines of…
from dagster import Definitions, job, multiprocess_executor, op

@op
def return_five():
    return 5

@op
def add_one(arg):
    return arg + 1

@job
def do_stuff():
    add_one(return_five())

defs = Definitions(
    jobs=[do_stuff], executor=multiprocess_executor
)
and i know that this doesn’t exactly make sense as there is no such executor, but if there were, then it would essentially, by default, be able to inherit the properties of the master process
what’s weird about this, if i’m understanding dagster correctly, is that the k8s job process is meant to be an encapsulating process for all ops
so the top level job still runs multiple processes, but in a single k8s job (and ergo pod), by default
you understand it correctly; currently the lines between the different abstractions for execution are a bit blurry
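on the image question above: by default, the step jobs launched by `k8s_job_executor` reuse the image of the run worker, so the executor's `job_image` run-config field only needs to be set to override it. a sketch of the run config (the image name is hypothetical):

```yaml
execution:
  config:
    # job_image is optional; if omitted, each per-op k8s job reuses
    # the image the K8sRunLauncher used for the run itself
    job_image: "my-registry/my-dagster-image:latest"
```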