# ask-community

Simon Frid

02/24/2023, 8:17 PM
I asked this in #dagster-kubernetes, but perhaps it’s better suited here. Is it possible to define an executor on a per-op or per-asset basis? I’d like to create a pipeline that mixes different job executions.
```python
import math

from dagster import Definitions, job, multiprocess_executor, op

@op
def small_compute_job():
    return 5

@op
def big_compute_job(arg):
    return math.pow(arg, arg)

@job
def do_stuff():
    big_compute_job(small_compute_job())

defs = Definitions(
    jobs=[do_stuff], executor=multiprocess_executor
)
```
Even more so, it would be great to be able to define some computation context for an asset:
```python
from dagster import asset

@asset
def a_very_big_table(arg):
    return some_hyper_parallelized_tasks(arg, n=1000)
```


02/24/2023, 8:20 PM
This is very much on our radar as a hole in the current execution model. There are some changes in the works to let you specify how particular steps should be handled within a single run (as you can with, say, IO managers), instead of making it all-or-nothing.

Simon Frid

02/24/2023, 8:38 PM
1. What’s your timeline for introducing such changes?
2. Would it be easier to define every op as its own k8s job, and then override some of them to be co-located in the same k8s job?
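The first half of question 2 is close to what `dagster-k8s` already provides: `k8s_job_executor` launches each step in its own Kubernetes job, and individual ops can override pod settings via the `dagster-k8s/config` tag. A configuration sketch, not runnable outside a cluster with `dagster-k8s` installed (resource values are illustrative; co-locating selected steps back into one Kubernetes job is not shown):

```python
from dagster import job, op
from dagster_k8s import k8s_job_executor

# With k8s_job_executor, each step runs in its own Kubernetes job by default.
@op(
    tags={
        "dagster-k8s/config": {
            "container_config": {
                "resources": {"requests": {"cpu": "8", "memory": "32Gi"}}
            }
        }
    }
)
def big_compute():
    # stand-in for the expensive step
    return 42

@op
def small_compute():
    return 5

@job(executor_def=k8s_job_executor)
def do_stuff():
    big_compute()
    small_compute()
```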