# ask-community
s
I asked this in #dagster-kubernetes, but perhaps it’s better suited here. Is it possible to define an executor on a per-op or per-asset basis? I’d like to create a pipeline that mixes different execution environments, something like:
import math

from dagster import Definitions, job, multiprocess_executor, op

@op
def small_compute_job():
    return 5

# The executor= argument on @op is the hypothetical part of this question;
# k8s_executor stands in for some Kubernetes-backed executor.
@op(executor=k8s_executor)
def big_compute_job(arg):
    return math.pow(arg, arg)

@job
def do_stuff():
    big_compute_job(small_compute_job())

defs = Definitions(
    jobs=[do_stuff], executor=multiprocess_executor
)
Even better, it would be great to be able to define some computation context per asset:
@asset(executor=my_custom_high_cpu_k8s_executor)
def a_very_big_table(arg):
    return some_hyper_parallelized_tasks(arg, n=1000)
d
This is very much on our radar as a hole in the current execution model. There are changes in the works to let you specify how particular steps should be handled within a single run (like you can with, say, IO managers), instead of making it all-or-nothing.
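(For reference, the per-step IO manager selection mentioned above already works roughly like this today; a minimal sketch, with warehouse_io as an illustrative resource key:)

from dagster import Definitions, FilesystemIOManager, asset

# Each asset can already choose its own IO manager by key; the planned
# executor changes would bring similar per-step granularity to execution.
@asset(io_manager_key="warehouse_io")
def a_very_big_table():
    ...

defs = Definitions(
    assets=[a_very_big_table],
    resources={"warehouse_io": FilesystemIOManager(base_dir="/tmp/warehouse")},
)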
s
1. What’s your timeline for introducing such changes?
2. Would it be easier to define every op as its own k8s job and then override some of them to be co-located in the same k8s job? (a sketch of the first half of this is below)
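A rough sketch of the "every op as its own k8s job" half of question 2, as it can be done today with the dagster_k8s library: k8s_job_executor runs each step in its own Kubernetes Job, and the dagster-k8s/config tag requests extra resources for the heavy op. Op names and resource values here are illustrative, not a drop-in answer.

from dagster import job, op
from dagster_k8s import k8s_job_executor

@op
def small_compute_job():
    return 5

# Per-op Kubernetes overrides go on tags; this op asks for more CPU and memory.
@op(
    tags={
        "dagster-k8s/config": {
            "container_config": {
                "resources": {
                    "requests": {"cpu": "2", "memory": "4Gi"},
                    "limits": {"cpu": "4", "memory": "8Gi"},
                }
            }
        }
    }
)
def big_compute_job(arg):
    return arg ** arg

# Every op in this job launches as a separate Kubernetes Job/pod.
@job(executor_def=k8s_job_executor)
def do_stuff():
    big_compute_job(small_compute_job())

Note that executor_def on the @job is still the current granularity: one executor per run, which is the all-or-nothing limitation discussed above.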