
Clement Emmanuel

04/17/2023, 11:32 PM
I'm trying to use `execute_pipeline_iterator` or `execute_job` to launch a Dagster run from within an op. I do this by passing the existing `instance`, which I get from the `OpExecutionContext` of the parent op. This works and I can see the new run happening in the Dagster UI. However, I want to be able to control the execution of the child job: in a k8s context I want it to happen on a new pod, but by default it happens on the same pod as the parent op. Is there something I can do here, or is there a different approach I should be taking?
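
For context, the pattern in question looks roughly like this (a minimal sketch; the op and job names are illustrative, not from the thread):

```python
from dagster import OpExecutionContext, execute_job, job, op, reconstructable

@op
def child_op():
    pass

@job
def child_job():
    child_op()

@op
def parent_op(context: OpExecutionContext):
    # execute_job runs child_job from within the current process,
    # i.e. on the same pod as parent_op in a k8s deployment.
    result = execute_job(reconstructable(child_job), instance=context.instance)
    if not result.success:
        raise Exception("child_job failed")
```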

owen

04/18/2023, 3:52 PM
hi @Clement Emmanuel! the `execute_job` python api is fairly literal: it executes the job within the current process. Instead, you could consider using the Dagster GraphQL python client. This allows you to submit a job to be executed, and it is functionally identical to hitting the "launch run" button in the UI (and so will have identical execution behavior).
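
A minimal sketch of that approach with `DagsterGraphQLClient` from the `dagster-graphql` package (the host, port, and location/repository/job names here are assumptions for illustration):

```python
from dagster_graphql import DagsterGraphQLClient, DagsterGraphQLClientError

# Point the client at the Dagster webserver (hypothetical in-cluster service name/port).
client = DagsterGraphQLClient("dagster-webserver", port_number=80)

try:
    # Submitting via GraphQL goes through the configured run launcher, so in a
    # k8s deployment the child run gets its own pod, exactly as if you had
    # clicked "launch run" in the UI.
    run_id = client.submit_job_execution(
        "child_job",                             # assumed job name
        repository_location_name="my_location",  # assumed code location name
        repository_name="my_repository",         # assumed repository name
        run_config={},
    )
    print(f"submitted run {run_id}")
except DagsterGraphQLClientError as exc:
    raise Exception("failed to submit child_job") from exc
```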

Clement Emmanuel

04/18/2023, 3:54 PM
Thanks! That's exactly what I realized last night, and it's working for me. Good to hear that validation from you as well.