# ask-community
Hi, I have a few questions about the execute_k8s_job function, which is executed from within an @op. I've noticed that when you launch a k8s job through this function, the @op keeps running until the containers inside the pod shut down. I'm wondering if there is a way to launch the k8s job and move on to the next @op in the DAG once a readiness probe completes successfully, meaning the @op that launched the k8s job is marked as successful as soon as the readiness probe passes, not when the containers shut down.

I have a scenario (see screenshot 1) in which op1 generates some data that should be used when launching the k8s job, and op2 then communicates with the pod launched by the execute_k8s_job function. Since I have not found a way to get the flow in screenshot 1, I've created a DAG with the flow in screenshot 2, where op2 is launched alongside op1 and contains custom code that waits for the pod launched by execute_k8s_job before running its own logic. This works most of the time, but issues sometimes arise due to resource limits on the node where Dagster and its pods run, and my job fails. I could keep adding custom logic to op2 to prevent these issues, but that means more code to maintain and more room for bugs, so I'm hoping to use a Kubernetes-native mechanism instead.

Has anyone run into this kind of issue before? Is there a way to achieve the flow in the first screenshot given the scenario I just explained? Thank you in advance! 🙂
Hi Daniel - I don't think the execute_k8s_job function currently has this functionality, since it waits for the job to complete, but you're welcome to fork the function's implementation for your needs.
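As a rough sketch of what a forked version might poll for: a pod has passed its readiness probe once its `Ready` condition has status `True`. The helper names below are illustrative (not Dagster or kubernetes-client API), and `fetch_pod` stands in for whatever call retrieves the pod as a dict, e.g. a wrapper around `CoreV1Api.list_namespaced_pod` filtered by the `job-name` label that the Job controller applies:

```python
import time


def pod_is_ready(pod: dict) -> bool:
    """True once the pod's Ready condition is True, i.e. the kubelet has
    seen the readiness probe succeed. `pod` is the dict form of a
    Kubernetes Pod object (as returned by the raw REST API)."""
    conditions = pod.get("status", {}).get("conditions") or []
    return any(
        c.get("type") == "Ready" and c.get("status") == "True"
        for c in conditions
    )


def wait_for_pod_ready(fetch_pod, timeout: float = 300.0, interval: float = 2.0) -> None:
    """Poll until fetch_pod() returns a Ready pod, or raise TimeoutError.

    fetch_pod is any zero-argument callable returning the pod dict;
    timeout and interval are illustrative defaults."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if pod_is_ready(fetch_pod()):
            return
        time.sleep(interval)
    raise TimeoutError("pod never reported Ready within the timeout")
```

A forked @op built along these lines would create the Job, block only until `wait_for_pod_ready` returns, and then finish successfully, letting op2 start while the pod keeps running.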