# ask-community
s
Hi, what is the recommended approach for launching ephemeral jobs and capturing the run information? i.e., an experiment that is executed exactly once, and whose code is not published to the workspace. I am currently submitting an AWS Batch job, which calls `dagster.execute_pipeline` and redirects storage to a shared Postgres instance. Wondering if there is a better way. Ideally it would work the other way around: the job is submitted to the run queue on the daemon, which in turn submits the dynamic job to ECS/Batch for execution.
j
> whose code is not published to the workspace
Putting code in the workspace is the standard way for the daemon to be able to launch it
We currently don’t really support “ephemeral” jobs like you’re describing, though I could imagine patching something together. Is the underlying job code changing so frequently that you can’t push an image with it? Could some of those changes be captured in config?
s
They are experiments, and yes, the code will change often. The experiments just happen to be long-running DAGs, and we want to leverage ECS/Batch compute to run the jobs. I see there is a `dagster job launch -f` command. Can this be leveraged somehow to use DockerRunLauncher?
j
Yep, it will use the run launcher; the code will need to be built into a Docker image.
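For reference, wiring DockerRunLauncher together with shared Postgres storage is done in the instance's `dagster.yaml`. A minimal sketch; the image name, network, hostname, and credentials below are illustrative placeholders, not values from this thread:

```yaml
# dagster.yaml — instance configuration (all values are placeholders)
run_launcher:
  module: dagster_docker
  class: DockerRunLauncher
  config:
    # Image containing the job code; the daemon must be able to pull it
    image: my-registry/experiment-image:latest
    network: dagster_network

storage:
  postgres:
    postgres_db:
      username: dagster
      password:
        env: DAGSTER_PG_PASSWORD
      hostname: shared-postgres.internal
      db_name: dagster
      port: 5432
```

With an instance configured like this, a command such as `dagster job launch -f my_job.py` (file name hypothetical) records the run in the shared Postgres storage and hands execution to the configured run launcher.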