Hi Daniel, I had noticed that, but given the nature of the script being executed I'm surprised. We aren't loading any data into the container; the script just continuously polls an API and checks the response, and once the response matches what we expect, the script finishes. This same job configuration works when we spawn it with our in-process executor inside a single Docker container, and we have no container limits set in either case. I'm working on turning off our auto-remove setting so I can inspect the crashed container. With the Celery executor the job seems to fail almost instantly upon script execution, whereas with the in-process executor it runs to completion.
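
To give a sense of how lightweight the job is, the script is essentially just a loop along these lines (the endpoint, response field, and expected value below are placeholders, not our actual job):

```python
import time
import requests

EXPECTED_STATUS = "complete"  # placeholder for the value we actually wait on

def wait_for_status(url: str, poll_interval: float = 5.0) -> None:
    """Poll the API until the response reports the expected status."""
    while True:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        if resp.json().get("status") == EXPECTED_STATUS:
            return  # response matches what we expect, so the script exits normally
        time.sleep(poll_interval)

if __name__ == "__main__":
    wait_for_status("https://example.com/api/job-status")  # hypothetical endpoint
```

So there's no bulk data held in memory at any point, which is why the near-instant failure under the Celery executor is puzzling.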