# deployment-ecs
j
On one of my pipelines I’m getting an out-of-memory error. It looks like it’s creating the smallest containers possible. Is there any way to tell it to bump up the CPU and memory? I thought it mimicked the allocated CPU and memory of the pipeline’s task definition …
j
It’s mimicking the CPU and memory of the Task Definition that it was generated from. In most cases, that’ll be the daemon task definition, because that’s where the process that initializes the EcsRunLauncher is running. I agree that we should improve it so CPU/memory (and other overrides) can be customized - until we do that, I think you have two options:
1. Up the CPU/memory of your daemon Task Definition.
2. Instead of allowing the EcsRunLauncher to construct its own Task Definition, you can specifically pass it an existing Task Definition in your dagster.yaml: https://github.com/dagster-io/dagster/blob/0be25391c79101c064359b4db6d3e489cfa7b6f7/python_modules/libraries/dagster-aws/dagster_aws/ecs/launcher.py#L54-L62
Both are sort of one-size-fits-all solutions - we’ll need to extend it to allow varying the CPU/memory per execution.
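A sketch of option 2 - assuming the `task_definition` and `container_name` config keys shown in the linked launcher.py, and with a made-up task definition ARN as a placeholder:

```yaml
# dagster.yaml - point the EcsRunLauncher at an existing Task Definition
# instead of letting it generate one from the daemon's task definition.
run_launcher:
  module: dagster_aws.ecs
  class: EcsRunLauncher
  config:
    # Placeholder ARN - substitute your own Task Definition,
    # sized with whatever CPU/memory your runs need.
    task_definition: "arn:aws:ecs:us-east-1:123456789012:task-definition/my-dagster-runs:1"
    # Name of the container in that Task Definition to launch runs in.
    container_name: "run"
```

Since the launcher reuses this one Task Definition for every run, its CPU/memory settings apply to all pipelines - hence one-size-fits-all.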
j
I can live with that for now. But yes, execution-level control will be desirable in the future. Thank you!!
m
I'm also interested in execution level CPU & Memory configuration options. Is there a Github issue I can track for progress here?
j
I’ll create one