What are my options when a pipeline starts using too much memory during pipeline execution?
The answer here depends on the details of your pipeline and execution environment. If you know which solid is using too much memory, the fix may be changing that solid's code to use less memory, which wouldn't involve any changes to Dagster. If your pipeline is running on a single machine and you are starting to run into resource limits, it may be time to consider one of the other deployment options described here, which give you more control over the resource limits available to your pipelines: https://docs.dagster.io/deployment . Our Kubernetes integration gives you some good tools for scaling out your pipelines when they hit the limits of a single machine. If the problem comes from running lots of pipelines at once, the techniques here can help: https://docs.dagster.io/deployment/run-coordinator#limiting-run-concurrency
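For that last case, here's a minimal sketch of what limiting run concurrency can look like in your instance's `dagster.yaml`, using the `QueuedRunCoordinator` described at that link. The specific numbers and the `database: redshift` tag are placeholders; adapt them to your own workload:

```yaml
# dagster.yaml -- instance configuration (sketch; adapt values to your deployment)
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    # Cap the total number of runs executing at once;
    # additional runs wait in the queue until a slot frees up.
    max_concurrent_runs: 10
    # Optionally limit concurrency for runs carrying specific tags,
    # e.g. runs that all hit the same shared database.
    tag_concurrency_limits:
      - key: "database"
        value: "redshift"
        limit: 4
```

With this in place, submitted runs are queued rather than launched immediately, so a burst of pipeline runs won't all compete for memory at the same time.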