Hi, my Dagster Instance is consuming a lot of RAM ...
# ask-community
i
Hi, my Dagster instance is consuming a lot of RAM just to stay alive (about 2.9GB), is there any way to avoid that? Maybe it's related to the repositories; I have 3 of them.
d
Hi Ismael - I think the most useful thing to diagnose this would be a breakdown of the different processes (dagster/daemon/user code servers) and how much memory each of them is using. The first thing I'd check would be whether your Dagster code might be importing other Python modules that consume a lot of memory on import.
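The per-process breakdown suggested above could be sketched like this, assuming a Linux pod where procps `ps` is available; the `dagit`/`dagster` process-name patterns are examples and may differ per deployment:

```shell
# List resident memory (RSS, in kilobytes) for Dagster-related processes,
# largest first. `|| true` keeps the pipeline from failing when no
# matching process is running.
ps -eo rss,args --sort=-rss | grep -E 'dagit|dagster' | grep -v grep || true
```

Summing the RSS column across the dagit, daemon, and user-code server processes would show which of them accounts for most of the 2.9GB.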
I wouldn't expect dagster itself to use 3GB of memory - on my local machine a code location server uses <100MB of RAM
and dagit uses about 120MB
i
I'm importing a lot of libraries, yes. Also my daemon, repos and dagit are all on the same POD
But even so, it used to consume just 1.5GB
d
I've used this tool in the past to debug memory usage for running python processes https://github.com/facebookarchive/memory-analyzer
If running them on different pods using our helm chart is an option, that could help isolate memory between the different processes better
i
I don't have access to do that kind of thing here, haha. The most I can do is create projects and configure them using the deploy-to-OpenShift method, which involves a configuration file and so on
d
I see - those are the tools that come to mind for debugging memory issues
i
I wish I had access to them
But as you said, Dagster shouldn't use this much RAM, so maybe if I separate the repositories and the daemon that would reduce a lot of this, right?
I wish there were a tutorial on how to do this on OpenShift with a deploy.yaml file
d
It wouldn't reduce the total memory usage, but it would at least isolate the processes more - generally we recommend running dagster system code isolated from dagster user code
i
I don't understand this term. Does it mean that I need to store my code somewhere else and call it from another place?
d
The recommendation would be to run each of your user code deployments in its own pod, dagit in another, and the daemon in a third, kind of like this Docker example does: https://github.com/dagster-io/dagster/tree/master/examples/deploy_docker I think it's not trivial to set that up in Kubernetes without using the helm chart, though (the chart sets it up that way out of the box)
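Concretely, separating dagit/daemon from user code comes down to dagit loading the repositories over gRPC instead of in-process, via a `grpc_server` entry in `workspace.yaml`. A minimal sketch, where the `user-code` host name, the port, and the location name are hypothetical values for illustration:

```yaml
# workspace.yaml on the dagit/daemon pod: load repositories from a
# separate user-code gRPC server rather than importing them in-process.
load_from:
  - grpc_server:
      host: user-code        # hypothetical service name of the user-code pod
      port: 4000             # must match the port the gRPC server listens on
      location_name: my_repos
```

On the user-code pod, the matching server would be started with something like `dagster api grpc --host 0.0.0.0 --port 4000 -f repo.py`.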
i
Humm, I've tried this before, but the dagit pod couldn't communicate with the user code pod. This could be because, at my job, OpenShift deployments need to run something on port 8080, and the user code deployment comes up on another port and in a different way
If the cluster doesn't detect anything on port 8080, it considers that nothing was deployed
But I'll try once more, I didn't have the knowledge I have now about these configurations
d
I would expect "dagster api grpc --port 8080" to start up the server on 8080
i
Wow, that's a good hint
I'll try once more to create this architecture