Hey y'all. I've been running Dagster in a single GKE (Google Kubernetes Engine) cluster managed by their "Autopilot" mode. It's for a small business that can run staging and production in the same cluster, just in separate namespaces. The business owner and I have been reviewing costs, and we're seeing a lot of excess CPU and memory allocated to the Dagster resources. GKE has a "cost optimization" view that shows what each workload has requested versus how much it's actually using.
Autopilot clusters enforce a minimum request of 0.5 vCPU and 2GiB per container in a pod. Since there are separate workloads for dagit, the dagster daemon, and the user code server, each of them is billed for at least that much even though they're barely utilized. Autopilot does automatically scale requests up and down to control costs, but only above that minimum threshold.
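For reference, this is the kind of Helm values override I'd been considering to shrink the requests, if Autopilot would honor values below that floor. This is just a sketch based on my reading of the open-source Dagster Helm chart; the key names (`dagit`, `dagsterDaemon`, `dagster-user-deployments`) and the `user-code` deployment name may not match your chart version or setup, so treat them as placeholders.

```yaml
# Sketch of Helm values overrides for the Dagster chart.
# Key names are from my reading of the chart docs and may differ
# between chart versions -- double-check against `helm show values`.
dagit:
  resources:
    requests:
      cpu: 250m        # below the Autopilot floor, so may get bumped up
      memory: 512Mi

dagsterDaemon:
  resources:
    requests:
      cpu: 250m
      memory: 512Mi

dagster-user-deployments:
  deployments:
    - name: user-code  # hypothetical name; match it to your actual deployment
      resources:
        requests:
          cpu: 250m
          memory: 512Mi
```

In practice Autopilot seems to round these back up to its per-container minimums, which is exactly the problem I'm running into.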
I'm wondering if you have any suggestions for how I can get the three Dagster workloads to share resources more efficiently.
Here's a screenshot of the cost optimization view. I've redacted the product names from the workloads for the owner's privacy and drawn a red box around the Dagster-related workloads.