Hey Aaron - the daemon and dagit never actually load your code directly - they communicate over a gRPC interface with a separate process that loads your code, and only pass around structured summaries/snapshots of the Dagster objects in your repositories. So the main variable that would cause the daemon to need more memory is a particularly large or intricate graph - the size of the code (or the amount of work it does, or how long it takes to import) shouldn't affect the daemon's memory use or latency.
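For reference, a minimal sketch of what that separation looks like in a `workspace.yaml` (host, port, and names here are just placeholders) - you'd run the user-code server in its own process with something like `dagster api grpc -f repo.py -p 4266`, and point dagit/the daemon at it:

```yaml
# workspace.yaml - dagit and the daemon read this, but never import repo.py themselves;
# they talk to the already-running gRPC server instead.
load_from:
  - grpc_server:
      host: localhost                # assumed host for the user-code server
      port: 4266                     # assumed port; must match the running server
      location_name: "my_user_code"  # hypothetical location name
```

With this setup, only the gRPC server process pays the cost of importing your code.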
This is one big difference between Dagster and Airflow, which has one or more centralized scheduler processes that load your code directly.