I've been pushing hard to get Dagster adopted at large corporates (with little luck), but I can talk a bit about the usual problems these orgs have with Airflow. This is relevant (I think) because Dagster solves some of the problems larger orgs have, but it also suffers from some of the same ones, so you'd probably end up in a similar situation.
So. The larger Airflow installs I've seen eventually ended up centrally hosted by a platform team (because hosting Airflow is hard). That team got frustrated that the data science teams (it's usually the scientists) break the instance by doing something like using an "unsupported" version of pandas. The responses I've seen come in one of two flavours: "we need more control over the user code", usually enforced through a rigid CI/CD pipeline and process, or "not my problem", usually through self-service Airflow, where each team requests its own installation and is then on its own. That's not all bad, because it also means you only see / control your own jobs, but it's a downside at the same time: you risk implementing the same jobs over and over again in different teams.
I believe Airflow has made some progress in this area, but I don’t think they completely solved it yet (could be wrong here).
With Dagster the separation between Dagster / dagit and the user deployments is much stricter, making it super hard to crash the scheduler with a faulty job. However, since there is no hierarchy in how jobs are shown, no access control per team, etc., I think having a separate install per team probably still makes sense: for instance, one install for the customer behaviour team, one for the financial analytics team, one for the engineering team integrating the legacy systems, and so on.
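For context, that isolation comes from each code location running as its own process that dagit / the daemon only talk to over gRPC. A minimal sketch of what that looks like in a workspace.yaml (the hostnames, ports and location names here are made up for illustration, but the load_from / grpc_server keys are the real config shape):

```yaml
# workspace.yaml - each team's code runs in its own gRPC server process,
# so an import error or a bad dependency in one location can't take down
# dagit or the daemon. Hostnames / ports below are hypothetical.
load_from:
  - grpc_server:
      host: customer-behaviour-code   # e.g. a separate container per team
      port: 4000
      location_name: customer_behaviour
  - grpc_server:
      host: financial-analytics-code
      port: 4000
      location_name: financial_analytics
```

If I remember correctly, the Dagster Helm chart wires this up for you through its user-deployments section, so on Kubernetes each team's image becomes its own code server.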
Separate installs have the added benefit that each one stays relatively manageable, and things like naming conventions can actually work (because you can walk up to the person who just called their job "process data").