# announcements
Hey team, I have an opinion-based question. A couple of months ago (maybe a year), Dagster recommended deploying with Dask or Airflow. However, in recent months a scheduler for Dagster was released, and more recently better deployment options on k8s. What were the reasons behind this shift, and would you still recommend deploying with the Airflow scheduler? If anyone here has tried both and settled on one, what were your reasons? I’m currently exploring both (the Airflow and Dagster schedulers). Personally I’m more inclined toward the idea of Airflow as a general-purpose scheduler (cron on steroids) that other data tools like Dagster can hook into via its native plugin mechanism.
👍 2
Hey @Marwan - so there are two components here: scheduling and execution. For scheduling, we’ve found that many of our users who aren’t already on Airflow don’t want to bother with setting it up as a dependency, so we aim to provide fully Dagster-native scheduling as a viable production deployment option. On the other hand, many of our users already have Airflow installations, so we’re planning continued investments in our Airflow integration in the coming weeks. For execution, our goal is to provide a lot of flexibility because our users have a wide range of deployment environments. As you noted, we’ve been working on first-class k8s support quite a bit lately, but we also support a wide range of other execution targets, including in-process, multi-process, Dask, Celery, and Airflow. For a simple deployment, you can probably get away with a beefy single-node Dagster VM instance; for heavier workloads you can use Celery or Dask (potentially layered on top of k8s). If you’re not already using Airflow, I’d recommend going that route only if you have other use cases which require Airflow, just because it adds additional complexity.
👍 1
Thanks @nate for your response. I’m very excited to hear about upcoming upgrades to