Curious whether there is any plan to make it easier to track Airflow tasks as upstream dependencies for our Dagster asset jobs.
Our central data infra team runs Airflow to orchestrate all the ETL workloads that produce our upstream tables, while our use-case-specific teams (ML, Experimentation, etc.) run dynamic assets on Dagster. Most of these assets depend on the Airflow-managed tables, so we run a sensor per asset that checks the status of the corresponding upstream Airflow task by querying the Airflow metadata DB directly. This approach has been quite error prone and puts a heavy burden on the individual teams.
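For context, the per-asset check we hand-roll is roughly the following (a minimal stdlib sketch, using an in-memory SQLite stand-in for the Airflow metadata DB; the table and column names follow Airflow 2.x's `task_instance` table, and `upstream_dag` / `load_orders` are hypothetical identifiers, not real ones from our setup):

```python
import sqlite3

def latest_task_state(conn, dag_id, task_id):
    """Return the most recent recorded state for (dag_id, task_id), or None."""
    row = conn.execute(
        "SELECT state FROM task_instance "
        "WHERE dag_id = ? AND task_id = ? "
        "ORDER BY end_date DESC LIMIT 1",
        (dag_id, task_id),
    ).fetchone()
    return row[0] if row else None

# In-memory stand-in for the Airflow metadata DB (schema heavily simplified).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE task_instance "
    "(dag_id TEXT, task_id TEXT, state TEXT, end_date TEXT)"
)
conn.executemany(
    "INSERT INTO task_instance VALUES (?, ?, ?, ?)",
    [
        ("upstream_dag", "load_orders", "failed",  "2024-01-01T00:00:00"),
        ("upstream_dag", "load_orders", "success", "2024-01-02T00:00:00"),
    ],
)

state = latest_task_state(conn, "upstream_dag", "load_orders")
# The Dagster sensor wrapping this only yields a RunRequest when state == "success".
```

Each team maintains a variant of this per asset, which is where the duplication and fragility come from: schema drift in the metadata DB, timezone/cursor handling, and retries all have to be re-solved in every sensor.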