# deployment-kubernetes
Hi, we have Dagster deployed on Kubernetes with EFS (https://aws.amazon.com/efs/) mounted as a PersistentVolume. We used the official Helm chart for this, with some modifications for the EFS PV/PVC in the main Dagster chart. We would also like to use DBT with Dagster. Both of them require a filesystem path in order to work: Dagster needs it in `workspace.yaml` for `python_package:` or `python_file:`.
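For reference, a minimal `workspace.yaml` pointing at a repository file on the shared volume might look like this (the mount path `/mnt/efs` and file name are hypothetical):

```yaml
# workspace.yaml — load a repository from a Python file on the EFS mount
# (/mnt/efs/repos/repo.py is a hypothetical path on the mounted volume)
load_from:
  - python_file: /mnt/efs/repos/repo.py
```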
DBT needs it for its SQL models. As long as Dagster is running with the `in_process_executor`, both Dagster and the DBT solids can easily access EFS, because: 1. We have mounted EFS to the main Dagster deployment. 2. All solids are executed in that same deployment as a system process, which therefore also has access to EFS. The question is what happens when Dagster runs with the `celery_k8s_job_executor`, where each solid is executed in a separate, ephemeral Kubernetes pod.
**In `celery_k8s_job_executor` mode, will the DBT solids, which are executed in separate ephemeral k8s pods, have access to the EFS?**
The EFS PV/PVC changes were made only for the main Dagster deployment. I am not much of a k8s expert, but is it possible that if those ephemeral Kubernetes pods use the same image from the main Helm chart as their base image, which already has the EFS mount, the DBT solids will have access to EFS as well? It would be really great if someone could give some feedback on this. Thanks.
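For context: in Kubernetes, a volume mount is part of the pod spec, not the container image, so reusing the base image alone would not carry the mount over. Each ephemeral step pod would need a pod-spec fragment along these lines (the PVC name `dagster-efs-pvc`, image name, and mount path are hypothetical, and this is only a sketch of what the executor would have to inject, not something Dagster configures for you here):

```yaml
# Sketch of the pod spec an ephemeral step pod would need to see the EFS volume.
# The mount lives in the pod spec; the image itself carries no mounts.
apiVersion: v1
kind: Pod
metadata:
  name: dagster-step-pod
spec:
  containers:
    - name: dagster-step
      image: my-dagster-image:latest   # hypothetical image from the Helm chart
      volumeMounts:
        - name: efs-storage
          mountPath: /mnt/efs          # hypothetical mount path
  volumes:
    - name: efs-storage
      persistentVolumeClaim:
        claimName: dagster-efs-pvc     # hypothetical PVC name from the chart
```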
Hi @Akshat Nigam - unfortunately volume mounts for ephemeral compute aren’t well supported by Dagster right now. This is on my radar and I’m looking to get support out in one of the next few releases