‘Dagster as a Service' - I was referring to
https://docs.dagster.io/deployment/guides/service.
The architecture I have in mind would be very similar to
https://docs.dagster.io/deployment/guides/kubernetes/deploying-with-helm#deployment-architecture, with one major difference: our actual jobs (solids) doing the data processing would be primarily Spark/Scala and would not run in the cloud (Kubernetes) environment; instead they would run on the big data cluster, where Docker/Kubernetes are not available.
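One way that gap could be bridged (a sketch under assumptions, not an official Dagster pattern): the op body running in the cloud shells out over SSH to an edge node of the big data cluster, where `spark-submit` is available. The host name, jar path, and helper names below are all hypothetical.

```python
import shlex
import subprocess

# Hypothetical edge-node host; replace with your cluster's entry point.
EDGE_NODE = "edge.bigdata.example.com"


def build_spark_submit(jar_path: str, main_class: str, args: list) -> list:
    """Build an `ssh <edge-node> spark-submit ...` command that launches the
    Spark/Scala job on the big data cluster (YARN cluster mode)."""
    remote_cmd = " ".join(
        ["spark-submit", "--master", "yarn", "--deploy-mode", "cluster",
         "--class", main_class, shlex.quote(jar_path)]
        + [shlex.quote(a) for a in args]
    )
    return ["ssh", EDGE_NODE, remote_cmd]


def run_spark_job(jar_path: str, main_class: str, args: list) -> int:
    # Inside a Dagster solid/op this would be the op body: subprocess blocks
    # until spark-submit returns, so the run's outcome mirrors the job's.
    return subprocess.call(build_spark_submit(jar_path, main_class, args))
```

This keeps the orchestration layer in the cloud while the heavy lifting stays on the cluster; the only requirement is SSH reachability from wherever the run executes.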
When you say I should start with the ‘Run job' component - could you point me to an example of something similar someone may have done?
Bottom line: the long-running services (Dagit and the Dagster daemon) and the repositories can reside in the cloud, but the actual jobs can only run on the big data cluster.
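With that split, the cloud side also needs a way to learn whether a remotely launched job succeeded. One option (a sketch, assuming YARN is the resource manager) is to poll the ResourceManager's REST API, `GET /ws/v1/cluster/apps/{app_id}`, from the op. The ResourceManager host below is hypothetical; the response shape is YARN's standard one.

```python
import json
from urllib.request import urlopen

# Hypothetical ResourceManager address; 8088 is YARN's default REST/web port.
RM_URL = "http://rm.bigdata.example.com:8088"


def app_state(payload: dict) -> tuple:
    """Extract (state, finalStatus) from a YARN application REST response,
    e.g. ("FINISHED", "SUCCEEDED") or ("FINISHED", "FAILED")."""
    app = payload["app"]
    return app["state"], app["finalStatus"]


def poll_app(app_id: str) -> tuple:
    # A Dagster op in the cloud could call this in a loop until the state
    # reaches FINISHED/FAILED/KILLED, then pass or fail the run accordingly.
    with urlopen(f"{RM_URL}/ws/v1/cluster/apps/{app_id}") as resp:
        return app_state(json.load(resp))
```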
As part of that I would have specific ports opened between the cloud and the big data cluster (and I need your guidance on which ports those would be).
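As a starting point, these are the default ports the components usually involved listen on. Every one of them is configurable, so this is a checklist to verify against the actual deployment, not a definitive firewall rule set:

```python
# Default ports for the components typically in play; all configurable,
# so confirm each against your actual deployment before opening anything.
DEFAULT_PORTS = {
    "dagit_web_ui": 3000,            # Dagit's default HTTP port
    "dagster_grpc_code_server": 4000,  # common choice for `dagster api grpc`
    "postgres": 5432,                # run/event storage, if using Postgres
    "yarn_rm_rest": 8088,            # YARN ResourceManager web UI / REST API
    "yarn_rm_rpc": 8032,             # YARN ResourceManager client RPC
    "ssh": 22,                       # if jobs are launched via ssh + spark-submit
}
```

Which of these actually need to cross the cloud/cluster boundary depends on the launch mechanism: an SSH-based approach needs only 22 (plus 8088 if you poll YARN), while the Dagster-internal ports can stay inside the cloud network.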