Hi Scott.
The executor is responsible for how the ops in a job are executed. As you've identified, the default multiprocess executor spins up a separate process for each step and runs them concurrently. You could switch to the in_process_executor, but that runs every step serially in a single process, so it limits concurrency for the other steps as well.
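For example, here's a minimal sketch of both options (the op and job names are placeholders; the second job keeps step-level parallelism but caps it via the multiprocess executor's max_concurrent config):

```python
from dagster import job, op, in_process_executor, multiprocess_executor

@op
def fetch_data():
    return [1, 2, 3]

@op
def heavy_transform(data):
    return [x * 2 for x in data]

# Option 1: run every step serially in a single process.
@job(executor_def=in_process_executor)
def serial_job():
    heavy_transform(fetch_data())

# Option 2: keep the multiprocess executor but cap how many steps
# run at once (this applies to all steps, not just the heavy one).
@job(executor_def=multiprocess_executor.configured({"max_concurrent": 2}))
def capped_job():
    heavy_transform(fetch_data())
```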
There are other executors that allow for more fine-grained control, at the expense of a more complicated setup. The celery-k8s executor, for example, sets up Celery queues that allow for concurrency limits on a per-op basis. You can read more about it here:
https://docs.dagster.io/deployment/guides/kubernetes/deploying-with-helm-advanced#overview
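With that setup, routing an op to a dedicated queue is just a tag on the op. A quick sketch, assuming you've configured a queue in your Helm values ("limited-queue" here is a hypothetical name):

```python
from dagster import op

# Route this op to a Celery queue with its own worker pool and
# concurrency limit; "limited-queue" is a placeholder for a queue
# defined in your Helm chart configuration.
@op(tags={"dagster-celery/queue": "limited-queue"})
def rate_limited_op():
    ...
```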
We do provide built-in support for limiting run-level concurrency using run coordinators (see
https://docs.dagster.io/deployment/run-coordinator#limiting-run-concurrency), but taking advantage of that in your situation would mean restructuring your job so that it is split across multiple jobs (or jobs triggered by sensors).
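For reference, run-level limits are configured on the Dagster instance rather than in code. A sketch of a dagster.yaml using the queued run coordinator with a tag-based limit (the tag key/value here are placeholders):

```yaml
# dagster.yaml
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    max_concurrent_runs: 25
    tag_concurrency_limits:
      # Hypothetical tag: at most 2 concurrent runs tagged database=redshift.
      - key: "database"
        value: "redshift"
        limit: 2
```

Jobs opt into a limit like that by carrying the matching tag, e.g. `@job(tags={"database": "redshift"})`.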