# ask-community
a
Hi all, I have a file upload sensor which watches a directory and uploads the files in there to a REST API as a job. The issue we face is that while the upload job runs, all other sensors and jobs defined in Dagster are blocked. For example, there is a DB ingestion sensor which stalls while the upload job is running. How does Dagster handle jobs initiated concurrently by sensors by default? Please point me to the configuration/documentation that controls this aspect of Dagster, so that all sensors and jobs run concurrently. Let me know if you need more information.
dagster bot responded by community
z
Are you doing the file upload in the sensor code itself? Generally it's expected that you use a sensor to detect some sort of external state and then launch a run in which you do your heavier lifting. So in this context you might try detecting the files in the sensor and yielding a RunRequest with the list of files to a job that handles actually uploading the files
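A minimal sketch of that pattern, assuming a hypothetical watch directory and hypothetical op/job names (`UPLOAD_DIR`, `upload_file`, `send_file_to_dl_job`); only the sensor name `send_file_to_dl_sensor` comes from this thread:

```python
import os

from dagster import Definitions, RunRequest, job, op, sensor

UPLOAD_DIR = "/data/incoming"  # hypothetical watch directory


@op(config_schema={"path": str})
def upload_file(context):
    # The heavy lifting (the REST upload) belongs in the run, not the sensor tick.
    path = context.op_config["path"]
    context.log.info(f"uploading {path} to the REST API")
    # ... perform the actual HTTP upload here ...


@job
def send_file_to_dl_job():
    upload_file()


@sensor(job=send_file_to_dl_job, minimum_interval_seconds=30)
def send_file_to_dl_sensor():
    # The sensor only detects files and hands each one off as its own run.
    for filename in sorted(os.listdir(UPLOAD_DIR)):
        yield RunRequest(
            run_key=filename,  # the same run_key won't be launched twice
            run_config={
                "ops": {
                    "upload_file": {
                        "config": {"path": os.path.join(UPLOAD_DIR, filename)}
                    }
                }
            },
        )


defs = Definitions(jobs=[send_file_to_dl_job], sensors=[send_file_to_dl_sensor])
```

Because each file becomes its own run, the runs are queued and executed by the run coordinator rather than blocking the sensor daemon, so other sensors keep ticking while uploads are in flight.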
a
Yes, this is how I am doing it now.
I am doing the work inside the job and using the sensor only to generate the RunRequest.
Hi, for the upload job I keep getting:

```
Calling job_fn()
2023-09-07 16:51:21 +0530 - dagster.daemon.SensorDaemon - INFO - Checking for new runs for sensor: send_file_to_dl_sensor
2023-09-07 16:51:21 +0530 - dagster.daemon.SensorDaemon - INFO - Sensor send_file_to_dl_sensor skipped: Sensor function returned an empty result
```

This is my dagster.yaml:

```yaml
code_servers:
  local_startup_timeout: 120

sensors:
  use_threads: true
  num_workers: 8
  num_submit_workers: 1

run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
  config:
    dequeue_use_threads: true
    dequeue_num_workers: 8
    max_concurrent_runs: 8
    tag_concurrency_limits:
      - key: "db_ingestion"
        limit: 2
      - key: "dl_upload"
        limit: 2
```
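One thing worth checking with this config (an aside not raised in the thread): `tag_concurrency_limits` only counts runs that actually carry those tag keys, so the jobs or run requests need to be tagged accordingly. A minimal sketch, reusing the hypothetical job name from the earlier example:

```python
from dagster import job, op


@op
def upload_file():
    ...


# Runs of this job carry the "dl_upload" tag key, so they count toward the
# `- key: "dl_upload"` / `limit: 2` entry in tag_concurrency_limits above.
@job(tags={"dl_upload": "true"})
def send_file_to_dl_job():
    upload_file()
```

Tags can also be attached per run via `RunRequest(tags={...})` in the sensor, if only some runs should count toward a limit.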
z
What does your sensor logic look like? It sounds like you possibly have a condition where your RunRequest isn't being yielded.
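For reference, the "Sensor function returned an empty result" log simply means the sensor body yielded neither a RunRequest nor a SkipReason on that tick, e.g. the directory was empty or a guard condition filtered everything out (an assumption about the likely cause; the thread doesn't show the sensor code). A sketch of making the skip explicit, reusing the hypothetical names from the earlier examples:

```python
import os

from dagster import RunRequest, SkipReason, job, op, sensor

UPLOAD_DIR = "/data/incoming"  # hypothetical watch directory, as above


@op(config_schema={"path": str})
def upload_file(context):
    context.log.info(f"uploading {context.op_config['path']}")


@job
def send_file_to_dl_job():
    upload_file()


@sensor(job=send_file_to_dl_job)
def send_file_to_dl_sensor(context):
    files = sorted(os.listdir(UPLOAD_DIR))
    if not files:
        # An explicit SkipReason shows up on the tick in the Dagster UI instead
        # of the generic "Sensor function returned an empty result" message.
        yield SkipReason(f"no files found in {UPLOAD_DIR}")
        return
    for filename in files:
        yield RunRequest(
            run_key=filename,
            run_config={
                "ops": {
                    "upload_file": {
                        "config": {"path": os.path.join(UPLOAD_DIR, filename)}
                    }
                }
            },
        )
```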