# ask-community


01/06/2023, 6:59 AM
Hi, I'm working on an op that processes a dataframe and makes several API calls for each row. My initial implementation used DynamicOutput to manage the downstream ops. However, the APIs have rate limits, and managing a few hundred thousand parallel ops at one time seems quite expensive. Any other ideas on how I can approach this?


01/06/2023, 5:55 PM
Hi YH. You can limit the number of parallel ops that are running at a time by specifying an op concurrency limit on the default multiprocess executor:
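The reply trails off where a config snippet presumably appeared. A sketch of what Dagster run config for this looks like, assuming the default multiprocess executor (the value `10` is illustrative, not from the thread):

```yaml
execution:
  config:
    multiprocess:
      max_concurrent: 10  # cap the number of ops running in parallel
```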


01/07/2023, 2:10 AM
Yes, thanks @claire, I found that after reading some other threads. On a side note, I might also redesign how I chunk the data so that each parallel op handles more than one row. I can then use pyrate to rate-limit the subsequent calls. Since I'd be using dagster-serverless, I think there would already be a natural limit based on CPU cores too.
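The chunk-then-throttle idea can be sketched in plain Python. The limiter below is hand-rolled for illustration, not pyrate's actual API, and `call_api`, the chunk size, and the rate are made-up stand-ins:

```python
import time


def chunk_rows(rows, chunk_size):
    """Split rows into fixed-size chunks so each parallel op handles many rows."""
    for i in range(0, len(rows), chunk_size):
        yield rows[i:i + chunk_size]


class WindowRateLimiter:
    """Allow at most max_calls per period seconds (sliding window)."""

    def __init__(self, max_calls, period):
        self.max_calls = max_calls
        self.period = period
        self._stamps = []

    def acquire(self):
        now = time.monotonic()
        # Keep only timestamps still inside the window.
        self._stamps = [t for t in self._stamps if now - t < self.period]
        if len(self._stamps) >= self.max_calls:
            # Sleep until the oldest call falls out of the window, then retry.
            time.sleep(self.period - (now - self._stamps[0]))
            return self.acquire()
        self._stamps.append(time.monotonic())


def call_api(row):
    """Stand-in for the real rate-limited API call."""
    return row * 2


def process_chunk(chunk, limiter):
    """Body of one parallel op: throttle, then call the API for each row."""
    results = []
    for row in chunk:
        limiter.acquire()
        results.append(call_api(row))
    return results
```

Each dynamic output would then carry one chunk instead of one row, so a few hundred ops replace a few hundred thousand, and the per-process limiter keeps the calls inside each op under the API's rate cap.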