# ask-community
s
Still struggling to build the following, which sounds like it should be simple, yet I can't seem to get the right pieces working. I have several Airbyte connectionIDs along with associated dbt models, each in its own subfolder. I have a yaml file with the correct mappings of connectionIDs to dbt model folder paths. What I'm looking to do is generate a separate pipeline for each connectionID so that I can trigger a sync and run the dbt model whenever I choose. I have been able to write a graph that creates a job which reads the yaml file and runs the syncs and dbt models for all the mappings in the yaml file, but that's not what I'm looking for — I want individual, separate pipelines created from that template.
s
cc @owen
o
Instead of parsing that yaml file inside of the job itself, one option would be to write a function which parses that yaml file and returns a list of jobs (one for each of the connectionID/dbt pairs). This might look something like:
```python
from dagster import job, repository
from dagster_airbyte import airbyte_sync_op
from dagster_dbt import dbt_run_op

def get_jobs_from_yaml(my_yaml):
    jobs = []
    for connection_id, dbt_folder in my_yaml.items():
        # f-string so each generated job gets a distinct name
        @job(name=f"my_job_{dbt_folder}", resource_defs=...)
        def _job():
            dbt_run_op(start_after=airbyte_sync_op())
        jobs.append(_job)
    return jobs

@repository
def my_repo():
    return get_jobs_from_yaml(...)
```
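One pitfall worth flagging when generating jobs in a loop like this: Python closures capture loop variables late, so anything inside the function body that refers to `connection_id` or `dbt_folder` should bind them explicitly. Here is a minimal, library-free sketch of the same factory pattern (the function names are illustrative, and a plain dict stands in for the parsed yaml file):

```python
# Sketch of the factory pattern above: build one callable per
# connection_id/dbt_folder pair from a mapping. Names here are
# illustrative; a plain dict stands in for the parsed yaml.
def get_pipelines_from_mapping(mapping):
    pipelines = []
    for connection_id, dbt_folder in mapping.items():
        # Bind the loop variables as default arguments; Python closures
        # otherwise resolve them late, so every pipeline would see the
        # values from the final loop iteration.
        def _pipeline(connection_id=connection_id, dbt_folder=dbt_folder):
            return f"sync {connection_id}, then dbt run in {dbt_folder}"
        # Give each callable a distinct, path-safe name
        _pipeline.__name__ = f"pipeline_{dbt_folder.replace('/', '_')}"
        pipelines.append(_pipeline)
    return pipelines

mapping = {"conn-a": "models/a", "conn-b": "models/b"}
pipes = get_pipelines_from_mapping(mapping)
print(pipes[0]())  # -> sync conn-a, then dbt run in models/a
```

In the Dagster version, the same binding concern applies if the job body uses the loop variables (e.g. to configure the Airbyte sync op with its `connection_id`).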
s
sar, have you been able to solve your problem using owen’s advice?
s
Hey Sean! I haven’t had a chance to mess with it this week as I’m traveling, but will do so when I get back next week.