# integration-dbt
p
Hi everyone! I was wondering how people have set up dev environments when using dbt and BigQuery. My current thinking is that for every production dataset, we:

1. Create a new dev dataset named `dev_<user_name>_<dataset_name>`.
2. Create a clone of every table in the production dataset and add it to the new dataset.
3. Fake all the production materializations so that the local Dagster instance has up-to-date materialization metadata.

Is this a reasonable approach compared to what other people have done in the past?
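Roughly what I have in mind for steps 1 and 2 (just a sketch using the google-cloud-bigquery client; the project, user, and dataset names are all placeholders):

```python
# Sketch of steps 1 and 2: create dev_<user_name>_<dataset_name> and
# zero-copy clone every production table into it.
# The project and dataset names here are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

def clone_dataset_for_dev(user_name: str, dataset_name: str) -> None:
    dev_dataset = f"dev_{user_name}_{dataset_name}"
    client.create_dataset(dev_dataset, exists_ok=True)

    for table in client.list_tables(dataset_name):
        # Table clones are zero-copy in BigQuery, so this is cheap.
        client.query(
            f"CREATE TABLE IF NOT EXISTS `{dev_dataset}.{table.table_id}` "
            f"CLONE `{dataset_name}.{table.table_id}`"
        ).result()

clone_dataset_for_dev("pablo", "analytics")
```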
t
Hey Pablo! There are many different ways to set this up, and your way is pretty reasonable. Have you thought about how these clones are managed? You might not need to mock the materializations, depending on your Dagster project. Are you using partitions or leaning completely on incremental models?
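If you do end up faking them, this is roughly the shape of it (a sketch, assuming a recent Dagster with `report_runless_asset_event`; the asset keys are placeholders):

```python
# Sketch: record "fake" materializations on the local Dagster instance
# without actually running the assets. Assumes Dagster 1.5+ for
# report_runless_asset_event; asset keys are placeholders.
from dagster import AssetMaterialization, DagsterInstance

instance = DagsterInstance.get()
for key in ["orders", "customers"]:
    instance.report_runless_asset_event(AssetMaterialization(asset_key=key))
```

If the assets are partitioned, you would also pass `partition=` on each AssetMaterialization so the metadata lines up per partition.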
p
We are using partitions.