# ask-community
m
Hi there, new to Dagster and I am setting up a fairly straightforward pipeline that pulls data from an API, processes it, and inserts it into our PostgreSQL DB. I am struggling to figure out how to manage connecting to different databases, i.e. my local testing instance versus production. At the moment, I have something like this, where I have a graph and two different jobs defined, each of which reads in a different YAML file containing either my local testing credentials or the production credentials.
from dagster import config_from_files, file_relative_path, graph

# connect_api, pull_data, process_data, and the database_connection resource
# are defined elsewhere in the project.

@graph
def update_db_metadata():
    """
    Generalised graph that will pull all metadata required. There is a test and a prod job for it below.
    """
    process_data(pull_data(connect_api()))

# Test job - will use local testing database connection
update_db_metadata_test_job = update_db_metadata.to_job(
    name='update_db_metadata_test_job',
    resource_defs={"database": database_connection},
    config=config_from_files(
        [file_relative_path(__file__, "../config/run_config_test.yaml")]
    ),
)

# Production job - will use production database connection
update_db_metadata_prod_job = update_db_metadata.to_job(
    name='update_db_metadata_prod_job',
    resource_defs={"database": database_connection},
    config=config_from_files(
        [file_relative_path(__file__, "../config/run_config_prod.yaml")]
    ),
)
In the Dagit UI, I can then run either the testing or the production job (none of this is deployed yet). Is this the recommended way to handle connecting to different DB environments in testing vs. production, or is there a better approach?
d
Yeah, that seems fine (as in update_db_metadata.to_job). Alternatively, you can switch out the resources instead of the configs: https://docs.dagster.io/guides/dagster/graph_job_op#a-pipeline-with-prod-and-dev-modes
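(For anyone finding this later: a minimal sketch of what that resource-swapping approach could look like, assuming a database_connection resource that takes its connection details as config. The stand-in op, the hostnames/usernames, and the DB_PASSWORD environment variable are illustrative, not from this thread.)

from dagster import StringSource, graph, op, resource

# Hypothetical resource: builds a DB connection from its config (sketch only).
@resource(config_schema={"hostname": StringSource, "username": StringSource, "password": StringSource})
def database_connection(init_context):
    # In a real pipeline this would return a live connection or engine.
    return init_context.resource_config

# Stand-in op so the sketch is self-contained; the real graph would pull,
# process, and insert the API data.
@op(required_resource_keys={"database"})
def process_data(context):
    context.log.info(f"writing to {context.resources.database['hostname']}")

@graph
def update_db_metadata():
    process_data()

# Pre-configured variants of the same resource, one per environment.
local_database = database_connection.configured(
    {"hostname": "localhost", "username": "test_user", "password": "test_password"}
)
prod_database = database_connection.configured(
    {"hostname": "prod-db.example.internal", "username": "svc_user", "password": {"env": "DB_PASSWORD"}}
)

# Same graph, different resources; no per-environment YAML files needed.
update_db_metadata_local_job = update_db_metadata.to_job(
    name="update_db_metadata_local_job", resource_defs={"database": local_database}
)
update_db_metadata_prod_job = update_db_metadata.to_job(
    name="update_db_metadata_prod_job", resource_defs={"database": prod_database}
)

The upside over per-job YAML files is that the environment difference lives in the resource definitions themselves, so the jobs need no run config at launch time.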
m
Great, thank you.