# ask-community
I'm attempting to materialize a DAG with ~500 nodes and postgres is complaining about the initial kickoff. Is this a known issue? Also, is there a set of docs/practices for large-scale deployments?
sqlalchemy.exc.OperationalError: (psycopg2.errors.ProgramLimitExceeded) index row size 3424 exceeds btree version 4 maximum 2704 for index "idx_run_tags"
DETAIL: Index row references tuple (0,27) in relation "run_tags".
HINT: Values larger than 1/3 of a buffer page cannot be indexed.
Consider a function index of an MD5 hash of the value, or use full text indexing.

[SQL: INSERT INTO run_tags (run_id, key, value) VALUES (%(run_id)s, %(key)s, %(value)s)]
[parameters: ({'run_id': 'f09f8a7d-5501-4985-8c88-3046e52e43b9', 'key': '.dagster/repository', 'value': '__repository__@user-code-location'}, {'run_id': 'f09f8a7d-5501-4985-8c88-3046e52e43b9', 'key': 'dagster/partition', 'value': '2023-03-12'}, {'run_id': 'f09f8a7d-....
As a hacky workaround, I dropped the index and the error went away.
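For anyone hitting the same thing: the error means one of the run-tag values is larger than what a btree entry can hold (3424 bytes vs. the 2704-byte maximum here), so the INSERT into run_tags fails when Postgres tries to index it. Instead of dropping the index outright, the Postgres HINT suggests an expression index over an MD5 hash of the value. Below is a minimal sketch of that alternative; it assumes idx_run_tags covers (key, value), which you should confirm against the actual definition (e.g. with \d run_tags in psql) before touching your Dagster storage database:

```sql
-- Recreate idx_run_tags as an expression index over an MD5 hash of the
-- tag value, per the HINT in the error above.
-- ASSUMPTION: the original index was on (key, value); verify first with
-- \d run_tags, and take a backup before altering Dagster's schema.
BEGIN;
DROP INDEX IF EXISTS idx_run_tags;
CREATE INDEX idx_run_tags ON run_tags (key, md5(value));
COMMIT;
```

Caveat: Postgres only uses the hashed column for queries written as md5(value) = md5('...'), so Dagster's own tag-filter queries may not benefit from it. The main effect is that inserts with oversized tag values stop failing while lookups on the leading key column stay indexed.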