# ask-community
I am running into a weird issue. It is probably a Python/pandas issue rather than a Dagster issue, and the only way I could get around it in Python was to use connectorx with an Arrow return type. I have a table with about 20 million rows, and I am trying to hash the rows and upload them to Snowflake, but the job maxes out memory (32 GB) and crashes. To speed this up I switched to connectorx, and with Arrow it now takes 2 minutes to pull the table and uses only 11 GB of memory instead of the 32 GB that pandas was using. However, I cannot figure out how to use it in Dagster, since Dagster is requiring me to use pandas. If anyone has a workaround and/or any advice, I would love to hear it.
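In case it helps frame the question, here is a minimal sketch of what I am trying to do. It assumes you skip the pandas-based Snowflake I/O manager and do the load inside the asset itself (returning `None`), so Dagster never has to hold a pandas DataFrame. `SOURCE_CONN`, `QUERY`, `hash_batch`, and `load_batch_to_snowflake` are all placeholders for my own connection string, query, hashing step, and uploader, not real Dagster or connectorx APIs.

```python
import connectorx as cx
import pyarrow as pa
from dagster import asset

SOURCE_CONN = "postgresql://user:pass@host:5432/db"  # placeholder source DSN
QUERY = "SELECT * FROM big_table"                    # the ~20M-row table


def hash_batch(batch: pa.RecordBatch) -> pa.RecordBatch:
    # Placeholder: compute a per-row hash here (e.g. hashlib over the
    # serialized row) and append it as a new column.
    return batch


@asset
def hashed_rows_to_snowflake() -> None:
    # Pull the table as Arrow instead of pandas; this is the connectorx
    # path that keeps memory around 11 GB instead of 32 GB.
    table: pa.Table = cx.read_sql(SOURCE_CONN, QUERY, return_type="arrow")

    # Hash and upload in record batches so the full hashed copy never
    # has to sit in memory at once.
    for batch in table.to_batches(max_chunksize=500_000):
        hashed = hash_batch(batch)
        load_batch_to_snowflake(hashed)  # placeholder uploader, e.g. the
                                         # Snowflake Python connector or ADBC
```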