Hello guys! We're facing a problem where our code location k8s pod has spikes in memory usage. The jobs themselves each run on a separate pod, so it has to be the sensors that actually trigger the runs. We've made sure we close all resource connections after each sensor run, but nothing really changed 😢
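For context, this is roughly how we make sure connections get closed — a minimal sketch with a hypothetical `FakeConnection` stand-in (the real ones are DB/API clients), using `contextlib.closing` so the close happens even if the evaluation raises:

```python
import contextlib


class FakeConnection:
    """Hypothetical stand-in for a real DB/API connection."""

    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


def evaluate_sensor(make_conn):
    # contextlib.closing guarantees conn.close() runs when the
    # with-block exits, even on an exception mid-evaluation.
    with contextlib.closing(make_conn()) as conn:
        # ... sensor logic using `conn` would go here ...
        return conn


conn = evaluate_sensor(FakeConnection)
print(conn.closed)  # True — connection is closed after every tick
```

So as far as we can tell, connections aren't the thing being leaked.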
For now it seems to do some kind of "garbage collection" at ~97% memory usage, and it has never crashed the pipeline, but we're trying to understand whether something can be done here or if this is perfectly normal behavior. Thank you!
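One thing we're considering to narrow it down (assuming the sensors are plain Python) is snapshotting allocations between ticks with the stdlib `tracemalloc`, to see which objects actually survive a sensor evaluation — a minimal sketch with a simulated allocation standing in for a real sensor tick:

```python
import tracemalloc

tracemalloc.start()

# Simulated sensor-tick allocation (stand-in for real sensor work).
leftover = [b"x" * 1024 for _ in range(100)]

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# `current` is memory still held after the tick; `peak` is the high-water
# mark — a growing `current` across ticks would point at a leak.
print(current > 0, peak >= current)
```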