# ask-community
Hello! I am using a variant of the `execute_k8s_job` function in the `dagster-k8s` library, and have also enabled partial compute log uploads via the `S3ComputeLogManager` (every 30 seconds). For my job run, I can see that the log file in S3 is updated every 30 seconds as expected. However, it does not always contain the most up-to-date logs when I compare my manual tail of the pod logs against the downloaded file. My guess is that when my op has printed only a few log lines for a while, the underlying `LocalComputeLogManager` hasn't yet flushed the buffered output to the local pod disk, so the `S3ComputeLogManager` ends up not uploading the latest logs to S3. Could anyone confirm whether this is the case? And if so, is there any way to tweak the settings so that we always get the latest logs, even when an op emits very few lines (for example, only one log line in the past five minutes)?
Some more specifics: I had a job that produced around 50 log lines at the start, and nothing showed up in S3. Only after it reached 100+ lines did I start seeing logs in the UI.
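For reference, this is roughly what the compute log section of our `dagster.yaml` looks like; the bucket, prefix, and `local_dir` values below are placeholders rather than our real settings:

```yaml
compute_logs:
  module: dagster_aws.s3.compute_log_manager
  class: S3ComputeLogManager
  config:
    bucket: "my-compute-log-bucket"   # placeholder bucket name
    prefix: "dagster-compute-logs"    # placeholder key prefix
    local_dir: "/tmp/compute-logs"    # where logs are written on the pod before upload
    upload_interval: 30               # upload partial logs to S3 every 30 seconds
```

With this config the S3 object does get rewritten every 30 seconds as expected; the issue is only that the uploaded file sometimes lags behind what I see when I tail the pod directly.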
I'm facing the same problem. Because my script emits only a small number of log lines, I don't see any logs until it has finished. Were you able to find the cause?