# ask-community

martin o leary

09/06/2023, 9:37 AM
Hey all - I'm following this example, which uses the `DockerRunLauncher`, with 2 of my own user code images. Both are running their gRPC servers fine, so I can see the code locations in dagit, but one of the images executes runs fine and the other won't. The issue is source code related, but I have no idea how to access the logs from inside that run container. The logging for the run shows that we get far enough that the Docker launch happens:
```
[DockerRunLauncher] Launching run in a new container cb20571c08146c030e69fcf058d3de6ca4a22edaffa7cc6c0f641c3c7d963b41 with image ghcr.io/mycompany/my_second_user_code_image:latest
```
Nothing happens in the UI and I need to cancel the job. I don't see the container running on the host, so it evidently fails and gets removed. I have set the run storage, schedule storage, and event log storage to save to Postgres, and the compute logs go to S3, but I can't figure out where to find the logs from inside that launched container before it exits. So:
1. What config can I set on the run_launcher so that I can keep the container around after a failure and inspect its logs?
2. Where should those logs end up based on my storage setup?
Ok, so explicitly setting auto_remove to false in container_kwargs in the run_launcher config allowed me to see the logs after the failed run:
```yaml
run_launcher:
  module: dagster_docker
  class: DockerRunLauncher
  config:
    container_kwargs:
      auto_remove: false
```
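With auto_remove disabled, the exited container is kept on the host, so (assuming a standard Docker CLI is available there) it can be inspected directly. This is a sketch; `<container_id>` is a placeholder for the ID printed in the run log above:

```shell
# List recently exited containers, including the failed run container
docker ps -a --filter status=exited

# Dump the container's stdout/stderr (replace with the ID from the run log)
docker logs <container_id>
```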

claire

09/06/2023, 6:50 PM
Glad that setting that flag to false worked for you! For reference, you can see where the compute logs will be persisted based on your settings in `dagster.yaml`: https://docs.dagster.io/deployment/dagster-instance#compute-log-storage When using the default `LocalComputeLogManager`, they will be persisted to disk.

martin o leary

09/06/2023, 8:49 PM
Thanks Claire - I’m using S3 for that. The logs never make it to the bucket unfortunately.

claire

09/06/2023, 9:15 PM
Hm, not sure why the logs aren't in the bucket; let me forward your question to the team.

martin o leary

09/07/2023, 6:31 AM
Thanks Claire, it’s not a big deal for me anyway. It might be interesting for the team if it sounds like a bug, in which case I can open an issue. This is just a stepping stone for us to get to a k8s deployment.

Joe Van Drunen

09/07/2023, 5:57 PM
By default, `S3ComputeLogManager` will only upload on termination; something might have prevented the run from being marked as failed/successful and triggering the log file upload. You can configure an upload interval if you'd like:
```yaml
compute_logs:
  module: dagster_aws.s3.compute_log_manager
  class: S3ComputeLogManager
  config:
    upload_interval: 30
```
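For context, that snippet lives in `dagster.yaml`, and `upload_interval` sits alongside the rest of the manager's config. A fuller sketch, where the bucket name and prefix are placeholders for your own values:

```yaml
compute_logs:
  module: dagster_aws.s3.compute_log_manager
  class: S3ComputeLogManager
  config:
    bucket: my-compute-log-bucket   # placeholder: your S3 bucket
    prefix: dagster-compute-logs    # placeholder: key prefix within the bucket
    upload_interval: 30             # also upload partial logs every 30 seconds
```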

martin o leary

09/07/2023, 5:59 PM
Thanks Joe!