# ask-community
d
hi team - would anyone know if there have been any recent changes to how context.pdb.set_trace() works? on older versions of dagster, im fairly confident i was able to properly set breakpoints in local dagit instances using that line before, but now i immediately get the following error:
```
The above exception was caused by the following exception:
bdb.BdbQuit
  File "/home/david.tong/miniconda3/envs/dagster3/lib/python3.8/site-packages/dagster/_core/execution/plan/utils.py", line 54, in op_execution_error_boundary
    yield
  File "/home/david.tong/miniconda3/envs/dagster3/lib/python3.8/site-packages/dagster/_utils/__init__.py", line 439, in iterate_with_context
    next_output = next(iterator)
  File "/home/david.tong/miniconda3/envs/dagster3/lib/python3.8/site-packages/dagster/_core/execution/plan/compute_generator.py", line 122, in _coerce_solid_compute_fn_to_iterator
    result = invoke_compute_fn(
  File "/home/david.tong/miniconda3/envs/dagster3/lib/python3.8/site-packages/dagster/_core/execution/plan/compute_generator.py", line 116, in invoke_compute_fn
    return fn(context, **args_to_pass) if context_arg_provided else fn(**args_to_pass)
  File "/home/david.tong/repos/dagster_workflows/dagster_workflows/dagster_modules/some_job/ops/new_op.py", line 25, in run_some_job_large
    config = load_yaml("/home/david.tong/repos/dagster_workflows/nav_config.yaml")
  File "/home/david.tong/repos/dagster_workflows/dagster_workflows/dagster_modules/some_job/ops/new_op.py", line 25, in run_some_job_large
    config = load_yaml("/home/david.tong/repos/dagster_workflows/nav_config.yaml")
  File "/home/david.tong/miniconda3/envs/dagster3/lib/python3.8/bdb.py", line 88, in trace_dispatch
    return self.dispatch_line(frame)
  File "/home/david.tong/miniconda3/envs/dagster3/lib/python3.8/bdb.py", line 113, in dispatch_line
    if self.quitting: raise BdbQuit
```
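for reference, a stripped-down sketch of the pattern that hits this (the op name matches the traceback, but the body here is simplified and made up):

```python
from dagster import op, OpExecutionContext


@op
def run_some_job_large(context: OpExecutionContext):
    context.pdb.set_trace()  # breakpoint; under dagit on 1.2.4 this now ends in bdb.BdbQuit
    config = {"placeholder": True}  # stand-in for the real load_yaml(...) call
    context.log.info(f"loaded config: {config}")
```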
🤖 1
d
Hi David - I think I know the change that likely caused this: https://github.com/dagster-io/dagster/pull/13099 This is a tricky one because there are certain packages that hang if you try to import them and stdin isn't null - but that may now be interfering with pdb. We'll see if we can come up with a solution that satisfies both requirements
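To illustrate the mechanism outside of Dagster (a made-up one-off script, not our actual launcher code): when a child process's stdin is null, pdb reads EOF at its prompt, treats that as quit, and the next traced line raises bdb.BdbQuit - which is exactly the tail of the traceback above.

```python
import subprocess
import sys
import textwrap

# A child that sets a plain pdb breakpoint and then executes one more line.
child_code = textwrap.dedent(
    """
    import pdb
    pdb.set_trace()
    print("never reached when stdin is closed")
    """
)

# With stdin=DEVNULL (roughly what the launcher does to keep certain imports from
# hanging), the prompt immediately sees EOF and the child dies with bdb.BdbQuit.
# Drop the stdin argument and you get a normal interactive prompt instead.
subprocess.run([sys.executable, "-c", child_code], stdin=subprocess.DEVNULL)
```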
d
got it - so for now, should we consider pdb non-functional until a fix is released, or might there be any workarounds?
d
I would expect it to work in CLI calls like `dagster job execute` (or if you run the code in your own grpc server via `dagster api grpc`: https://docs.dagster.io/concepts/code-locations/workspace-files#running-your-own-grpc-server). I would also expect it to still work on the previous release (1.2.3)
The problem only occurs when dagit or dagster dev is the one spinning up a subprocess
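To make that concrete, a minimal sketch (the file name, job name, and port below are placeholders):

```python
# hypothetical file: debug_job.py
from dagster import job, op, OpExecutionContext


@op
def breakpoint_op(context: OpExecutionContext):
    context.pdb.set_trace()  # the pdb prompt opens in the terminal that ran the CLI


@job
def debug_job():
    breakpoint_op()


# Launch it directly so stdin stays attached to your terminal:
#   dagster job execute -f debug_job.py -j debug_job
#
# Or serve the code yourself and point a workspace at it:
#   dagster api grpc --python-file debug_job.py --host localhost --port 4266
```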
d
nice, yea can confirm `context.pdb` w/ `dagster job execute` looks functional on 1.2.4. should do for now, thanks!
d
Just a heads up that we believe we fixed this issue in 1.2.6, which went out this week - thanks for reporting it
w
Still occurring for me, @daniel, with 1.2.6 on osx
d
Ah yes, should have updated this thread - I think we have actually fixed this for real in https://github.com/dagster-io/dagster/pull/13525 which is going out in the release today or tomorrow (1.2.7)
w
thanks for the heads up!
great branch name
d
Hey guys. I just had the same problem. I'm wondering if this is still supported? https://docs.dagster.io/_apidocs/utilities#dagster._utils.forked_pdb.ForkedPdb
d
It's still supported - should be working again like it was before after the 1.2.7 release that's rolling out right now
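Either spelling should behave the same once this is fixed - as far as I know `context.pdb` is just a convenience around that same ForkedPdb utility. A quick sketch (op names made up):

```python
from dagster import op, OpExecutionContext
from dagster._utils.forked_pdb import ForkedPdb


@op
def via_context(context: OpExecutionContext):
    context.pdb.set_trace()  # convenience property on the op context


@op
def via_forked_pdb(context: OpExecutionContext):
    ForkedPdb().set_trace()  # the documented utility, constructed directly
```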
w
I'm still encountering it with 1.2.7 on OSX (Py 3.10.3)
d
Can you share your repro steps / was it working on past versions?
oh, hm, that change does not seem to have actually made it into 1.2.7! Sorry for the confusion, that was an oversight on our part
d
😢
d
In the meantime - "dagster job execute" should still work, as should running the code in your own grpc server with "dagster api grpc"
w
that's alright
Is there any way to specify ops with `dagster job execute`? (i'm re-running an expensive computation repeatedly before i can debug the downstream asset)
d
`dagster asset materialize` should work too, right?
🌈 1
d
Yeah, the problem only occurs with dagit / dagster dev
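For the expensive-upstream case, something like this should let you re-run only the asset you're debugging (asset names are made up, and I believe the flags are `--select` plus `-f`, but double-check `dagster asset materialize --help`). It assumes the upstream asset has already been materialized so its value can be loaded by the IO manager:

```python
# hypothetical file: assets.py
from dagster import asset, OpExecutionContext


@asset
def expensive_upstream():
    return 42  # stand-in for the slow computation you don't want to repeat


@asset
def downstream(context: OpExecutionContext, expensive_upstream):
    context.pdb.set_trace()  # breakpoint in just this asset
    return expensive_upstream + 1


# Materialize only the downstream asset from the CLI; the upstream value is loaded
# from its last materialization:
#   dagster asset materialize --select downstream -f assets.py
```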
d
By the way, are there any docs for using PyCharm's debugger with Dagster? I think there should be a way. I did it with Airflow once, but don't remember the details.
d
That one may be worth a separate post - i'm not sure about docs, but i believe somebody on the team will know
d
Yeah, I think it will be incredibly valuable. I'll spend some time trying to set this up next week
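If memory serves, the Airflow setup used PyCharm's remote-debug-server pattern, so something similar might work here. A rough sketch, assuming the `pydevd-pycharm` package is installed and a "Python Debug Server" run configuration is listening (host and port are placeholders):

```python
import pydevd_pycharm
from dagster import op, OpExecutionContext


@op
def attach_pycharm(context: OpExecutionContext):
    # Connect back to the PyCharm debug server listening on this host/port
    # (placeholders - match your own run configuration).
    pydevd_pycharm.settrace(
        "localhost", port=5678, stdoutToServer=True, stderrToServer=True
    )
    context.log.info("paused under the PyCharm debugger")
```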
w
@daniel I haven't tested this rigorously, but pdb does appear to be working on asset materializations kicked off from Dagit.
I'm not sure if it's intermittent, but either way I appreciate you and the fix
Although I may have spoken too soon – the pdb prompt appeared in my terminal, but I'm not sure my input was actually evaluated, and no output was printed.
d
Yeah the issue was related to passing input between subprocesses - the fix is in master and will definitely 100% be in the next release
👍 1