# announcements
j
Forgive the basic question, but import pain is driving me nuts:
```
WORKS:
dagster pipeline execute -f module1/submodule1/submodule2/submodule3.py
                         -a my_fancy_repo
                         -p my_fancy_pipeline
                         --mode some_mode
                         -c config.yaml

IMPORT ERROR: no module named module1
dagster pipeline execute -m module1.submodule1.submodule2.submodule3
                         -a my_fancy_repo
                         -p my_fancy_pipeline
                         --mode some_mode
                         -c config.yaml
```
Running from the root folder containing `module1`. All cross imports in submodules are of the format `from module1.submodule1.... import a, b, c`. Tearing my hair out trying to understand why the behaviour is different. Tried to use `-d` but get `Error: Invalid set of CLI arguments for loading repository/pipeline. See --help for details.` Any ideas much appreciated!
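(For anyone hitting the same thing: the difference between the two invocations above comes down to how Python resolves code. This is a generic Python illustration, not Dagster's actual loader — loading from a file path never consults `sys.path`, while importing by dotted name does, so `-m` fails if the package root isn't on the path. The `module1` package here is a throwaway stand-in built in a temp directory.)

```python
import importlib.util
import os
import sys
import tempfile

# Build a throwaway package root containing module1/ (stand-in for
# the real project layout in the thread).
root = tempfile.mkdtemp()
pkg = os.path.join(root, "module1")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("VALUE = 42\n")

# File-based loading (analogous to -f): works without any sys.path
# setup, because we hand Python the file location directly.
spec = importlib.util.spec_from_file_location(
    "module1", os.path.join(pkg, "__init__.py")
)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
print(mod.VALUE)  # 42

# Name-based import (analogous to -m): only resolvable once the
# package root is on sys.path (or PYTHONPATH).
sys.path.insert(0, root)
import module1

print(module1.VALUE)  # 42
```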
m
@prha
p
Strange that `-d` is giving you that error. Can you confirm what version of dagster you’re on?
j
Sure, 0.9.1
p
ah right.
j
Need to update?
p
No, this is because we disabled using the current working directory to resolve modules.
We should consider allowing you to explicitly specify the working directory, though.
Do you mind filing an issue on this?
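(Side note for readers: since the working directory is no longer added to the module search path, a quick way to check whether a dotted path would resolve at all — independent of Dagster — is a plain `importlib` lookup. This is a generic Python check, not a Dagster API:)

```python
import importlib.util


def resolvable(dotted_name: str) -> bool:
    """Return True if `dotted_name` can be imported from the current sys.path."""
    try:
        return importlib.util.find_spec(dotted_name) is not None
    except ModuleNotFoundError:
        # Raised when a parent package in the dotted path is missing.
        return False


print(resolvable("json"))  # True: stdlib is always importable
# A project package like module1.submodule1 returns False unless its
# root directory is on sys.path / PYTHONPATH.
```

If `resolvable(...)` is `False` from the shell you run Dagster in, the `-m` form will fail for the same reason.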
j
Sure thing.
p
Thank you for the detailed report!
j
Just to clarify, would you expect the call using the module flag to work on its own, or will it require setting the directory flag too?
p
Not quite sure yet… ideally it would work on its own, but this has downstream effects on how the scheduled executions are stored and run
Would want to avoid creating multiple copies of the same schedule, just invoked from different directories
j
Great, yea my confusion mainly stems from the fact that running vanilla python modules with `-m` has been my safe haven from import hell!
Thanks for the support, and thanks for Dagster!