# ask-community
e
Hello! I had trouble implementing the method make_valid from the module shapely.validation in one of my classes. After re-installing the package, the Dagster server started having problems executing the operations that used that class, so I decided to restart the server and got the following error while loading the repository.py file:

```
Exception: gRPC server exited with return code 1 while starting up with the command:
"C:/ProgramData/Anaconda3/envs/agrow-analytics/python.exe -m dagster api grpc --lazy-load-user-code --port 60669 --heartbeat --heartbeat-timeout 45 --fixed-server-id 5aa12a0d-9539-4f03-93e9-fff64ee8dcbb --log-level WARNING --use-python-environment-entry-point -f C:/Agrow Programs/dagster/repository.py -d C:/Agrow Programs/dagster/"

  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster\core\host_representation\grpc_server_registry.py", line 209, in _get_grpc_endpoint
    startup_timeout=self._startup_timeout,
  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster\grpc\server.py", line 1105, in __init__
    startup_timeout=startup_timeout,
  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster\grpc\server.py", line 1054, in open_server_process_on_dynamic_port
    startup_timeout=startup_timeout,
  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster\grpc\server.py", line 1023, in open_server_process
    wait_for_grpc_server(server_process, client, subprocess_args, timeout=startup_timeout)
  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster\grpc\server.py", line 964, in wait_for_grpc_server
    f"gRPC server exited with return code {server_process.returncode} while starting up with the command: \"{' '.join(subprocess_args)}\""
```

It seems to be something related to the conda environment we are using, but I'm not able to figure out what could have happened. I'd really appreciate it if you could help me solve this problem. I'm also adding the current dependencies installed in the conda environment we are using.
d
Hi Eduardo - if you run that command in the error message yourself, does it fail with a more useful error?
The "C:/ProgramData/Anaconda3/envs/agrow-analytics/python.exe -m dagster api grpc --lazy-load-user-code --port 60669 --heartbeat --heartbeat-timeout 45 --fixed-server-id 5aa12a0d-9539-4f03-93e9-fff64ee8dcbb --log-level WARNING --use-python-environment-entry-point -f C:/Agrow Programs/dagster/repository.py -d C:/Agrow Programs/dagster/"
e
yes, I'll send you the response
```
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\runpy.py", line 183, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\runpy.py", line 142, in _get_module_details
    return _get_module_details(pkg_main_name, error)
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\runpy.py", line 109, in _get_module_details
    __import__(pkg_name)
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\site-packages\dagster\__init__.py", line 5, in <module>
    from dagster.core.definitions import (
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\site-packages\dagster\core\definitions\__init__.py", line 1, in <module>
    from .config import ConfigMapping
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\site-packages\dagster\core\definitions\config.py", line 6, in <module>
    from dagster.primitive_mapping import is_supported_config_python_builtin
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\site-packages\dagster\primitive_mapping.py", line 7, in <module>
    from .core.types.dagster_type import Any as RuntimeAny
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\site-packages\dagster\core\types\dagster_type.py", line 10, in <module>
    from dagster.core.definitions.events import TypeCheck
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\site-packages\dagster\core\definitions\events.py", line 11, in <module>
    from .event_metadata import EventMetadataEntry, last_file_comp, parse_metadata
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\site-packages\dagster\core\definitions\event_metadata\__init__.py", line 9, in <module>
    from .table import TableColumn, TableColumnConstraints, TableConstraints, TableRecord, TableSchema
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\site-packages\dagster\core\definitions\event_metadata\table.py", line 160, in <module>
    _DEFAULT_TABLE_CONSTRAINTS = TableConstraints(other=[])
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\site-packages\dagster\utils\backcompat.py", line 181, in _inner
    return fn(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\envs\agrow-analytics\lib\site-packages\dagster\core\definitions\event_metadata\table.py", line 152, in __new__
    return super(TableConstraints, cls).__new__(
TypeError: super() argument 1 must be type, not function
```
d
Huh, that's a strange error... what version of Python are you using?
Not a Windows/conda expert, but running "C:/ProgramData/Anaconda3/envs/agrow-analytics/python.exe --version" should tell you which version that environment is actually using.
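Also worth ruling out, though this is only a guess on my part: a PyPI backport such as typing installed into a 3.7 environment can shadow the standard library, and that is a known source of NamedTuple-related TypeErrors like the one above. Quick check from that same interpreter:

```python
# Guesswork diagnostic: see which "typing" module this env resolves.
# The stdlib copy is expected at ...\envs\agrow-analytics\lib\typing.py;
# a path under site-packages suggests a backport is shadowing it
# (fix would be "pip uninstall typing").
import sys
import typing

print(sys.version)
print(typing.__file__)
```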
e
it's Python 3.7.1
d
Is it possible to try in a fresh conda environment, or is that annoying? It seems like this one has gotten into a pretty strange state where it can't even import dagster.
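Something along these lines, assuming conda and a pip-based install — the env name and package list here are just examples, match them to your dependency file:

```
conda create -n dagster-fresh python=3.7
conda activate dagster-fresh
pip install dagster dagit shapely
python -c "import dagster, shapely; print(dagster.__version__)"
```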
e
Hi, after reinstalling a few dependencies, the Dagster workspace finally managed to load repository.py, but now, when I try to "Launch Run" on one of the defined jobs, I get the following message:

```
dagster.core.errors.DagsterLaunchFailedError: Error during RPC setup for executing run:
dagster.check.CheckError: Failure condition: Couldn't import module dagster.core.storage.local_compute_log_manager when attempting to load the configurable class dagster.core.storage.local_compute_log_manager.LocalComputeLogManager

  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster_graphql\implementation\utils.py", line 34, in _fn
    return fn(*args, **kwargs)
  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster_graphql\implementation\execution\launch_execution.py", line 17, in launch_pipeline_execution
    return _launch_pipeline_execution(graphene_info, execution_params)
  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster_graphql\implementation\execution\launch_execution.py", line 51, in _launch_pipeline_execution
    run = do_launch(graphene_info, execution_params, is_reexecuted)
  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster_graphql\implementation\execution\launch_execution.py", line 39, in do_launch
    workspace=graphene_info.context,
  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster\core\instance\__init__.py", line 1546, in submit_run
    SubmitRunContext(run, workspace=workspace)
  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster\core\run_coordinator\default_run_coordinator.py", line 32, in submit_run
    self._instance.launch_run(pipeline_run.run_id, context.workspace)
  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster\core\instance\__init__.py", line 1610, in launch_run
    self._run_launcher.launch_run(LaunchRunContext(pipeline_run=run, workspace=workspace))
  File "c:\programdata\anaconda3\envs\dagster\lib\site-packages\dagster\core\launcher\default_run_launcher.py", line 108, in launch_run
    res.message, serializable_error_info=res.serializable_error_info
```
d
Hm, it seems like your environment may still be in a weird state. When the launcher opens a subprocess, that process can no longer import the dagster module, even though the parent process presumably imported dagster fine or it wouldn't have been able to load the repository at all.
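One thing I notice in the two tracebacks — just an observation, not a confirmed diagnosis — is that the dagit-side frames come from the dagster conda env while the gRPC command runs the agrow-analytics env. It might be worth confirming the failing import directly in the environment that launches the run:

```python
# Sketch: run with the interpreter from the env that executes the run, e.g.
#   C:/ProgramData/Anaconda3/envs/agrow-analytics/python.exe check_launch.py
# (script name is hypothetical). If this raises, the DagsterLaunchFailedError
# above is just surfacing a broken install in that env.
from importlib import import_module

mod = import_module("dagster.core.storage.local_compute_log_manager")
print(mod.LocalComputeLogManager)  # the configurable class the launcher failed to load
```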