Ramkumar KB
05/07/2023, 2:35 PM
I am using Dagster `1.3.3` with Conda and Windows to run locally. Conda is set up correctly. I am trying to run the `hello_dagster.py` from the Dagster docs, but no luck thus far.
When I try this - `dagster dev --code-server-log-level DEBUG -p 6050 -f hello_dagster.py`
- Dagster starts and shuts down immediately. However, when I try this - `dagster dev --code-server-log-level trace -p 6050 -f hello_dagster.py`
- Dagster at least comes up! But the code does not execute, with an error on the gRPC server.
It is rather strange that `DEBUG` and `trace` in the command can make so much difference...
Any tips or pointers would be really helpful. Thanks again.
DEBUG
===
(dagster_demo) C:\some\path>dagster dev --code-server-log-level DEBUG -p 6050 -f hello_dagster.py
2023-05-07 22:08:24 +0800 - dagster - INFO - Launching Dagster services...
2023-05-07 22:08:38 +0800 - dagster - INFO - Started Dagster code server for file hello_dagster.py on port 52818 in process 12132
2023-05-07 22:08:38 +0800 - dagster - INFO - Started Dagster code server for file hello_dagster.py on port 52820 in process 980
2023-05-07 22:08:44 +0800 - dagster - INFO - Shutting down Dagster services...
2023-05-07 22:08:44 +0800 - dagster - INFO - Dagster services shut down.
trace
===
Exception: gRPC server exited with return code 1 while starting up with the command:
<stack trace>
lib\site-packages\dagster\_core\host_representation\grpc_server_registry.py", line 194, in _get_grpc_endpoint
server_process = GrpcServerProcess(...
Ramkumar KB
05/08/2023, 3:52 AM
I started the `gRPC Code Server` separately and then tried to connect Dagster to it - but when I do that, both the Code Server and Dagster now exit -
GRPC Code Server
===
dagster api grpc --python-file hello_dagster.py --host 0.0.0.0 --port 7001
Dagster Services
===
dagster dev --code-server-log-level trace -w workspace.yaml
Ramkumar KB
05/08/2023, 4:09 AM
The `grpc` server exits, but it seems to be OK -
(dagster_demo) C:\some\path\miniforge3\envs\dagster_demo>python.exe -m dagster api grpc --lazy-load-user-code --port 52707
Ramkumar KB
05/08/2023, 12:07 PM
Tried Python `3.9`, `3.10`, and `3.11` - still no luck. Still the same - `Exception: gRPC server exited with return code 1 while starting up with the command:`
sean
05/08/2023, 4:28 PM
> However, when I try this - dagster dev --code-server-log-level trace -p 6050 -f hello_dagster.py - the Dagster at least comes up! But the code does not execute with an error on the GRPC-Server.
What did you mean here by "the Dagster at least comes up"?
Ramkumar KB
05/09/2023, 4:15 AM
`--code-server-log-level debug` starts and shuts down Dagster, whilst `--code-server-log-level trace` brings up the Dagster UI. In the UI I can see a message saying that the grpc server `exited with 1`. Does the change of log levels have an impact on how Dagster behaves?
Ramkumar KB
05/09/2023, 4:26 AM
(dagster_39) C:\some\path\dagster_poc\hello_dagster>dagster dev --code-server-log-level info -p 4001 -f hello-dagster.py
2023-05-09 12:22:11 +0800 - dagster - INFO - Launching Dagster services...
2023-05-09 12:22:27 +0800 - dagster - INFO - Started Dagster code server for file hello-dagster.py on port 53588 in process 18928
2023-05-09 12:22:27 +0800 - dagster - INFO - Started Dagster code server for file hello-dagster.py on port 53590 in process 24708
2023-05-09 12:22:31 +0800 - dagster - INFO - Shutting down Dagster services...
2023-05-09 12:22:31 +0800 - dagster - INFO - Dagster services shut down.
Ramkumar KB
05/09/2023, 4:35 AM
With the `trace` level, this is in the log -
C:\Users\someuser\apps\miniforge3\envs\dagster_39\lib\site-packages\dagster\_core\workspace\context.py:589: UserWarning: Error loading repository location hello-dagster.py:Exception: gRPC server exited with return code 1 while starting up with the command: "C:\Users\someuser\apps\miniforge3\envs\dagster_39\python.exe -m dagster api grpc --lazy-load-user-code --port 53139 --heartbeat --heartbeat-timeout 120 --fixed-server-id c4d859cc-f13f-49c3-859d-3a431989701f --log-level trace --inject-env-vars-from-instance --instance-ref {"__class__": "InstanceRef", "compute_logs_data": {"__class__": "ConfigurableClassData", "class_name": "LocalComputeLogManager", "config_yaml": "base_dir: C:\\Users\\someuser\\dev\\pythonprojects\\dagster_poc\\dagster_home2\\storage\n", "module_name": "dagster.core.storage.local_compute_log_manager"}, "custom_instance_class_data": null, "event_storage_data": {"__class__": "ConfigurableClassData", "class_name": "SqliteEventLogStorage", "config_yaml": "base_dir: C:\\Users\\someuser\\dev\\pythonprojects\\dagster_poc\\dagster_home2\\history\\runs\\\n", "module_name": "dagster.core.storage.event_log"}, "local_artifact_storage_data": {"__class__": "ConfigurableClassData", "class_name": "LocalArtifactStorage", "config_yaml": "base_dir: C:\\Users\\someuser\\dev\\pythonprojects\\dagster_poc\\dagster_home2\n", "module_name": "dagster.core.storage.root"}, "run_coordinator_data": {"__class__": "ConfigurableClassData", "class_name": "DefaultRunCoordinator", "config_yaml": "{}\n", "module_name": "dagster.core.run_coordinator"}, "run_launcher_data": {"__class__": "ConfigurableClassData", "class_name": "DefaultRunLauncher", "config_yaml": "{}\n", "module_name": "dagster"}, "run_storage_data": {"__class__": "ConfigurableClassData", "class_name": "SqliteRunStorage", "config_yaml": "base_dir: C:\\Users\\someuser\\dev\\pythonprojects\\dagster_poc\\dagster_home2\\history\\\n", "module_name": "dagster.core.storage.runs"}, "schedule_storage_data": {"__class__": 
"ConfigurableClassData", "class_name": "SqliteScheduleStorage", "config_yaml": "base_dir: C:\\Users\\someuser\\dev\\pythonprojects\\dagster_poc\\dagster_home2\\schedules\n", "module_name": "dagster.core.storage.schedules"}, "scheduler_data": {"__class__": "ConfigurableClassData", "class_name": "DagsterDaemonScheduler", "config_yaml": "{}\n", "module_name": "dagster.core.scheduler"}, "secrets_loader_data": null, "settings": {"code_servers": {"local_startup_timeout": 120}, "telemetry": {"enabled": false}}, "storage_data": {"__class__": "ConfigurableClassData", "class_name": "DagsterSqliteStorage", "config_yaml": "base_dir: C:\\Users\\someuser\\dev\\pythonprojects\\dagster_poc\\dagster_home2\n", "module_name": "dagster.core.storage.sqlite_storage"}} --location-name hello-dagster.py -f hello-dagster.py -d C:\Users\someuser\dev\pythonprojects\dagster_poc\hello_dagster"
Stack Trace:
File "C:\Users\someuser\apps\miniforge3\envs\dagster_39\lib\site-packages\dagster\_core\host_representation\grpc_server_registry.py", line 194, in _get_grpc_endpoint
server_process = GrpcServerProcess(
File "C:\Users\someuser\apps\miniforge3\envs\dagster_39\lib\site-packages\dagster\_grpc\server.py", line 1281, in __init__
server_process, self.port = _open_server_process_on_dynamic_port(
File "C:\Users\someuser\apps\miniforge3\envs\dagster_39\lib\site-packages\dagster\_grpc\server.py", line 1214, in _open_server_process_on_dynamic_port
server_process = open_server_process(
File "C:\Users\someuser\apps\miniforge3\envs\dagster_39\lib\site-packages\dagster\_grpc\server.py", line 1185, in open_server_process
wait_for_grpc_server(server_process, client, subprocess_args, timeout=startup_timeout)
File "C:\Users\someuser\apps\miniforge3\envs\dagster_39\lib\site-packages\dagster\_grpc\server.py", line 1119, in wait_for_grpc_server
raise Exception(
Ramkumar KB
05/09/2023, 5:27 AM
I have set `telemetry` to `false` in the `dagster.yaml` - Does the `grpc` server need a proxy to start?
Ramkumar KB
05/09/2023, 8:37 AM
I also tried setting the `no_proxy=127.0.0.1` environment variable (however, that seems to be for separate instances of the code server and Dagster) - anyway, I set that too in case it makes any difference, but still no luck.
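For what it's worth, the `no_proxy` idea is about keeping loopback traffic out of a corporate proxy: gRPC's native core honors `http_proxy`/`https_proxy`-style variables, so a channel to `127.0.0.1` can get misrouted unless loopback is excluded. A hedged sketch of setting this from Python before launching (an assumption about the environment, not a confirmed fix for this thread):

```python
import os

# Illustrative only: exclude loopback hosts from any configured proxy so a
# local gRPC channel (webserver -> code server) is not routed through it.
for var in ("no_proxy", "NO_PROXY"):
    existing = os.environ.get(var, "")
    hosts = {h for h in existing.split(",") if h} | {"127.0.0.1", "localhost"}
    os.environ[var] = ",".join(sorted(hosts))

print(os.environ["no_proxy"])
```

Note this only affects processes launched from the same shell/environment afterwards; it does nothing for servers that are already running.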
I tried this on another machine (outside the corporate firewall) - `dagster dev --code-server-log-level debug -f hello-dagster.py`
- this came up properly; however, `dagster dev --code-server-log-level trace -f hello-dagster.py`
- gave the same `grpc` server error - `gRPC server exited with return code 1 while starting up with the command:`
Still confused why changing the log-levels of the code server would have such an impact on the outcomes... Any tips here would be super helpful!
sean
05/09/2023, 11:46 AM
> Still confused why changing the log-levels of the code server would have such an impact on the outcomes... Any tips here would be super helpful!
What's going on here is that `trace` is actually not a valid log level and an error is being thrown, which you should be able to find in the backtrace:
dagster._check.CheckError: Invariant failed. Description: Bad value for log level trace: permissible values are 'INFO', 'CRITICAL', 'WARN', 'WARNING', 'DEBUG', 'FATAL', 'ERROR'.
This is causing the grpc process to be instantly terminated. The `dagster dev`
command actually launches two processes (the grpc server and the webserver); the webserver continues to launch normally, but then can't find the grpc process. No doubt we should handle this better (an invalid log level or an instantly failing grpc server should also shut down the webserver). I will open an issue.
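To make that concrete: Python's standard `logging` module defines no `TRACE` level, so any validation against the standard level names rejects it. A minimal, hypothetical version of that kind of check (the helper name is mine, not Dagster's actual code):

```python
import logging

# The permissible names from the error message above; note "trace" is absent.
VALID_LEVELS = {"CRITICAL", "FATAL", "ERROR", "WARN", "WARNING", "INFO", "DEBUG"}

def coerce_valid_log_level(level: str) -> int:
    """Hypothetical helper: map a level name to its numeric value, or raise."""
    name = level.upper()
    if name not in VALID_LEVELS:
        raise ValueError(
            f"Bad value for log level {level}: permissible values are {sorted(VALID_LEVELS)}"
        )
    # logging.getLevelName maps a known name to its numeric level.
    return logging.getLevelName(name)

print(coerce_valid_log_level("DEBUG"))  # 10
```

So `--code-server-log-level DEBUG` passes validation, while `trace` raises at startup, killing the freshly spawned grpc child process.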
Since this seems to be an issue with your corporate firewall, I am going to reach out to some team members more knowledgeable about networking stuff.
Ramkumar KB
05/09/2023, 1:46 PM
I tried running the `gRPC` Code Server and Dagster UI in separate processes - but they both shut down as soon as the Dagster UI tries to come up. There are no logs anywhere to find why this is happening. I am using Dagster OSS
- so ideally, it should have nothing to do with the corporate firewall -
1st Process - gRPC Code Server (standalone)
(dagster_39) C:\Users\someuser\dev\pythonprojects\dagster_poc\hello_dagster>dagster api grpc --python-file hello-dagster-cs.py --port 7001
2023-05-09 21:14:30 +0800 - dagster - INFO - Started Dagster code server for file hello-dagster-cs.py on port 7001 in process 20528
==> This shuts down with no logs at all, once the 2nd process tries to connect to it
2nd Process - Dagster UI (pointing to a workspace.yaml)
(dagster_39) C:\Users\someuse\dev\pythonprojects\dagster_poc\hello_dagster>dagster dev -w workspace.yaml
2023-05-09 21:15:32 +0800 - dagster - INFO - Launching Dagster services...
2023-05-09 21:15:42 +0800 - dagster - INFO - Shutting down Dagster services...
2023-05-09 21:15:42 +0800 - dagster - INFO - Dagster services shut down.
workspace.yaml
# workspace.yaml
load_from:
  - grpc_server:
      host: 127.0.0.1
      port: 7001
      location_name: "my_grpc_code_server"
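One quick check worth doing at this point (my suggestion, not something from the thread): verify the standalone code server is actually accepting TCP connections on `127.0.0.1:7001` before starting `dagster dev -w workspace.yaml`. A small sketch:

```python
import socket

def port_is_open(host: str = "127.0.0.1", port: int = 7001, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within `timeout`."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False while the code server claims to be running, something below Dagster (a local firewall rule, proxy, or endpoint-protection software) is interfering with loopback connections.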
Ramkumar KB
05/09/2023, 1:50 PM
The 1st process (the standalone `gRPC Code Server`) at least comes up and does not shut down. But as soon as the Dagster UI tries to connect to it - both the processes shut down.
These are server processes - I am expecting them to throw errors in some logs and continue serving - why shut down?
Ramkumar KB
05/09/2023, 1:58 PM
I also notice that `dagster dev` starts the `gRPC Code Server` twice and then shuts down. Is this due to some retry-twice-and-then-shutdown behavior?
(dagster_39) C:\Users\someuser\dev\pythonprojects\dagster_poc\hello_dagster>dagster dev --code-server-log-level DEBUG -p 4001 -f hello-dagster-cs.py
2023-05-09 21:53:42 +0800 - dagster - INFO - Launching Dagster services...
2023-05-09 21:53:57 +0800 - dagster - INFO - Started Dagster code server for file hello-dagster-cs.py on port 63598 in process 11680
2023-05-09 21:54:00 +0800 - dagster - INFO - Started Dagster code server for file hello-dagster-cs.py on port 63604 in process 19640
2023-05-09 21:54:02 +0800 - dagster - INFO - Shutting down Dagster services...
2023-05-09 21:54:02 +0800 - dagster - INFO - Dagster services shut down.
sean
05/09/2023, 2:01 PM
Ramkumar KB
05/09/2023, 2:18 PM
Ramkumar KB
05/09/2023, 3:58 PM
So what I will do is to test the same in a Windows environment outside our corporate network - to isolate if this is a generic Windows issue or something more specific with the corporate Windows env. Hoping the `gRPC` server is tested on Windows also...
Thanks again for the help.
sean
05/09/2023, 4:14 PM
> So what I will do is to test the same in a Windows environment outside our corporate network - to isolate if this is a generic Windows issue or something more specific with the corporate Windows env.
OK, great. We do test and support Windows, but I believe that 100% of our developers (and the majority of our users) use unix, so that's where the best testing and support is.