# ask-community
x
After switching to `@op` and `@graph`, my gRPC server on the staging box no longer works:
```
OSError: [Errno 30] Read-only file system: '/export/content/lid/apps/dagster-web'
  File "/export/content/lid/apps/meeseeks-backend-dagster-grpc-server/i001/libexec/meeseeks-backend_015263caabc6969a26157a73f5dcbb71f6e53ea18d1e4d986685ee6b374c9871/site-packages/dagster/core/execution/plan/execute_plan.py", line 193, in _dagster_event_sequence_for_step
    for step_event in check.generator(step_events):
  File "/export/content/lid/apps/meeseeks-backend-dagster-grpc-server/i001/libexec/meeseeks-backend_015263caabc6969a26157a73f5dcbb71f6e53ea18d1e4d986685ee6b374c9871/site-packages/dagster/core/execution/plan/execute_step.py", line 326, in core_dagster_event_sequence_for_step
    for evt in _type_check_and_store_output(step_context, user_event, input_lineage):
  File "/export/content/lid/apps/meeseeks-backend-dagster-grpc-server/i001/libexec/meeseeks-backend_015263caabc6969a26157a73f5dcbb71f6e53ea18d1e4d986685ee6b374c9871/site-packages/dagster/core/execution/plan/execute_step.py", line 380, in _type_check_and_store_output
    for evt in _store_output(step_context, step_output_handle, output, input_lineage):
  File "/export/content/lid/apps/meeseeks-backend-dagster-grpc-server/i001/libexec/meeseeks-backend_015263caabc6969a26157a73f5dcbb71f6e53ea18d1e4d986685ee6b374c9871/site-packages/dagster/core/execution/plan/execute_step.py", line 490, in _store_output
    handle_output_res = output_manager.handle_output(output_context, output.value)
  File "/export/content/lid/apps/meeseeks-backend-dagster-grpc-server/i001/libexec/meeseeks-backend_015263caabc6969a26157a73f5dcbb71f6e53ea18d1e4d986685ee6b374c9871/site-packages/dagster/core/storage/fs_io_manager.py", line 119, in handle_output
    mkdir_p(os.path.dirname(filepath))
  File "/export/content/lid/apps/meeseeks-backend-dagster-grpc-server/i001/libexec/meeseeks-backend_015263caabc6969a26157a73f5dcbb71f6e53ea18d1e4d986685ee6b374c9871/site-packages/dagster/utils/__init__.py", line 150, in mkdir_p
    os.makedirs(path)
  File "/export/apps/python/3.7/lib/python3.7/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/export/apps/python/3.7/lib/python3.7/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/export/apps/python/3.7/lib/python3.7/os.py", line 213, in makedirs
    makedirs(head, exist_ok=exist_ok)
  [Previous line repeated 4 more times]
  File "/export/apps/python/3.7/lib/python3.7/os.py", line 223, in makedirs
    mkdir(name, mode)
```
@sandy sorry to ping you directly, I really need some help here. I'm leading my team's conversion of our stuff onto Dagster, and it has given us too many issues today. `/export/content/lid/apps/dagster-web` is the HOME of Dagit, which is on host `A`; the gRPC server is running on host `B`, which only has access to `/export/content/lid/apps/meeseeks-backend-dagster-grpc-server`. It was trying to create a folder under `dagster-web`, which it does not have access to. @daniel helped me previously by asking me to remove the local compute log manager, so I switched to `NoOpComputeLogManager`, which works perfectly. I have no idea why switching from `@solid` to `@op` would cause the gRPC server to behave differently.
s
I have a guess about what’s going on here. The new APIs include a change to the default IO manager - it now defaults to the filesystem IO manager instead of the memory IO manager, which allows re-execution and multiprocess execution to work by default.
c
^
s
It will try to store using whatever base directory is specified in your dagster.yaml
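For reference, that base directory is the `local_artifact_storage` entry in the instance's `dagster.yaml`; a sketch of what that configuration might look like here (the `base_dir` value is taken from the traceback and is an assumption about this instance's config):

```yaml
# dagster.yaml (hypothetical): fs_io_manager stores pickled op outputs
# under this instance-wide storage root by default
local_artifact_storage:
  module: dagster.core.storage.root
  class: LocalArtifactStorage
  config:
    base_dir: /export/content/lid/apps/dagster-web
```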
x
how do i change it back to the in-memory IO manager?
```
resource_defs (Optional[Dict[str, ResourceDefinition]]) – Resources that are required by this graph for execution. If not defined, io_manager will default to filesystem.
```
s
Instead of including your graph directly in the repo, you can include `my_graph.to_job(resource_defs={"io_manager": mem_io_manager}, executor_def=in_process_executor)`
x
thank you sandy!
worked with `my_graph.to_job(resource_defs={"io_manager": mem_io_manager}, executor_def=in_process_executor)`!
s
that is great to hear!