# ask-community
The auto materialization doesn't like me (running 1.3.3, but noting this was happening on 1.3.2 as well) and seems to die. It will do this for a while, and then the entire Dagster daemon just shuts itself down:
```
dagster._core.errors.DagsterUserCodeUnreachableError: Could not reach user code server. gRPC Error code: UNAVAILABLE
  File "c:\dagster-warehouse-fEr6haF0-py3.10\lib\site-packages\dagster\_daemon\", line 222, in core_loop
    yield from self.run_iteration(workspace_process_context)
  File "c:\dagster-warehouse-fEr6haF0-py3.10\lib\site-packages\dagster\_daemon\", line 98, in run_iteration
    external_job = code_location.get_external_job(
  File "c:\dagster-warehouse-fEr6haF0-py3.10\lib\site-packages\dagster\_core\host_representation\", line 139, in get_external_job
    subset_result = self.get_subset_external_job_result(selector)
  File "c:\dagster-warehouse-fEr6haF0-py3.10\lib\site-packages\dagster\_core\host_representation\", line 761, in get_subset_external_job_result
    return sync_get_external_job_subset_grpc(
  File "c:\dagster-warehouse-fEr6haF0-py3.10\lib\site-packages\dagster\_api\", line 29, in sync_get_external_job_subset_grpc
  File "c:\dagster-warehouse-fEr6haF0-py3.10\lib\site-packages\dagster\_grpc\", line 291, in external_pipeline_subset
    res = self._query(
  File "c:\dagster-warehouse-fEr6haF0-py3.10\lib\site-packages\dagster\_grpc\", line 157, in _query
  File "c:\dagster-warehouse-fEr6haF0-py3.10\lib\site-packages\dagster\_grpc\", line 140, in _raise_grpc_exception
    raise DagsterUserCodeUnreachableError(
```
@owen @johann - mind taking a look?
How do you have dagster deployed?
`dagster dev`, just on my local machine
It may be that your local process is running out of memory or something like that. Did auto materializing create a lot of runs on your deployment?
it won't even start any runs 😞
I'll see if I can get a box with more memory (my apologies for the delayed response too; Slack is blocked at my workplace, and it's sometimes hard to remember to check here after a long day at work)
No worries. There are a couple of things that can cause that error; I'll describe what's going on behind the scenes:
• `dagster dev` is spinning up subprocesses to load your asset code. Dagit and the daemon processes communicate with them via gRPC.
• That gRPC server isn't responding for whatever reason. In particular, it stopped responding while the daemon was fetching info about the assets it had decided to launch due to your AutoMaterializePolicies.
So my main guess would be that the gRPC server OOMed or just got overloaded. Is it consistently raising this error with the same stack trace (on `external_job = code_location.get_external_job(` in particular)?
I've removed most of the auto-mats and am slowly adding them back in (while trying to arrange a bigger machine to run it on), but yes, same line
@Harrison Conlin - I'm curious how this ended up turning out for you?
@sandy yo, sorry for the delay. I suspect part of the problem was to do with the number of assets I have with dynamic partitions. For unrelated reasons, I'm removing all the "intermediate assets" with dynamic partitions, and it seems to be behaving better. (Keeping in mind, I work in government: our developer boxes are Windows machines with multiple security products running on them, so half my 16GB of RAM and at least 30% of the CPU is taken by those. It struggles at times 🙂)
got it - thanks for the update! so are you currently using auto-materialize policies? any feedback on them in general?
I am; they're great. The only feedback I have is the lack of visibility into why a run was (or perhaps was not) triggered, but that's improved with one of the recent updates.