# dagster-plus
m
Hello, today I'm having problems deploying to production. Is it something on my end, or changes on your side?
```
dagster._core.errors.DagsterUserCodeUnreachableError: User code server request timed out due to taking longer than 60 seconds to complete.

  File "/dagster-cloud/dagster_cloud/workspace/user_code_launcher/user_code_launcher.py", line 674, in _update_location_data
    upload_location_data=self._get_upload_location_data(
  File "/dagster-cloud/dagster_cloud/workspace/user_code_launcher/user_code_launcher.py", line 586, in _get_upload_location_data
    external_repository_chunks = list(
  File "/dagster/dagster/_grpc/client.py", line 346, in streaming_external_repository
    for res in self._streaming_query(
  File "/dagster/dagster/_grpc/client.py", line 184, in _streaming_query
    self._raise_grpc_exception(
  File "/dagster/dagster/_grpc/client.py", line 135, in _raise_grpc_exception
    raise DagsterUserCodeUnreachableError(

The above exception was caused by the following exception:
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
	status = StatusCode.DEADLINE_EXCEEDED
	details = "Deadline Exceeded"
	debug_error_string = "UNKNOWN:Deadline Exceeded {grpc_status:4, created_time:"2023-07-21T13:31:38.611766092+00:00"}"
>

  File "/dagster/dagster/_grpc/client.py", line 180, in _streaming_query
    yield from self._get_streaming_response(
  File "/dagster/dagster/_grpc/client.py", line 169, in _get_streaming_response
    yield from getattr(stub, method)(request, metadata=self._metadata, timeout=timeout)
  File "/usr/local/lib/python3.10/site-packages/grpc/_channel.py", line 475, in __next__
    return self._next()
  File "/usr/local/lib/python3.10/site-packages/grpc/_channel.py", line 881, in _next
    raise self
```
j
hey Marek, what's your org name?
I am not aware of any ongoing incident
m
sudolabs
Looks like GitHub has an ongoing incident: https://www.githubstatus.com/incidents/6503rcn8s34s
j
@Marek Vigaš I'm seeing that your gRPC servers are starting, but they are kicking off non-isolated runs as soon as they init and are currently CPU-throttled
There might have been something in the change you deployed that is causing that?
m
well I'm trying to migrate to 1.4
so I changed AutoMaterializePolicy
j
ah that might be it
I can file a cpu request increase for your org but it won't be released till next week
m
Sorry, I should have mentioned that I'm trying to migrate 😄 but it starts fine locally
j
no worries! sorry for this opaque error
To rule out the GitHub issue I'd recommend reverting to the non-1.4 code, and I'll go ahead and file a CPU increase for your org
m
But I tried to disable the Auto-materializing daemon
j
hmm, it's possible you are invoking a materialize-all-assets job directly in some of your changes?
I'm seeing an asset job attempted to get kicked off immediately after the server starts
m
I have freshness policies that are probably overdue now
j
```yaml
non_isolated_runs:
  enabled: False
```
do you have non isolated runs enabled?
(this might be an interesting gotcha with serverless non-isolated + 1.4 freshness policies)
m
> do you have non isolated runs enabled?
I don't think so, where can I check?
here is my dagster cloud config
j
I think you have them enabled; I forget if the default value is true
If you're willing, it would be informative to: 1. revert to a working state, 2. set `enabled: False`, 3. deploy the 1.4 upgrade
👍 1
you can view the deployment settings by clicking the gear icon
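For reference, the relevant section of the deployment settings (behind the gear icon mentioned above) looks roughly like this; this is a sketch, and `max_concurrent_non_isolated_runs` is an assumption about the available keys, not copied from the thread:

```yaml
# Dagster+ deployment settings fragment (sketch)
non_isolated_runs:
  enabled: false
  # assumed knob: cap on how many non-isolated runs share the code server
  max_concurrent_non_isolated_runs: 1
```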
m
I'm on it
🚀 1
I got it to a working state, ran some assets, set run isolation to false, and now I'm deploying the 1.4 migration
got the same error 😕
m
yes
a
kk, 1.4.1 with the fix is in the release pipeline; it should be available in the next hour or two
🚀 1
m
Good to know, I will wait for 1.4.1 and test it today
I got it working with 1.4.2