# ask-community

Edson Henrique

05/17/2023, 2:25 PM
hello everyone, this morning one of my workflows didn't work, but i think the problem might be with dagster, because nothing changed in the code in the last 6 days. it keeps the process running but doesn't show anything
the op is just
running it locally throws me this error
idk why it didn't appear in the Dagster Cloud UI
adding more context here: has anyone already faced this error?
```
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host storage.googleapis.com:443 ssl:default [Network is unreachable]
```

Tim Castillo

05/17/2023, 4:48 PM
You mentioned that locally it throws the SSL error. If you go into the logs (compute logs or otherwise) for the failed runs, are you getting the same exception or something else? Asking because this looks like a proxy or firewall issue in the local case. Knowing whether you got the same or a similar error in Dagster Cloud would help us diagnose.

Edson Henrique

05/17/2023, 7:18 PM
@Tim Castillo on the cloud it doesn't return anything, but the job keeps running forever. locally it throws the ssl error.
but it's strange because the code hasn't changed and the pipeline worked well until yesterday at 5 pm.

Tim Castillo

05/17/2023, 7:20 PM
Hmm, are you using Hybrid?

Edson Henrique

05/17/2023, 7:21 PM
serverless deployment
i'm using the same service account for everything in dagster. the bigquery jobs run without problems, but this job that interacts with gcs is stuck
i've never seen this lol
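(Editor's note: a quick way to narrow down a "Network is unreachable" error like the one above is to test raw TCP reachability to the GCS endpoint from the same environment the op runs in. This is a minimal sketch, not part of the thread; `can_reach` is a made-up helper name, not a Dagster or GCS API.)

```python
import socket

def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, refused connections, and unreachable networks
        return False

# Run this from inside the op (or locally) to separate network problems
# from library problems
print(can_reach("storage.googleapis.com", 443))
```

If this returns False while BigQuery calls succeed, the problem is the network path to `storage.googleapis.com` rather than the job code.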

Tim Castillo

05/18/2023, 2:06 PM
Let me check in with the team to see if we can figure this out! Thanks for your patience

Edson Henrique

05/18/2023, 2:10 PM
idk if it's a dagster problem, because i use pandas read_csv(gcs_path) and this error comes from pandas
but what doesn't make sense is that it just stopped working
curl seems to work
hey @Tim Castillo, just changed the read_csv method and it worked. before:
```python
df = pd.read_csv(f"gcs://{bucket}/{path}", sep=";")
```
after:
```python
import io

import pandas as pd

blob = bucket.blob(path)
data = blob.download_as_bytes()
df = pd.read_csv(io.BytesIO(data), sep=";")
```
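(Editor's note: the workaround likely sidesteps the failing layer because `pd.read_csv("gcs://…")` routes through the gcsfs filesystem backend, which uses aiohttp, the library raising the `ClientConnectorError` above, while `google-cloud-storage` downloads via its own HTTP transport. Once the bytes are in memory, pandas parses them with no network access at all, as this self-contained sketch shows; the sample bytes stand in for `blob.download_as_bytes()`.)

```python
import io

import pandas as pd

# Sample bytes standing in for the downloaded blob; semicolon-separated,
# matching the sep=";" used in the thread
data = b"id;value\n1;foo\n2;bar\n"

# Parsing from an in-memory buffer never touches aiohttp/gcsfs
df = pd.read_csv(io.BytesIO(data), sep=";")
print(df.shape)  # → (2, 2)
```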