# ask-community
j
Hi Team, getting this error from an independent grpc code server:
dagster._core.errors.DagsterLaunchFailedError: Error during RPC setup for executing run: sqlalchemy.exc.ArgumentError: Column expression, FROM clause, or other columns clause element expected, got [Column('run_body', Text(), table=<runs>), Column('status', String(length=63), table=<runs>)]. Did you mean to say select(Column('run_body', Text(), table=<runs>), Column('status', String(length=63), table=<runs>)
The only issue I can see is that the SQLAlchemy version is different in the daemon and the gRPC code server, but I didn’t think that could be an issue (I was not expecting communication between the code server and the database). Thoughts? Thanks
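For context: that ArgumentError is characteristic of the SQLAlchemy 1.x list-style select() call running under SQLAlchemy 2.x, which removed it. A minimal sketch of the API change (illustrative only, not Dagster’s actual query code):
```python
# Minimal illustration of the SQLAlchemy 1.x -> 2.x select() change
# behind this error (not Dagster's actual query code).
from sqlalchemy import Column, MetaData, String, Table, Text, select

metadata = MetaData()
runs = Table(
    "runs",
    metadata,
    Column("run_body", Text()),
    Column("status", String(63)),
)

# 1.x style: columns passed as a single list. Under SQLAlchemy 2.x this
# raises ArgumentError: "Column expression, FROM clause, or other
# columns clause element expected, got [...]"
# query = select([runs.c.run_body, runs.c.status])

# 1.4+/2.x style: columns passed positionally.
query = select(runs.c.run_body, runs.c.status)
```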
d
Hi Jose - we’re pushing out a pin for the sqlalchemy version today; I suspect downgrading sqlalchemy to 1.x will resolve this. The code server communicates with the database when you launch a run using the default run launcher, since that’s where the run executes
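Until the pinned release is out, a minimal sketch of pinning it yourself in the code server’s environment (assuming a pip-based install):
```
pip install "sqlalchemy<2.0"
```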
j
Hi Daniel, thanks for the prompt response. In that case I would expect it, but this is an independent gRPC server that is declared in the workspace file; it should not talk to the DB at all, right?
just to be sure I understand the software architecture
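For reference, this is roughly how a standalone gRPC code server is declared in workspace.yaml; the host, port, and location name below are placeholders:
```yaml
# workspace.yaml -- load a code location from an already-running gRPC server.
load_from:
  - grpc_server:
      host: my-code-server.example.com   # placeholder
      port: 4266                         # placeholder
      location_name: my_grpc_code_server # placeholder
```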
d
Your choice of run launcher determines where the run happens - whether it’s a gRPC server that you run yourself or not doesn’t make a difference; the default run launcher executes the run in a subprocess on the gRPC server where the job was loaded from
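Spelled out explicitly in dagster.yaml, that default looks roughly like this (it is also what you get with no run_launcher config at all; the exact module path may differ slightly by Dagster version), which is why the run, and its DB writes, happen in the code server’s environment:
```yaml
# dagster.yaml -- the implicit default run launcher; runs execute as
# subprocesses of the gRPC server that loaded the job, so that
# environment needs database access.
run_launcher:
  module: dagster.core.launcher
  class: DefaultRunLauncher
```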
j
I see, I was expecting the fully remote ones (with their own Python version/env) to only communicate via gRPC, so the package versions would not matter as long as the gRPC protocol is ok
d
Where were you expecting the run would execute? It needs to be in an environment that has access to your code (and by design, system processes like dagit and the daemon never load your code). Or were you expecting the code to execute on the gRPC server but communicate events back over gRPC to a system process that then does the actual DB write?
j
I was expecting the grpc remote server to talk to the daemon and the daemon doing all the db interactions
exactly, the latter
d
Got it - definitely pros and cons there... having the daemon act as central command and control is a viable option (Airflow works that way). A big con, though, is that routing everything through the daemon makes it a single point of failure and requires it to scale up to handle all the runs happening at once. Neither approach is clearly better
m
Hi guys! We are running Dagster in K8s, with the daemon separated from the deployments in different namespaces to provide a proper level of credentials isolation, and it is a bit "strange" that we have to share the Dagster DB credentials with the deployments namespace. So this is strictly related to @Jose Estudillo’s observation, though I understand @daniel’s arguments.
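Concretely, the credentials end up shared because every environment that executes runs needs the same storage config in its dagster.yaml. A sketch of the env-var form (the variable names are placeholders):
```yaml
# dagster.yaml -- storage config required in each run-executing namespace
# today; the env var names below are placeholders.
storage:
  postgres:
    postgres_db:
      username:
        env: DAGSTER_PG_USERNAME
      password:
        env: DAGSTER_PG_PASSWORD
      hostname:
        env: DAGSTER_PG_HOST
      db_name:
        env: DAGSTER_PG_DB
      port: 5432
```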
d
Another option would be to have them write to dagit (or some other server that can autoscale with load) via graphql
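Dagster already ships a GraphQL client that goes through dagit rather than the database for submitting and querying runs, which hints at what that direction could look like. A sketch (host, port, and job/repo names are placeholders; run events do not flow this way today):
```python
# Sketch of the GraphQL-based direction: talk to dagit's GraphQL API
# instead of the database. Host/port and names are placeholders.
from dagster_graphql import DagsterGraphQLClient

client = DagsterGraphQLClient("dagit.example.com", port_number=3000)

# Run submission already works over GraphQL; only dagit needs database
# credentials on this path.
run_id = client.submit_job_execution(
    "my_job",
    repository_location_name="my_grpc_code_server",
    repository_name="my_repository",
)
status = client.get_run_status(run_id)
```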