# ask-community
a
Hey all, trying to use a .env file with a Docker deployment and gRPC, but my resources can't seem to pick up the environment variables from the .env file, which I have copied into the dagster_home folder for the dagit and daemon containers, as well as into the app folder for my code container. How should an environment file be used with a Docker deployment? I am using Docker Compose and could add the env variables into the compose file, but I want to supply this docker-compose file to other devs and don't want the env information in our repository
d
Hey Aaron - there's a `--inject-env-vars-from-instance` argument you can include in the `dagster api grpc` command that I believe will help here
a
So this will take the .env file from the app folder in the gRPC container and inject it into whatever uses the repository?
d
That will specifically cause it to be included in the gRPC server container. If you're using the DockerRunLauncher and the .env file is available in whatever image it's using, I would expect that to already work without the flag
the other thing is that a dagster.yaml will need to be available in your user code container for that flag to work
a
for the user code container, does dagster.yaml need to be in the /opt/dagster/app folder or the dagster_home folder?
d
It can be in any folder but the DAGSTER_HOME env var has to be set to that folder
(this is all pretty clunky, apologies for that - the medium-term plan is to move to a world where dagster is more directly managing these grpc servers and all this configuration is handled for you, like it is when you run dagit locally)
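For reference, a minimal sketch of how the user code image might wire this up - the base image and paths here are assumptions, not taken from the thread:
```
# Hypothetical user-code Dockerfile sketch: DAGSTER_HOME points at the
# folder holding dagster.yaml so --inject-env-vars-from-instance can
# find the instance config.
FROM python:3.10-slim
RUN pip install dagster dagster-postgres dagster-docker

ENV DAGSTER_HOME=/opt/dagster/dagster_home
RUN mkdir -p $DAGSTER_HOME
COPY dagster.yaml $DAGSTER_HOME/
```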
a
for the daemon and dagit images I set the DAGSTER_HOME env var, but not for my code image
No worries about the layout, I actually don't mind it. It makes sense to me, it's just a bit tricky to get it to behave. Our eventual plan is to use k8s, but for local development we are using docker compose
@daniel still running into the issue unless I put the env variables into docker compose. Here is my structure -
```
- code.Dockerfile
|- copy dagster.yaml .env $DAGSTER_HOME

- dagster.Dockerfile
|- copy dagster.yaml workspace.yaml .env $DAGSTER_HOME

- dagster_code_container --> code.Dockerfile image
- dagster_dagit_container --> dagster.Dockerfile image
- dagster_daemon_container --> dagster.Dockerfile image

- dagster.yaml
|- run_launcher --> DockerRunLauncher
```
d
Ah it may need the .env file to be in the working directory of the container rather than the dagster_home folder
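In other words, the working directory the server starts in is what matters. A sketch of the relevant end of the code Dockerfile, with a hypothetical repo.py module:
```
# Hypothetical sketch: copy .env into the working directory, since
# `dagster api grpc` loads the .env file from the folder it runs in.
WORKDIR /opt/dagster/app
COPY repo.py .env ./

CMD ["dagster", "api", "grpc", "-h", "0.0.0.0", "-p", "4000", "-f", "repo.py", "--inject-env-vars-from-instance"]
```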
a
so inside the python module?
a
That is where I have it. In the run_launcher, do I need to specify the env_vars names?
d
you shouldn't, no
is there anything in the logs for the user code container that might explain what it's doing? there should be some output about loading env vars
and you'll want to be on a relatively recent dagster version
a
currently pip installing the latest
let me take a look
when trying to get the environment variable with `os.getenv({var_name})`, it returns None from the python module and causes the code container to fail to start
d
there should be a line like this in the logs from the container if there's a .env file in the same folder that the `dagster api grpc` command was run in:
```
(dagster-3.8.9) dgibson@Daniels-MacBook-Pro dagster % dagster api grpc -m dagster_test.toys.repo -p 4000 --inject-env-vars-from-instance
2023-02-21 14:38:49 -0600 - dagster - INFO - Loaded environment variables from .env file: FOO
```
a
Now I'm getting some postgres env variable errors... need to look into it a bit more
d
right, that might require setting some env vars that you previously only needed to set in dagit / the daemon
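For context, a sketch of the kind of postgres storage config in dagster.yaml this refers to - the env var names here are assumptions, not taken from the thread:
```
# Hypothetical dagster.yaml storage section: each value is read from an
# env var, so the user code container now needs these set as well.
storage:
  postgres:
    postgres_db:
      username:
        env: DAGSTER_POSTGRES_USER
      password:
        env: DAGSTER_POSTGRES_PASSWORD
      hostname:
        env: DAGSTER_POSTGRES_HOST
      db_name:
        env: DAGSTER_POSTGRES_DB
      port: 5432
```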
a
Got it up and running!
Hey @daniel - thought you might be interested, I came up with a different solution. Since I am using docker compose, I just pass my `.env` file as the `env_file` key in docker compose and it works just as well.
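A minimal sketch of that compose approach, with a hypothetical service name:
```
# Hypothetical docker-compose.yml snippet: compose reads .env and
# injects its variables into the container's environment at startup.
services:
  dagster_code:
    build:
      context: .
      dockerfile: code.Dockerfile
    env_file:
      - .env
```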
d
that sounds good, although I think it would not work out of the box if you were to start using the DockerRunLauncher that spins up a new container for each run
(since it wouldn't know to pass through that env_file key to the new container)
a
I would need to set that up in the dagster.yaml `env_vars` to pass through, I guess
thanks for the heads up!
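That pass-through would look something like this in dagster.yaml - the variable and network names here are hypothetical:
```
# Hypothetical run_launcher config: env_vars lists the variable names
# to forward from the launching process into each run container.
run_launcher:
  module: dagster_docker
  class: DockerRunLauncher
  config:
    env_vars:
      - MY_API_KEY
    network: dagster_network
```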
@daniel would the same concept apply to the K8sRunLauncher?
d
the K8sRunLauncher is better at letting you configure environment variables and secrets in a single place and having them automatically passed through everywhere - instead of env files, you can use a k8s secret or configmap and configure it in the helm chart so that it's loaded whenever that code is loaded
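A sketch of what that can look like in the Dagster helm chart's values.yaml - the deployment, image, secret, and configmap names are all hypothetical:
```
# Hypothetical user code deployment: envSecrets / envConfigMaps are
# attached wherever this code location is loaded, including run pods.
dagster-user-deployments:
  enabled: true
  deployments:
    - name: my-code-location
      image:
        repository: myorg/my-dagster-code
        tag: latest
        pullPolicy: Always
      dagsterApiGrpcArgs:
        - "-m"
        - "my_module"
      port: 3030
      envSecrets:
        - name: my-dagster-secret
      envConfigMaps:
        - name: my-dagster-config
```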
a
I was looking at that. What I am doing is docker compose for local dev but k8s for prod - does that make sense? I will probably have to maintain multiple dagster.yaml and workspace.yaml files, and probably duplicate the images too
d
that could work, although I could also imagine running `dagster dev` either locally or in a single container for local dev
the rough edges you ran into with env files and weird args to `dagster api grpc` aren't really an issue when you run a single `dagster dev` command, and also aren't an issue in the helm chart
a
Interesting - I need to do local integration testing too, while sort of mocking how things would work when deployed. Our structure looks kind of like dagster -> push to kafka -> flink reads from kafka. So I was thinking compose might be a good choice for this, but I wonder if running `dagster dev` instead of 3 containers (gRPC, dagit, daemon) would be adequate. But then would I still need multiple dagster.yaml files?
@daniel For the K8sRunLauncher docs here, would the job_image refer to the dagster image, not the code repo? And would the dagster image have the dagster.yaml file to configure the code locations? https://docs.dagster.io/_apidocs/libraries/dagster-k8s#dagster_k8s.K8sRunLauncher
d
I'd recommend using the helm chart if you're running on k8s - there you wouldn't need to specify the run launcher config directly
you only need one dagster.yaml file if you're using dagster dev
a
Need to learn kubectl and helm now 😄
Is it possible to be able to use the same image in dev and prod?
when deploying with helm
d
I don't see why not
a
Thanks for the advice - time to do some reading 👀
Oh sorry, last question for now... 😅 https://docs.dagster.io/deployment/guides/kubernetes/deploying-with-helm#deployment-architecture -- for dagit and the daemon, dagster has its own dagster/dagster-k8s image. Any reason not to use that for prod deployments?
d
no reason that I can think of
a
ok, I think that makes sense. dagster dev, then dagster-k8s prod ✔️
Running into an issue using dagster dev - I start my container, expose 3000, and set ports. In the container I run `dagster dev`; it starts up fine and says `Serving dagit on http://127.0.0.1:3000 in process 16`. But no access from my desktop... I am assuming some docker compose issue
d
Could you possibly make a new post for this? Our support oncall can take a look
a
sure no problem - trying a few things first
Got it working
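The thread doesn't say what the fix was, but a common cause is dagit binding to 127.0.0.1 inside the container, which isn't reachable from the host. A sketch of that fix, assuming this was the issue here:
```
# Hypothetical docker-compose.yml snippet: bind dagit to 0.0.0.0 inside
# the container so Docker can forward the published port to the host.
services:
  dagster_dev:
    command: ["dagster", "dev", "-h", "0.0.0.0", "-p", "3000"]
    ports:
      - "3000:3000"
```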