# ask-community
g
Hello again, I am still struggling with the advanced deployment with Celery. I think I have a similar error to this person: https://github.com/dagster-io/dagster/discussions/4282 I have the following configuration:
imagePullSecrets:
  - name: prod-wot-registry-secrets

dagster-user-deployments:
  enabled: true
  imagePullSecrets:
    - name: prod-wot-registry-secrets
  deployments:
    - name: "wot-dagster-repository"
      image:
        repository: "<path-to-my-repo-on-the-registry>"
        tag: latest
        pullPolicy: Always
      dagsterApiGrpcArgs:
        - "-f"
        - "/app/repository/repository.py"
      port: 3030
      envSecrets:
        - name: prod-wot-storage-secrets

runLauncher:
  type: CeleryK8sRunLauncher

redis:
  enabled: true
  internal: true
  host: "wot-dagster-redis-master"
  usePassword: false

flower:
  enabled: true

dagsterDaemon:
  runCoordinator:
    enabled: true
and I get the following error:
dagster.core.errors.DagsterInvalidConfigError: Errors whilst loading configuration for {'instance_config_map': Field(, default=@, is_required=True), 'postgres_password_secret': Field(, default=@, is_required=False), 'dagster_home': Field(, default=/opt/dagster/dagster_home, is_required=False), 'load_incluster_config': Field(, default=True, is_required=False), 'kubeconfig_file': Field(, default=None, is_required=False), 'broker': Field(, default=@, is_required=False), 'backend': Field(, default=rpc://, is_required=False), 'include': Field(, default=@, is_required=False), 'config_source': Field(, default=@, is_required=False), 'retries': Field(, default={'enabled': {}}, is_required=False)}.
    Error 1: Post processing at path root:instance_config_map of original value {'env': 'DAGSTER_K8S_INSTANCE_CONFIG_MAP'} failed:
dagster.config.errors.PostProcessingError: You have attempted to fetch the environment variable "DAGSTER_K8S_INSTANCE_CONFIG_MAP" which is not set. In order for this execution to succeed it must be set in this environment.

Stack Trace:
  File "/usr/local/lib/python3.9/site-packages/dagster/config/post_process.py", line 77, in _post_process
    new_value = context.config_type.post_process(config_value)
  File "/usr/local/lib/python3.9/site-packages/dagster/config/source.py", line 42, in post_process
    return str(_ensure_env_variable(cfg))
  File "/usr/local/lib/python3.9/site-packages/dagster/config/source.py", line 16, in _ensure_env_variable
    raise PostProcessingError(

    Error 2: Post processing at path root:postgres_password_secret of original value {'env': 'DAGSTER_K8S_PG_PASSWORD_SECRET'} failed:
dagster.config.errors.PostProcessingError: You have attempted to fetch the environment variable "DAGSTER_K8S_PG_PASSWORD_SECRET" which is not set. In order for this execution to succeed it must be set in this environment.

Stack Trace:
  File "/usr/local/lib/python3.9/site-packages/dagster/config/post_process.py", line 77, in _post_process
    new_value = context.config_type.post_process(config_value)
  File "/usr/local/lib/python3.9/site-packages/dagster/config/source.py", line 42, in post_process
    return str(_ensure_env_variable(cfg))
  File "/usr/local/lib/python3.9/site-packages/dagster/config/source.py", line 16, in _ensure_env_variable
    raise PostProcessingError(
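For context on what this error means: config values of the form {'env': 'VAR_NAME'} are resolved from the process environment when the config is loaded, and an unset variable is a hard error. A minimal plain-Python sketch of that resolution step (not Dagster's actual code; names are illustrative):

```python
import os


class PostProcessingError(Exception):
    """Raised when an env-sourced config value cannot be resolved."""


def resolve_env_source(config_value):
    # Values shaped like {"env": "VAR_NAME"} are replaced with the
    # variable's value from the environment. An unset variable raises,
    # which is what surfaces as the DagsterInvalidConfigError above.
    if isinstance(config_value, dict) and "env" in config_value:
        var = config_value["env"]
        if var not in os.environ:
            raise PostProcessingError(
                f'You have attempted to fetch the environment variable "{var}" '
                "which is not set."
            )
        return os.environ[var]
    # Plain values pass through unchanged.
    return config_value


# Unset variable -> error; set variable -> resolved string.
os.environ.pop("DAGSTER_K8S_INSTANCE_CONFIG_MAP", None)
try:
    resolve_env_source({"env": "DAGSTER_K8S_INSTANCE_CONFIG_MAP"})
except PostProcessingError as e:
    print("failed:", e)

os.environ["DAGSTER_K8S_INSTANCE_CONFIG_MAP"] = "wot-dagster-instance"
print(resolve_env_source({"env": "DAGSTER_K8S_INSTANCE_CONFIG_MAP"}))
```

So the fix is not to change the config shape but to make sure the named variables exist in the environment of the process doing the loading.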
r
are these the logs from your job pod?
could you do a kubectl describe of your dagit pod and user code deployment?
for celery execution, these env variables are passed down to the job by configmap - if you check your execution config, you should supply the dagster-pipeline-env configmap, which has these variables (it's created for you in the helm chart)
e.g.
execution:
  celery-k8s:
    config:
      env_config_maps:
      - dagster-pipeline-env
      ... other config
b
Ah ok,
execution:
  celery-k8s:
    config:
      env_config_maps:
      - our-dagster-pipeline-env
      - some-config
      env_secrets:
      - some-secret
      - another-secret
resources:
  io_manager:
    config:
      adls2_file_system: iomanager
solids:
  load_data_from_blob:
    config:
      blob_name: some.csv
works
At least when I run it in the playground
Do we have to set those secrets every time we trigger a run then? I sort of feel like it should be possible to have them set in the helm values
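(For reference, depending on your chart version it may indeed be possible to attach these at the launcher level in the Helm values rather than per run. A hedged sketch only — the `celeryK8sRunLauncher.envConfigMaps`/`envSecrets` keys are an assumption here; verify the exact field names against `helm show values dagster/dagster` for your chart version:)

```yaml
# Sketch, not verified against your chart version.
runLauncher:
  type: CeleryK8sRunLauncher
  config:
    celeryK8sRunLauncher:
      envConfigMaps:
        - name: our-dagster-pipeline-env
      envSecrets:
        - name: some-secret
        - name: another-secret
```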
Thank you Rex! Super useful. It feels like there is maybe a small gap in the docs about this, would it be worth me raising this as an issue?
g
Thank you very much 🙂
r
yes please @Billie Thompson - thanks for surfacing