# deployment-kubernetes
w
hi, I'm migrating over from the old pipelines API to the new jobs API, and am running into issues configuring the celery-k8s job executor properly (I'm trying to spin up the job in a specific namespace, but it seems to be going to the default namespace) -- details in thread
the old run config would include
```yaml
execution:
  celery-k8s:
    config:
      image_pull_policy: "Always"
      env_config_maps:
        - "dagster-pipeline-env"
      env_secrets:
        - "dagster-pipeline-secrets"
      repo_location_name: "main"
      job_namespace: etl
      service_account_name: dagster-pipeline
```
now I'm using
```python
executor_def=celery_k8s_job_executor.configured({
    "job_namespace": "etl",
    "image_pull_policy": "Always",
    "env_config_maps": ["dagster-pipeline-env"],
    "env_secrets": ["dagster-pipeline-secrets"],
    "repo_location_name": "main",
    "service_account_name": "dagster-pipeline",
})
```
but Dagster still appears to be trying to create the jobs in the `default` namespace:
```
jobs.batch is forbidden: User "system:serviceaccount:etl:dagster" cannot create resource "jobs" in API group "batch" in the namespace "default"
```
d
Hi William - unfortunately the celery k8s executor doesn't work well when the executor definition uses `.configured` (with either the old or the new APIs). We're hoping to improve this, but in the meantime you'll want to set the config as the default config on the job instead of using `.configured`, so that it's still supplied through the launchpad run config rather than being baked into code.
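A minimal sketch of what that could look like, assuming the jobs-API run-config schema where executor config lives under `execution.config` when the executor is set via `executor_def` (the values are the ones from the thread; `my_etl_job` is a hypothetical job name):

```python
# Default run config carrying the celery-k8s executor settings, instead of
# baking them into the executor with .configured. Because this is ordinary
# run config, it still shows up (and can be overridden) in the launchpad.
default_config = {
    "execution": {
        "config": {
            "job_namespace": "etl",
            "image_pull_policy": "Always",
            "env_config_maps": ["dagster-pipeline-env"],
            "env_secrets": ["dagster-pipeline-secrets"],
            "repo_location_name": "main",
            "service_account_name": "dagster-pipeline",
        }
    }
}

# Hypothetical usage -- attach it as the job's default config:
#
# from dagster import job
# from dagster_celery_k8s import celery_k8s_job_executor
#
# @job(executor_def=celery_k8s_job_executor, config=default_config)
# def my_etl_job():
#     ...
```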
w
ah, got it -- thanks!