Charles Lariviere (06/30/2021, 2:58 PM):
k8s_job_executor, but my pipeline fails right after the following event (full logs in thread):

    ENGINE EVENT Starting execution with step handler K8sStepHandler

This pipeline works when using the default executor, and the k8s job is recorded as Completed. I’m not sure where to look, since I’m not really getting any stack trace. The execution is only defined as follows — do I need to pass more than that? Dagit did not raise any errors with this config, so I assumed it was correct, since I wanted to use the same as the User Deployment:

    execution:
      k8s:
Charles Lariviere (06/30/2021, 2:58 PM)

alex (06/30/2021, 3:02 PM)

johann (06/30/2021, 3:05 PM)

Charles Lariviere (06/30/2021, 3:10 PM)

Charles Lariviere (06/30/2021, 3:12 PM):
ErrorSource.FRAMEWORK_ERROR at the end, if that’s helpful

johann (06/30/2021, 3:39 PM):
dagster-job-<id>
Charles Lariviere (06/30/2021, 3:54 PM):
dagster-job-* — I might be looking in the wrong place though? I’m running kubectl get jobs, but everything starts with dagster-run-*.

Charles Lariviere (06/30/2021, 5:11 PM):
dagster-job-<id> errored out before it could get started?

johann (06/30/2021, 5:24 PM)

johann (07/01/2021, 3:33 PM):
… button on the runs page
johann (07/01/2021, 7:55 PM):
> I’m not finding any pods that start with dagster-job-*
Could you also check for any jobs other than the dagster-run-…? It’s possible they’re hitting an error creating the pod. Some investigation today revealed a bug that might be causing us to swallow an error here
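One generic way to do that check (a sketch; the namespace is a placeholder, not something given in the thread):

```shell
# List all jobs in the deployment's namespace, not just the dagster-run-* ones
kubectl get jobs -n <namespace>

# Recent cluster events often surface pod-creation failures
# that never make it into the run logs
kubectl get events -n <namespace> --sort-by=.lastTimestamp | tail -n 20
```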
Charles Lariviere (07/02/2021, 1:25 PM)

Charles Lariviere (07/02/2021, 1:35 PM):
dagster-job! I’ve attached the logs. It looks like it’s missing an environment variable (i.e. DAGSTER_K8S_INSTANCE_CONFIG_MAP) — is this something we should set in the executor config?

johann (07/02/2021, 2:47 PM):
env_config_maps config and pointing to either your own config map, or the one we create: <dagster name>-user-env
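Put together, johann’s suggestion would look roughly like this in the pipeline’s run config (a sketch; <dagster name> is a placeholder for the Helm release name, as in the message above):

```yaml
execution:
  k8s:
    config:
      env_config_maps:
        - <dagster name>-user-env  # or the name of your own ConfigMap
```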
johann (07/02/2021, 2:49 PM)

johann (07/02/2021, 2:50 PM)

Charles Lariviere (07/02/2021, 6:56 PM):
dagster-yaml is defined by Dagster’s Helm chart — here’s our values.yaml for runLauncher:
    runLauncher:
      type: K8sRunLauncher
      config:
        kubeconfigFile: ~
        envConfigMaps: []
        envSecrets:
          - name: <secrets>
DAGSTER_K8S_INSTANCE_CONFIG_MAP is not something we added to our config either — I believe that might be coming from Dagster’s Helm chart as well? We’re also not defining our own config map.
I tried the following config, but now some solids work while others fail without an error message (similar to before).
    execution:
      k8s:
        config:
          env_config_maps:
            - dagster-pipeline-env
          env_secrets:
            - <secrets>
The logs for the job don’t show anything suspicious, but kubectl describe shows this as the last event:

    Warning  BackoffLimitExceeded  80s  job-controller  Job has reached the specified backoff limit
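When a Job dies with BackoffLimitExceeded, the useful detail is usually on the pods the Job spawned rather than on the Job itself. A generic way to dig in (job name, pod name, and namespace are placeholders, not values from the thread):

```shell
# Pods created by the failing job (the job controller labels them job-name=<job>)
kubectl get pods -n <namespace> -l job-name=dagster-job-<id>

# Container-level events (missing env, image pull errors, OOM kills) show up here
kubectl describe pod <pod-name> -n <namespace>

# Logs from the last failed container attempt, if there was one
kubectl logs <pod-name> -n <namespace> --previous
```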
Charles Lariviere (07/02/2021, 7:26 PM)