# deployment-kubernetes
r
Hey, quick question: I just deployed a FastAPI app as a separate k8s deployment that will be exposed to the outside and used to transform non-GraphQL requests into GraphQL requests for Dagit. However, I'm having issues calling Dagit from this new deployment; all requests to execute pipelines return the following:
```json
{
  "data": {
    "launchPipelineExecution": {
      "__typename": "PipelineNotFoundError"
    }
  }
}
```
However, this does not happen when using docker-compose. For docker-compose I'm just using the simple approach where Dagster and Dagit are in the same container. My pipelines repository location is defined as `pipelines_repository`, and my workspace.yaml looks as follows:
```yaml
load_from:
  - python_file:
      relative_path: app/pipelines_repository.py
      location_name: pipelines_repository
```
I believe the problem is the `repositoryLocationName` in my GraphQL request, which somehow ends up different in the Kubernetes deployment once Dagit, the Dagster daemon, and the user code are separated. Do I have to prepend anything to `repositoryLocationName` in my GraphQL request to tell Dagit where to look?

Edit: These are the parameters passed to my GraphQL query:
```json
{
  "repositoryLocationName": "pipelines_repository",
  "repositoryName": "my_repo",
  "pipelineName": "my_pipeline",
  "runConfigData": {
    "solids": {
      "my_solid": {
        "inputs": {
          "company_name": "my_company"
        }
      }
    },
    "execution": {
      "multiprocess": {
        "config": {
          "max_concurrent": 6
        }
      }
    },
    "storage": {
      "filesystem": {}
    }
  },
  "mode": "default"
}
```
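For context, here's a rough sketch of how the transformer service might post these variables to Dagit. The `DAGIT_URL`, the exact mutation text, and the `ExecutionParams` shape are assumptions based on older Dagster GraphQL versions, not something taken from this thread:

```python
import json
from urllib import request

# Hypothetical in-cluster address for the Dagit service.
DAGIT_URL = "http://dagit:3000/graphql"

# Assumed mutation shape: launchPipelineExecution takes an ExecutionParams
# object whose selector carries the repository location/repo/pipeline names.
MUTATION = """
mutation LaunchPipelineExecution($executionParams: ExecutionParams!) {
  launchPipelineExecution(executionParams: $executionParams) {
    __typename
  }
}
"""

def build_payload(location_name, repo_name, pipeline_name, run_config, mode="default"):
    """Assemble the GraphQL request body from the pieces shown above."""
    return {
        "query": MUTATION,
        "variables": {
            "executionParams": {
                "selector": {
                    "repositoryLocationName": location_name,
                    "repositoryName": repo_name,
                    "pipelineName": pipeline_name,
                },
                "runConfigData": run_config,
                "mode": mode,
            }
        },
    }

def launch(payload):
    """POST the payload to Dagit and return the decoded GraphQL response."""
    req = request.Request(
        DAGIT_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload(
    "pipelines_repository", "my_repo", "my_pipeline",
    {"solids": {"my_solid": {"inputs": {"company_name": "my_company"}}}},
)
```

A `PipelineNotFoundError` here means Dagit matched no pipeline for that exact selector triple, which is why the location name has to match what Dagit actually loaded.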
I managed to fix it; it was exactly what I thought was happening. When using the Dagster helm chart deployment, the `repositoryLocationName` will be the name specified in the userDeployments section of values.yaml, so use that name instead of "pipelines_repository".
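For anyone hitting the same thing, the relevant section of the chart's values.yaml looks roughly like this (the deployment name, image, and port are placeholders, not from my actual setup):

```yaml
userDeployments:        # "dagster-user-deployments" in newer chart versions
  enabled: true
  deployments:
    - name: "user-code"             # <- this becomes the repositoryLocationName
      image:
        repository: "my-registry/my-user-code"
        tag: latest
      dagsterApiGrpcArgs:
        - "--python-file"
        - "/app/pipelines_repository.py"
      port: 3030
```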
d
Glad you were able to sort it out - you’re right that the helm chart constructs a workspace.yaml file for you if you’re using user deployments, so it wouldn’t know to look at the one that you created manually.