# deployment-ecs
t
Hi. I'm trying to run an amended version of the deploy ecs example by running dagit locally but running the user code on ECS, and I'm hitting the error message below. I'd greatly appreciate any help with what could be going wrong. My workflow is as follows:
1. I can run the unamended deploy ecs example.
2. I can also run the user code (repo.py) as a local gRPC server and run dagit locally (with the grpc_server entry in workspace.yaml).
3. If I add the EcsRunLauncher to dagster.yaml (with the task definition created in step 1), the following error occurs when I try to run the job in the user code.
Error message:
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'

  File "/home/terry/.pyenv/versions/3.7.11/envs/etl_new_env/lib/python3.7/site-packages/dagster_graphql/implementation/utils.py", line 101, in _fn
    return fn(*args, **kwargs)
  File "/home/terry/.pyenv/versions/3.7.11/envs/etl_new_env/lib/python3.7/site-packages/dagster_graphql/implementation/execution/launch_execution.py", line 31, in launch_pipeline_execution
    return _launch_pipeline_execution(graphene_info, execution_params)
  File "/home/terry/.pyenv/versions/3.7.11/envs/etl_new_env/lib/python3.7/site-packages/dagster_graphql/implementation/execution/launch_execution.py", line 67, in _launch_pipeline_execution
    run = do_launch(graphene_info, execution_params, is_reexecuted)
  File "/home/terry/.pyenv/versions/3.7.11/envs/etl_new_env/lib/python3.7/site-packages/dagster_graphql/implementation/execution/launch_execution.py", line 53, in do_launch
    workspace=graphene_info.context,
  File "/home/terry/.pyenv/versions/3.7.11/envs/etl_new_env/lib/python3.7/site-packages/dagster/_core/instance/__init__.py", line 1858, in submit_run
    SubmitRunContext(run, workspace=workspace)
  File "/home/terry/.pyenv/versions/3.7.11/envs/etl_new_env/lib/python3.7/site-packages/dagster/_core/run_coordinator/default_run_coordinator.py", line 34, in submit_run
    self._instance.launch_run(pipeline_run.run_id, context.workspace)
  File "/home/terry/.pyenv/versions/3.7.11/envs/etl_new_env/lib/python3.7/site-packages/dagster/_core/instance/__init__.py", line 1910, in launch_run
    self.run_launcher.launch_run(LaunchRunContext(pipeline_run=run, workspace=workspace))
  File "/home/terry/.pyenv/versions/3.7.11/envs/etl_new_env/lib/python3.7/site-packages/dagster_aws/ecs/launcher.py", line 311, in launch_run
    run_task_kwargs = self._run_task_kwargs(run, image, container_context)
  File "/home/terry/.pyenv/versions/3.7.11/envs/etl_new_env/lib/python3.7/site-packages/dagster_aws/ecs/launcher.py", line 515, in _run_task_kwargs
    current_task_metadata = get_current_ecs_task_metadata()
  File "/home/terry/.pyenv/versions/3.7.11/envs/etl_new_env/lib/python3.7/site-packages/dagster_aws/ecs/tasks.py", line 240, in get_current_ecs_task_metadata
    task_metadata_uri = _container_metadata_uri() + "/task"
dagster.yaml:
run_launcher:
  module: "dagster_aws.ecs"
  class: "EcsRunLauncher"
  config:
    task_definition: "arn:aws:ecs:us-west-1:123456789012:task-definition/dagster-task:1"
    container_name: "run"
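For reference, the workspace.yaml from step 2 points dagit at the locally running gRPC server roughly like this (host, port, and location name are placeholders):
workspace.yaml:
load_from:
  - grpc_server:
      host: localhost
      port: 4266
      location_name: "example_repo"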
d
Hi Terry - there's a "use_current_ecs_task_config" field on the run launcher that you'll want to set to False in order for this to work: https://docs.dagster.io/_apidocs/libraries/dagster-aws#dagster_aws.ecs.EcsRunLauncher. You may also need to set the run_task_kwargs field to include some settings that are normally pulled from the calling task (like the cluster name).
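For example, a rough sketch of what that section of dagster.yaml could look like - the cluster name, subnets, and security group are placeholders, and run_task_kwargs is passed through to the ECS RunTask API, so the exact keys depend on your networking setup:
run_launcher:
  module: "dagster_aws.ecs"
  class: "EcsRunLauncher"
  config:
    task_definition: "arn:aws:ecs:us-west-1:123456789012:task-definition/dagster-task:1"
    container_name: "run"
    # dagit isn't running inside ECS, so don't try to read the calling task's metadata
    use_current_ecs_task_config: false
    # settings normally inherited from the calling task have to be supplied explicitly
    run_task_kwargs:
      cluster: "my-dagster-cluster"
      networkConfiguration:
        awsvpcConfiguration:
          subnets:
            - "subnet-12345"
          securityGroups:
            - "sg-12345"
          assignPublicIp: "ENABLED"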
t
Thanks. As you advised, I updated this and added networking configuration settings for the cluster. Dagit will now launch the run via the EcsRunLauncher, but it never completes or raises an error in dagit; it just keeps awaiting a response. The task starts in ECS but fails (per CloudWatch) with:
dagster._check.CheckError: Expected non-None value: Pipeline run with id 'ffd05cb7-eb17-4e7a-9a50-3adcd0c1c616' not found for run execution.
d
I think this will work once you are using a DB like postgres that isn't restricted to the local filesystem
👍 1
https://docs.dagster.io/deployment/dagster-instance#dagster-storage - the "postgres storage" tab here has an example dagster.yaml.
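For reference, a minimal sketch along those lines (assuming a recent Dagster version where the unified storage key is available; hostname, credentials, and database name are placeholders, and the password is pulled from an environment variable):
storage:
  postgres:
    postgres_db:
      username: "dagster"
      password:
        env: DAGSTER_PG_PASSWORD
      hostname: "my-postgres-host.us-west-1.rds.amazonaws.com"
      db_name: "dagster"
      port: 5432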
t
This solved my issue, thanks. As a follow-up question, is it possible to use different RunLaunchers/task definitions for different code locations? The use case is (a) running some jobs locally and some jobs on ECS, and (b) using different ECS clusters for different jobs.
d
There isn't currently (other than writing your own, which is a pain). Is that something you wouldn't mind filing a GitHub issue for?
Different task definitions we can actually support, just not different run launchers.
t
Sure. Are you saying that it's currently possible to use different task definitions (if so, how?) and that I should file an issue for the different RunLaunchers?
d
this thread shows how to configure the gRPC server so that it has a particular task definition for any run that uses it: https://dagster.slack.com/archives/C014UDS8LAV/p1668594870875999?thread_ts=1667921690.467969&cid=C014UDS8LAV
(it's a bit buried/experimental currently)
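For reference, one way this can look is to pass a per-location ECS container context when starting the code server - treat the --container-context flag and the "ecs" keys below as assumptions to verify against the linked thread and your dagster-aws version (the task definition ARN is a placeholder):
# start the code server with its own ECS container context,
# so runs from this code location use their own task definition
dagster api grpc \
  --python-file repo.py \
  --host 0.0.0.0 \
  --port 4266 \
  --container-context '{"ecs": {"task_definition_arn": "arn:aws:ecs:us-west-1:123456789012:task-definition/other-task:1"}}'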
feature request issues can be filed from here: https://github.com/dagster-io/dagster/issues
t
Thanks, you're a star
s
Facing the same issue. Any fix for this?
dagster._check.CheckError: Expected non-None value: Pipeline run with id 'ffd05cb7-eb17-4e7a-9a50-3adcd0c1c616' not found for run execution.
d
Assuming we are talking about ECS, the same fix I recommended here would apply: https://dagster.slack.com/archives/C014UDS8LAV/p1673536938946569?thread_ts=1673476412.189179&channel=C014UDS8LAV&message_ts=1673536938.946569 - if that's not it, could you possibly make a new post with this question?
s
I am facing it on k8s_job_executor with dagster.execute_job
d
Got it - this is the ecs channel so would definitely recommend a separate post
s
Okay