#deployment-ecs

Fraser Marlow

08/17/2023, 3:53 AM
I am reposting here for @PHƯƠNG LÊ THỊ NGỌC:
Hi everyone,
I am facing some issues with the AWS policies that grant permissions to the user deploying the Dagster project to ECS.
Previously, I assigned full-access policies to test Dagster’s pipeline. Once the pipeline was stable, I switched to restricted policies. However, after I did this, AWS no longer seems to create a run_id or launch a new task.
Here are the dagster.yaml and docker-compose.yaml files that I used.
I have attached the error message in the image below.
Please let me know if I have missed any policies required to perform these actions.
dagster.yaml
scheduler:
  module: dagster.core.scheduler
  class: DagsterDaemonScheduler
run_coordinator:
  module: dagster.core.run_coordinator
  class: QueuedRunCoordinator
run_launcher:
  module: dagster_aws.ecs
  class: EcsRunLauncher
  config:
    include_sidecars: true
    secrets_tag: ""
    run_resources:
      cpu: "256"
      memory: "1024"
run_storage:
  module: dagster_postgres.run_storage
  class: PostgresRunStorage
  config:
    postgres_db:
      hostname: postgresql
      username:
        env: DAGSTER_POSTGRES_USER
      password:
        env: DAGSTER_POSTGRES_PASSWORD
      db_name:
        env: DAGSTER_POSTGRES_DB
      port: 5432
schedule_storage:
  module: dagster_postgres.schedule_storage
  class: PostgresScheduleStorage
  config:
    postgres_db:
      hostname: postgresql
      username:
        env: DAGSTER_POSTGRES_USER
      password:
        env: DAGSTER_POSTGRES_PASSWORD
      db_name:
        env: DAGSTER_POSTGRES_DB
      port: 5432
event_log_storage:
  module: dagster_postgres.event_log
  class: PostgresEventLogStorage
  config:
    postgres_db:
      hostname: postgresql
      username:
        env: DAGSTER_POSTGRES_USER
      password:
        env: DAGSTER_POSTGRES_PASSWORD
      db_name:
        env: DAGSTER_POSTGRES_DB
      port: 5432
docker-compose.yaml
---
version: "3.8"
services:
  # This service runs dagit. It has no user code; instead it loads its
  # jobs from the gRPC server running in the user_code service.
  # Because our instance uses the QueuedRunCoordinator, any runs submitted from
  # dagit will be put on a queue and later dequeued and launched by
  # the dagster-daemon service.
  dagit:
    platform: linux/amd64
    build:
      context: .
      dockerfile: ./Dockerfile
      target: dagit
    image: "$REGISTRY_URL/deploy_ecs/dagit"
    container_name: dagit
    command: "dagit -h 0.0.0.0 -p 3000 -w workspace.yaml"
    ports:
      - "3000:3000"
    environment:
      DAGSTER_POSTGRES_DB: "postgres"
      DAGSTER_POSTGRES_HOSTNAME: "postgresql"
      DAGSTER_POSTGRES_PASSWORD: "postgres_password"
      DAGSTER_POSTGRES_USER: "postgres_user"
    restart: on-failure
    depends_on:
      - postgresql
      - user_code
    x-aws-role:
      Statement:
        - Effect: "Allow"
          Action:
            - "ecs:DescribeTasks"
            - "ecs:StopTask"
          Resource:
            - "*"
        - Effect: "Allow"
          Action:
            - "iam:PassRole"
          Resource:
            - "*"
          Condition:
            StringLike:
              iam:PassedToService: "ecs-tasks.amazonaws.com"
  # This service runs the dagster-daemon process, which is responsible for
  # taking runs off of the queue and launching them, as well as creating
  # runs from schedules or sensors.
  daemon:
    platform: linux/amd64
    build:
      context: .
      dockerfile: ./Dockerfile
      target: dagster
    image: "$REGISTRY_URL/deploy_ecs/daemon"
    container_name: daemon
    command: "dagster-daemon run"
    environment:
      DAGSTER_POSTGRES_HOSTNAME: "postgresql"
      DAGSTER_POSTGRES_USER: "postgres_user"
      DAGSTER_POSTGRES_PASSWORD: "postgres_password"
      DAGSTER_POSTGRES_DB: "postgres"
    depends_on:
      - postgresql
      - user_code
    x-aws-role:
      Statement:
        - Effect: "Allow"
          Action:
            - "ec2:DescribeNetworkInterfaces"
            - "ecs:DescribeTaskDefinition"
            - "ecs:DescribeTasks"
            - "ecs:ListAccountSettings"
            - "ecs:RegisterTaskDefinition"
            - "ecs:RunTask"
            - "ecs:TagResource"
            - "secretsmanager:DescribeSecret"
            - "secretsmanager:ListSecrets"
            - "secretsmanager:GetSecretValue"
          Resource:
            - "*"
        - Effect: "Allow"
          Action:
            - "iam:PassRole"
          Resource:
            - "*"
          Condition:
            StringLike:
              iam:PassedToService: "ecs-tasks.amazonaws.com"
  # This service runs a gRPC server that serves information about your
  # repository. By setting DAGSTER_CURRENT_IMAGE to its own image, we tell the
  # run launcher to use this same image when launching runs in a new container.
  # Multiple containers like this can be deployed separately - each needs to
  # run on its own port and have its own entry in the workspace.yaml file.
  user_code:
    platform: linux/amd64
    build:
      context: .
      dockerfile: ./Dockerfile
      target: user_code
    image: "$REGISTRY_URL/deploy_ecs/user_code"
    container_name: user_code
    command: "dagster api grpc -h 0.0.0.0 -p 4000 -m dagster_dev_project"
    environment:
      DAGSTER_POSTGRES_DB: "postgres"
      DAGSTER_POSTGRES_HOSTNAME: "postgresql"
      DAGSTER_POSTGRES_PASSWORD: "postgres_password"
      DAGSTER_POSTGRES_USER: "postgres_user"
      DAGSTER_CURRENT_IMAGE: "$REGISTRY_URL/deploy_ecs/user_code"
  # This service runs the postgres DB used by dagster for run storage, schedule
  # storage, and event log storage. In a real deployment, you might choose to
  # remove this in favor of an RDS instance.
  postgresql:
    image: postgres:11
    container_name: postgresql
    environment:
      POSTGRES_DB: "postgres"
      POSTGRES_PASSWORD: "postgres_password"
      POSTGRES_USER: "postgres_user"
    restart: on-failure
These are the policies that I have assigned.
ecs:CreateCluster
ecs:CreateService
ecs:DeleteCluster
ecs:DeleteService
ecs:DeregisterTaskDefinition
ecs:DescribeClusters
ecs:DescribeServices
ecs:DescribeTasks
ecs:ListAccountSettings
ecs:ListTasks
ecs:RegisterTaskDefinition
ecs:UpdateService
elasticloadbalancing:*
iam:AttachRolePolicy
iam:CreateRole
iam:DeleteRole
iam:DetachRolePolicy
iam:PassRole
logs:CreateLogGroup
logs:DeleteLogGroup
logs:DescribeLogGroups
logs:FilterLogEvents
route53:CreateHostedZone
route53:DeleteHostedZone
route53:GetHealthCheck
route53:GetHostedZone
route53:ListHostedZonesByName
servicediscovery:*
iam:DeleteRolePolicy
iam:PutRolePolicy
iam:GetRolePolicy
s3:*
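For comparison, the actions that the compose file's x-aws-role blocks request (which the EcsRunLauncher and daemon use at run time) could be collected into a standalone IAM policy along these lines. This is a sketch assembled from the statements above, not an official Dagster policy; resource ARNs are left as "*" only to mirror the original. Note that several of these actions (ecs:RunTask, ecs:StopTask, ecs:TagResource, ecs:DescribeTaskDefinition, ec2:DescribeNetworkInterfaces, and the secretsmanager actions) do not appear in the assigned-policies list above, which may be worth checking first:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeNetworkInterfaces",
        "ecs:DescribeTaskDefinition",
        "ecs:DescribeTasks",
        "ecs:ListAccountSettings",
        "ecs:RegisterTaskDefinition",
        "ecs:RunTask",
        "ecs:StopTask",
        "ecs:TagResource",
        "secretsmanager:DescribeSecret",
        "secretsmanager:ListSecrets",
        "secretsmanager:GetSecretValue"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "*",
      "Condition": {
        "StringLike": { "iam:PassedToService": "ecs-tasks.amazonaws.com" }
      }
    }
  ]
}
```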