Any ideas why this sensor is throwing this error
# ask-community
j
Any ideas why this sensor is throwing this error
Copy code
dagster._check.CheckError: Expected non-None value: None
  File "/usr/local/lib/python3.9/site-packages/dagster/_core/errors.py", line 206, in user_code_error_boundary
    yield
  File "/usr/local/lib/python3.9/site-packages/dagster/_grpc/impl.py", line 328, in get_external_sensor_execution
    return sensor_def.evaluate_tick(sensor_context)
  File "/usr/local/lib/python3.9/site-packages/dagster/_core/definitions/sensor_definition.py", line 428, in evaluate_tick
    result = list(self._evaluation_fn(context))
  File "/usr/local/lib/python3.9/site-packages/dagster/_core/definitions/sensor_definition.py", line 598, in _wrapped_fn
    for item in result:
  File "/usr/local/lib/python3.9/site-packages/dagster/_core/definitions/run_status_sensor_definition.py", line 589, in _wrapped_fn
    external_repository_origin = check.not_none(
  File "/usr/local/lib/python3.9/site-packages/dagster/_check/__init__.py", line 1081, in not_none
    raise CheckError(f"Expected non-None value: {additional_message}")
c
mind posting code
j
This is the full error message. It's hard to post the code without providing too much context and breaking things apart. I'll try to reproduce it with a simpler use case, but before I do, here's the TL;DR: I have a job "source_job" triggering a couple of Airbyte syncs, and this job has a couple of run-status sensors (STARTED, SUCCESS, etc.) like the one below:
Copy code
@run_status_sensor(
    run_status=DagsterRunStatus.STARTED,
    request_job=source_status_job,
    monitored_jobs=[source_job],
    default_status=DefaultSensorStatus.RUNNING,
    description="Sensor listening on a source job starting its execution",
)
def source_job_on_started_sensor(context):
    pass
This was all working fine but recently I added a new op to my job. This op basically does the following:
Copy code
@op(
    required_resource_keys={"dbt_projects_resource"},
    ins={"start_after": In(Nothing)},
    config_schema={
        "project_name": Field(str),
        "assets_keys": Field(list),
        "type_selection": Field(str, default_value=""),
    },
    tags={"kind": "seek-connect"},
)
def execute_asset_dbt_deps_job_op(context: OpExecutionContext):
    """
    Execute according to the provided asset keys selection

    Args:
        1. project_name: str
        2. assets_keys: [
            ["netflix", "src_reviews"],
            ["netflix", "dim_hosts_cleansed"]
        ]
        3. type_selection: AssetTypeSelection
    """
    from mydagster_project.repository import my_repository

    project_name = context.op_config["project_name"]
    job = my_repository.get_job(project_name)
    job.execute_in_process(
        instance=context.instance,
    )
Long story short, I am executing an asset job which materializes some dbt asset keys. Both jobs execute as expected, but the SUCCESS and STARTED sensors are failing...
Sorry, I took some time to actually try to understand what was going on with this dagster error but was unable to figure it out
This is where the error is being thrown in dagster. What is that
pipeline_run.external_pipeline_origin
?
Ok it looks like it got solved by adding monitor_all_repositories=True, but I keep wondering why?
a
it got solved by adding monitor_all_repositories=True
thanks for that, man!!! such an annoying bug, @claire is it known or should somebody create a GH issue for it?
in my case I started to get this error when I tried to specify
monitored_jobs
adding
monitor_all_repositories=False
fixed it for a couple of evaluations but then it started failing again...
very confusing behavior
b
Faced the same issue
monitor_all_repositories
works, but it seems to trigger on all jobs ??
j
I had to filter the job to avoid it within the sensor function