I have a very simple pipeline, adapted from the `d...
# announcements
I have a very simple pipeline, adapted from the deploy_docker example, that fails (see: https://github.com/jeremyadamsfisher/dagster-sensor-min-failing)
from dagster import pipeline, repository, schedule, solid, sensor, SkipReason

@solid
def hello(_):
    return 1

@pipeline
def my_pipeline():
    hello()

@schedule(cron_schedule="* * * * *", pipeline_name="my_pipeline", execution_timezone="US/Central")
def my_schedule(_context):
    return {}

@sensor(pipeline_name="my_pipeline")
def always_skips(_context):
    yield SkipReason("I always skip!")

@repository
def deploy_docker_repository():
    return [my_pipeline, my_schedule, always_skips]
This is the traceback:
docker_example_daemon        | 2021-04-03 19:36:56 - SensorDaemon - INFO - Checking for new runs for sensor: always_skips
docker_example_daemon        | 2021-04-03 19:36:56 - SensorDaemon - ERROR - Error launching sensor run: json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
docker_example_daemon        | 
docker_example_daemon        | Stack Trace:
docker_example_daemon        |   File "/usr/local/lib/python3.7/site-packages/dagster/daemon/sensor.py", line 230, in execute_sensor_iteration
docker_example_daemon        |     sensor_debug_crash_flags,
docker_example_daemon        |   File "/usr/local/lib/python3.7/site-packages/dagster/daemon/sensor.py", line 264, in _evaluate_sensor
docker_example_daemon        |     job_state.job_specific_data.last_run_key if job_state.job_specific_data else None,
docker_example_daemon        |   File "/usr/local/lib/python3.7/site-packages/dagster/core/host_representation/repository_location.py", line 448, in get_external_sensor_execution_data
docker_example_daemon        |     last_run_key,
docker_example_daemon        |   File "/usr/local/lib/python3.7/site-packages/dagster/api/snapshot_sensor.py", line 41, in sync_get_external_sensor_execution_data_grpc
docker_example_daemon        |     last_run_key=last_run_key,
docker_example_daemon        |   File "/usr/local/lib/python3.7/site-packages/dagster/grpc/client.py", line 291, in external_sensor_execution
docker_example_daemon        |     res.serialized_external_sensor_execution_data_or_external_sensor_execution_error
docker_example_daemon        |   File "/usr/local/lib/python3.7/site-packages/dagster/serdes/serdes.py", line 236, in deserialize_json_to_dagster_namedtuple
docker_example_daemon        |     check.str_param(json_str, "json_str"), whitelist_map=_WHITELIST_MAP
docker_example_daemon        |   File "/usr/local/lib/python3.7/site-packages/dagster/serdes/serdes.py", line 246, in _deserialize_json_to_dagster_namedtuple
docker_example_daemon        |     return _unpack_value(seven.json.loads(json_str), whitelist_map=whitelist_map)
docker_example_daemon        |   File "/usr/local/lib/python3.7/json/__init__.py", line 361, in loads
docker_example_daemon        |     return cls(**kw).decode(s)
docker_example_daemon        |   File "/usr/local/lib/python3.7/json/decoder.py", line 337, in decode
docker_example_daemon        |     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
docker_example_daemon        |   File "/usr/local/lib/python3.7/json/decoder.py", line 355, in raw_decode
docker_example_daemon        |     raise JSONDecodeError("Expecting value", s, err.value) from None
Am I missing something or should I file a github issue?
hi - my best guess here is that your two containers could be using different versions of dagster and you're running into a version mismatch issue. If you built Dockerfile_pipelines after our most recent release without rebuilding Dockerfile_dagster, I could imagine seeing this error. Could you try rebuilding both containers, skipping the docker cache?
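(Something like the following forces both images to rebuild without cached layers; this assumes the example's docker-compose setup, so adjust to your own service names:)

```shell
# Rebuild every image in docker-compose.yml, ignoring the layer cache
docker-compose build --no-cache

# Recreate the containers from the freshly built images
docker-compose up -d --force-recreate
```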
(If that's right and that's the problem we should improve the error message here)
Issue persists after rebuilding both from scratch
However, downgrading to 0.11.2 seems to fix it
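(For anyone hitting the same thing: one way to rule out version skew is to pin the same dagster release in both Dockerfiles. The exact package list below is illustrative and depends on what each image installs:)

```dockerfile
# In both Dockerfile_dagster and Dockerfile_pipelines, pin identical versions
RUN pip install dagster==0.11.2 dagster-graphql==0.11.2 dagster-docker==0.11.2
```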
Huh, strange - I just tried checking out your code and I'm not running into the problem
I'll play around with it some more to see if I can get a reliably failing example
Then I'll file an issue
I can reproduce that error by forcing Dockerfile_dagster to 0.11.2 and Dockerfile_pipelines to 0.11.3 though
one other small thing (not related to this issue): the example uses a .env file to set COMPOSE_PROJECT_NAME. You may want to copy that over as well, or be sure to set COMPOSE_PROJECT_NAME some other way, since we reference that variable in the docker compose file to set the image to use when launching runs: https://sourcegraph.com/github.com/dagster-io/dagster/-/blob/examples/deploy_docker/docker-compose.yml#L30
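(A minimal .env along those lines; the project name shown here is an assumption, so use whatever value matches your compose setup:)

```
COMPOSE_PROJECT_NAME=deploy_docker
```

Since docker-compose interpolates `${COMPOSE_PROJECT_NAME}` when resolving the image name, leaving it unset means the daemon launches runs against an image that doesn't exist, which can surface as opaque downstream errors like the one above.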
Yes, this seems to be the issue
Ok, instead of a bug report, I'll put in a feature request for a clearer error message
Thanks for your help