# ask-community
n
I've been running into a bug I haven't seen before when trying to start a backfill of a job with a `@static_partitioned_config`. Haven't seen any reference to it in the issues or in Slack, so wondering if someone can point me in the right direction. Thread >>
My config and job look like:
```python
@static_partitioned_config(
    partition_keys=SKYMAPPER_OBJECTS_FILE_PARTITION.get_partition_keys()
)
def skymapper_object_chunk_config(partition_key: str):
    return {
        "ops": {
            "bin_skymapper_objects_by_healpix": {
                "config": {
                    "skymapper_object_chunk_partition_key": partition_key,
                    "gcs_bucket": "adam-dataset-dev",
                    "gcs_prefix": f"{get_current_namespace()}/dagster",
                }
            }
        },
        "resource_defs": {
            "gcs": gcs_resource
        }
    }


@job(
    name="chunk_skymapper_obj_to_hp_job",
    config=skymapper_object_chunk_config
)
def chunk_skymapper_obj_to_hp_job():
    bin_skymapper_objects_by_healpix()
```
The error happens when starting the backfill, which fails with:
```
dagster._core.errors.PartitionExecutionError: Error occurred during the partition config and tag generation for 'obj_000' in partitioned config on job 'chunk_skymapper_obj_to_hp_job'
  File "/usr/local/lib/python3.10/dist-packages/dagster/_grpc/impl.py", line 536, in get_partition_set_execution_param_data
    with user_code_error_boundary(
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.10/dist-packages/dagster/_core/errors.py", line 211, in user_code_error_boundary
    raise error_cls(
The above exception was caused by the following exception:
TypeError: Shape.__new__() missing 1 required positional argument: 'fields'
  File "/usr/local/lib/python3.10/dist-packages/dagster/_core/errors.py", line 204, in user_code_error_boundary
    yield
  File "/usr/local/lib/python3.10/dist-packages/dagster/_grpc/impl.py", line 539, in get_partition_set_execution_param_data
    run_config = partition_set_def.run_config_for_partition(partition)
  File "/usr/local/lib/python3.10/dist-packages/dagster/_core/definitions/partition.py", line 860, in run_config_for_partition
    return copy.deepcopy(self._user_defined_run_config_fn_for_partition(partition))
  File "/usr/lib/python3.10/copy.py", line 146, in deepcopy
    y = copier(x, memo)
  File "/usr/lib/python3.10/copy.py", line 231, in _deepcopy_dict
  ...
```
For context, Dagit and Dagster are both on 1.3.2, and I have another job with a `@static_partitioned_config` that does not have this error when starting a backfill. From what I can tell, their definitions are pretty much the same in structure.
Does this error look familiar to anyone more experienced with Dagster than I am? The full stack trace just keeps descending through deepcopy calls until it ends with the snippet below; I haven't been able to grok what's wrong yet.
```
  File "/usr/lib/python3.10/copyreg.py", line 101, in __newobj__
    return cls.__new__(cls, *args)
```
c
The error message is definitely bizarre, but you shouldn't be passing `resource_defs` from your partitioned config along with the run config; resources should be defined on the job itself.
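A minimal sketch of that separation, reusing the identifiers from the snippet above (`SKYMAPPER_OBJECTS_FILE_PARTITION`, `get_current_namespace`, and `bin_skymapper_objects_by_healpix` come from the poster's project) and assuming `gcs_resource` is imported from `dagster_gcp`, since the original imports aren't shown in the thread:
```python
from dagster import job, static_partitioned_config
from dagster_gcp import gcs_resource  # assumption: original import not shown


@static_partitioned_config(
    partition_keys=SKYMAPPER_OBJECTS_FILE_PARTITION.get_partition_keys()
)
def skymapper_object_chunk_config(partition_key: str):
    # Return only run config: plain dicts and scalars that deepcopy cleanly.
    return {
        "ops": {
            "bin_skymapper_objects_by_healpix": {
                "config": {
                    "skymapper_object_chunk_partition_key": partition_key,
                    "gcs_bucket": "adam-dataset-dev",
                    "gcs_prefix": f"{get_current_namespace()}/dagster",
                }
            }
        }
    }


@job(
    name="chunk_skymapper_obj_to_hp_job",
    config=skymapper_object_chunk_config,
    resource_defs={"gcs": gcs_resource},  # resources belong on the job, not in run config
)
def chunk_skymapper_obj_to_hp_job():
    bin_skymapper_objects_by_healpix()
```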
n
That did, in fact, fix it. Oops. Thanks!
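For anyone who finds this thread later: the stack trace shows why the TypeError looks so strange. `run_config_for_partition` runs `copy.deepcopy` on whatever dict the partitioned config function returns, and a `ResourceDefinition` in that dict pulls Dagster's internal config `Shape` objects into the copy. `Shape.__new__` requires a `fields` argument, while deepcopy's generic fallback rebuilds instances via `cls.__new__(cls)` with no arguments. A stand-in class (not Dagster's real `Shape`) reproduces the mechanism:
```python
import copy


class FakeShape:
    # Illustrative stand-in for a class whose __new__ requires an argument.
    # Without __getnewargs__, deepcopy's pickle-protocol fallback rebuilds
    # instances as cls.__new__(cls) with no arguments, which fails here.
    def __new__(cls, fields):
        return super().__new__(cls)

    def __init__(self, fields):
        self.fields = fields


copy.deepcopy({"resource_defs": {"gcs": FakeShape(fields=("bucket",))}})
# TypeError: FakeShape.__new__() missing 1 required positional argument: 'fields'
```
Moving resources onto the job sidesteps this entirely, because the dict returned by the partitioned config then contains only deep-copyable primitives.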