# dagster-feedback
v
Found an interesting edge case using a combination of configurable assets and `DynamicPartitions`, relating to the Dagster instance. Stacktrace and minimal code example in thread. Essentially, it seems to me like Dagster can’t reach the instance when trying to materialize configurable assets outside of a job. This behavior is consistent both with `multi_asset`s and `asset`s (graph-backed or not), and I tested both the new `Config` API as well as the old `config_schema` parameter.
Stacktrace:
```
dagster._core.errors.PartitionExecutionError: Error occurred during the execution of the partition generation function for partitioned config on job '__ASSET_JOB_0'

  File "/.venv/lib/python3.9/site-packages/dagster/_grpc/impl.py", line 375, in get_partition_names
    return ExternalPartitionNamesData(
  File "/Users/vinicius/.pyenv/versions/3.9.9/lib/python3.9/contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/.venv/lib/python3.9/site-packages/dagster/_core/errors.py", line 213, in user_code_error_boundary
    raise error_cls(

The above exception was caused by the following exception:
dagster._check.CheckError: Failure condition: The instance is not available to load partitions. You may be seeing this error when using dynamic partitions with a version of dagit or dagster-cloud that is older than 1.1.18.

  File "/.venv/lib/python3.9/site-packages/dagster/_core/errors.py", line 206, in user_code_error_boundary
    yield
  File "/.venv/lib/python3.9/site-packages/dagster/_grpc/impl.py", line 376, in get_partition_names
    partition_names=partition_set_def.get_partition_names()
  File "/.venv/lib/python3.9/site-packages/dagster/_core/definitions/partition.py", line 897, in get_partition_names
    return [part.name for part in self.get_partitions(current_time, dynamic_partitions_store)]
  File "/.venv/lib/python3.9/site-packages/dagster/_core/definitions/partition.py", line 879, in get_partitions
    return self._partitions_def.get_partitions(
  File "/.venv/lib/python3.9/site-packages/dagster/_core/definitions/partition.py", line 673, in get_partitions
    check.failed(
  File "/.venv/lib/python3.9/site-packages/dagster/_check/__init__.py", line 1699, in failed
    raise CheckError(f"Failure condition: {desc}")
```
Minimal implementation:
```python
from dagster import (
    AssetOut,
    Definitions,
    DynamicPartitionsDefinition,
    Output,
    asset,
    define_asset_job,
    multi_asset,
)


@asset(
    partitions_def=DynamicPartitionsDefinition(name="foo"),
    config_schema={"some_config": str},
)
def my_partitioned_asset(context):
    return context.partition_key


@multi_asset(
    outs={"asset_1": AssetOut(), "asset_2": AssetOut()},
    config_schema={"some_config": str},
    partitions_def=DynamicPartitionsDefinition(name="foo"),
)
def my_multi_asset(context):
    yield Output(value=context.partition_key, output_name="asset_1")
    yield Output(value=context.partition_key, output_name="asset_2")


defs = Definitions(
    assets=[my_partitioned_asset, my_multi_asset],
    jobs=[
        define_asset_job("multi_asset_job", selection=[my_multi_asset]),
        define_asset_job("asset_job", selection=[my_partitioned_asset]),
    ],
)
```
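For completeness, the same error also reproduces with the new pythonic `Config` API; a rough equivalent of the first asset (class and asset names here are just made up for illustration):

```python
from dagster import Config, DynamicPartitionsDefinition, asset


# Illustrative only: same dynamic partitions definition, config supplied via
# the pythonic Config class instead of config_schema.
class MyAssetConfig(Config):
    some_config: str


@asset(partitions_def=DynamicPartitionsDefinition(name="foo"))
def my_pythonic_partitioned_asset(context, config: MyAssetConfig):
    return context.partition_key
```

Same stacktrace either way.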
s
is it possible that this is the case?
```
dagster._check.CheckError: Failure condition: The instance is not available to load partitions. You may be seeing this error when using dynamic partitions with a version of dagit or dagster-cloud that is older than 1.1.18.
```
v
I’m running 1.1.20 locally, and the jobs can be run just fine with the partition definition, as can the assets if they don’t require config. Goes without saying that I migrated the instance too. I also tested it both within my docker-compose architecture and with a "bare", freshly initialized Dagster instance to make sure. I found the error message very odd. Maybe relevant to note that this error shows up right when I click the "Materialize" button, not during a job.
By now I’m thinking the error is related to the Launchpad that opens for config inputs in cases when the `Definitions` object has a dynamically partitioned asset; I tested a few more cases and this error happens even if the underlying asset I’m attempting to input configs for is not partitioned. If I launch the run with preconfigured defaults (e.g. by going the `@graph(config=config)` route before transforming the graph into an `AssetsDefinition`), the run can be launched just fine, but it errors when shift-clicking the "Materialize" button.
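For reference, a rough sketch of that workaround (op/graph names are made up here, but this is the shape of it):

```python
from dagster import (
    AssetsDefinition,
    ConfigMapping,
    DynamicPartitionsDefinition,
    graph,
    op,
)


@op(config_schema={"some_config": str})
def my_op(context):
    return context.op_config["some_config"]


# Bake a default into the graph via a config mapping so no Launchpad input
# is needed when launching the run.
preconfigured = ConfigMapping(
    config_schema={},
    config_fn=lambda _outer: {"my_op": {"config": {"some_config": "default"}}},
)


@graph(config=preconfigured)
def my_graph():
    return my_op()


graph_backed_asset = AssetsDefinition.from_graph(
    my_graph,
    partitions_def=DynamicPartitionsDefinition(name="foo"),
)
```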
c
@Vinnie @sandy having the same problem. Both my agent and code locations are on 1.1.21
s
@claire - mind looking into this?
c
lmk what other info would be helpful
my sensor will launch these assets successfully, but the error shows up in the launchpad config screen
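For context, the sensor does roughly this (simplified, using the asset/config names from the repro above as stand-ins; the real one figures out the keys elsewhere):

```python
from dagster import RunRequest, sensor


@sensor(job_name="asset_job")
def add_partition_and_launch(context):
    # Placeholder key; in the real sensor this comes from an external source.
    new_key = "some_new_key"
    # Register the key on the dynamic partitions definition, then request a
    # run for it directly, so the Launchpad is never involved.
    context.instance.add_dynamic_partitions("foo", [new_key])
    yield RunRequest(
        partition_key=new_key,
        run_config={
            "ops": {"my_partitioned_asset": {"config": {"some_config": "value"}}}
        },
    )
```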
c
Thanks for the report! I think I might know the cause of this--will look into it later today
j
Any update on this issue? I'm using this to add a partition key to the dynamic partitions definition:
```python
from dagster import DagsterInstance

# Register the "init_load" partition key on the dynamic partitions definition.
instance = DagsterInstance.get()
instance.add_dynamic_partitions("fivetran_prod_test_connection_test_03", ["init_load"])
```
And when I shift+click "Materialize" to add the configuration, I get this error:
```
dagster._core.errors.PartitionExecutionError: Error occurred during the execution of the partition generation function for partitioned config on job '__ASSET_JOB_0'

  File "C:\Users\nkduy3\AppData\Local\Programs\Python\Python39\lib\site-packages\dagster\_grpc\impl.py", line 375, in get_partition_names
    return ExternalPartitionNamesData(
  File "C:\Users\nkduy3\AppData\Local\Programs\Python\Python39\lib\contextlib.py", line 137, in __exit__
    self.gen.throw(typ, value, traceback)
  File "C:\Users\nkduy3\AppData\Local\Programs\Python\Python39\lib\site-packages\dagster\_core\errors.py", line 213, in user_code_error_boundary
    raise error_cls(

The above exception was caused by the following exception:
dagster._check.CheckError: Failure condition: The instance is not available to load partitions. You may be seeing this error when using dynamic partitions with a version of dagit or dagster-cloud that is older than 1.1.18.

  File "C:\Users\nkduy3\AppData\Local\Programs\Python\Python39\lib\site-packages\dagster\_core\errors.py", line 206, in user_code_error_boundary
    yield
  File "C:\Users\nkduy3\AppData\Local\Programs\Python\Python39\lib\site-packages\dagster\_grpc\impl.py", line 376, in get_partition_names
    partition_names=partition_set_def.get_partition_names()
  File "C:\Users\nkduy3\AppData\Local\Programs\Python\Python39\lib\site-packages\dagster\_core\definitions\partition.py", line 888, in get_partition_names
    return [part.name for part in self.get_partitions(current_time, dynamic_partitions_store)]
  File "C:\Users\nkduy3\AppData\Local\Programs\Python\Python39\lib\site-packages\dagster\_core\definitions\partition.py", line 869, in get_partitions
    return self._partitions_def.get_partitions(
  File "C:\Users\nkduy3\AppData\Local\Programs\Python\Python39\lib\site-packages\dagster\_core\definitions\partition.py", line 690, in get_partitions
    check.failed(
  File "C:\Users\nkduy3\AppData\Local\Programs\Python\Python39\lib\site-packages\dagster\_check\__init__.py", line 1699, in failed
    raise CheckError(f"Failure condition: {desc}")
```
I’m using version 1.2.1, and the job runs normally when triggered by a sensor or when materialized using the default config.
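The default config is just declared on the asset itself, roughly like this (asset name and default value are placeholders):

```python
from dagster import DynamicPartitionsDefinition, Field, asset


# Placeholder sketch: a default on the config field is what lets a plain
# "Materialize" click (no Launchpad input) launch the run.
@asset(
    partitions_def=DynamicPartitionsDefinition(
        name="fivetran_prod_test_connection_test_03"
    ),
    config_schema={"some_config": Field(str, default_value="init_load")},
)
def my_fivetran_asset(context):
    return context.partition_key
```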
v
The issue I originally raised is fixed if you upgrade to the latest version.
j
ok, thank you @Vinnie