# ask-community
Alexey Kulakov
Hi all, I have a question about asset materialization using DagsterGraphQLClient.
1. I have a partitioned asset defined as:
```python
from dagster import MonthlyPartitionsDefinition, asset

parts = MonthlyPartitionsDefinition(start_date='2022-01-01')

@asset(
    required_resource_keys={"spark"},
    io_manager_key="io_spark_iceberg_parts",
    partitions_def=parts,
)
def parts_proc(context, parts_source):
    # some_func stands in for the actual transformation logic
    parts_proc = some_func(parts_source)
    return parts_proc
```
2. I have an asset job defined as:
```python
from dagster import define_asset_job

assets_for_job = ['parts_proc']
asset_job = define_asset_job(
    name='asset_job',
    selection=assets_for_job,
)
```
3. I have some code to launch the job from an external Jupyter notebook:
```python
from dagster_graphql import DagsterGraphQLClient

client = DagsterGraphQLClient("dagster", port_number=3070)

client.submit_job_execution(
    job_name="asset_job",
    repository_name="my_repo",
)
```
How can I set the list of asset partitions that I need to materialize in "submit_job_execution" at step 3?
owen
hi @Alexey Kulakov! in the common case, a single job execution will target only a single partition. the solution for passing in a partition key through this API isn't the most elegant, but you should be able to do it through the run tags, i.e.
```python
# the partition key must be a valid key for the asset's partitions
# definition, e.g. a month start on or after 2022-01-01 here
client.submit_job_execution(
    job_name="asset_job",
    repository_name="my_repo",
    tags={"dagster/partition": "2022-01-01"},
)
```
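To materialize a list of partitions with this approach, one option (a minimal sketch, not an official batch API) is to submit one run per partition key and collect the returned run IDs; the partition_keys list below is a hypothetical example, and the client, job, and repository names are assumed from step 3:
```python
from dagster_graphql import DagsterGraphQLClient

client = DagsterGraphQLClient("dagster", port_number=3070)

# hypothetical list of monthly partition keys to materialize
partition_keys = ["2022-01-01", "2022-02-01", "2022-03-01"]

# submit_job_execution returns the run ID of each launched run
run_ids = [
    client.submit_job_execution(
        job_name="asset_job",
        repository_name="my_repo",
        tags={"dagster/partition": key},
    )
    for key in partition_keys
]
```
Each call launches an independent run, so a failure in one partition does not block the others.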
Alexey Kulakov
Hi @owen! Thank you so much! Can you recommend the most elegant approach for materializing partitioned assets from a Jupyter notebook? The idea is that end users will have a playbook where they can run jobs, check the results, and explore the data.
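One possible notebook pattern, building on the run-tags approach above, is a small helper that submits a run per partition key and polls each run's status via client.get_run_status. This is only a sketch: the names wait_for_run and materialize_partitions are hypothetical, and the client, job, and repository names are assumed from step 3.
```python
import time

from dagster import DagsterRunStatus

# statuses after which a run will not change anymore
TERMINAL_STATUSES = {
    DagsterRunStatus.SUCCESS,
    DagsterRunStatus.FAILURE,
    DagsterRunStatus.CANCELED,
}

def wait_for_run(run_id, poll_seconds=10):
    # poll the run until it reaches a terminal status, then return it
    while True:
        status = client.get_run_status(run_id)
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_seconds)

def materialize_partitions(partition_keys):
    # submit one run per partition key and report each result
    for key in partition_keys:
        run_id = client.submit_job_execution(
            job_name="asset_job",
            repository_name="my_repo",
            tags={"dagster/partition": key},
        )
        print(key, wait_for_run(run_id))
```
In a notebook cell this could then be driven with something like materialize_partitions(['2022-01-01', '2022-02-01']), with the printed statuses serving as the "check the results" step.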
j
Hi, any update on this? Is there a recommended approach for materializing partitioned assets from a Jupyter notebook?