Alexey Kulakov

02/14/2023, 4:25 PM
Hi all, I have a question regarding asset materialization using DagsterGraphQLClient.
1. I have an asset with partitions, defined as:
from dagster import MonthlyPartitionsDefinition, asset

parts = MonthlyPartitionsDefinition(start_date='2022-01-01')

@asset(
    required_resource_keys={"spark"},
    io_manager_key="io_spark_iceberg_parts",
    partitions_def=parts,
)
def parts_proc(context, parts_source):
    parts_proc = some_func(parts_source)
    return parts_proc
2. I have an asset job defined as:
from dagster import define_asset_job

assets_for_job = ['parts_proc']
asset_job = define_asset_job(
    name='asset_job',
    selection=assets_for_job,
)
3. I have a definition to start the job from an external Jupyter notebook:
from dagster_graphql import DagsterGraphQLClient

# host and port of the Dagster webserver
client = DagsterGraphQLClient("dagster", port_number=3070)

client.submit_job_execution(
    job_name="asset_job",
    repository_name="my_repo"
)
How can I set the list of asset partitions that I need to materialize in submit_job_execution at step 3?

owen

02/14/2023, 6:53 PM
hi @Alexey Kulakov! in the common case, a single job execution will target only a single partition. the solution for passing in a partition key through this API isn't the most elegant, but you should be able to do it through the run tags, i.e.
client.submit_job_execution(
    job_name="asset_job",
    repository_name="my_repo",
    # run tag that selects the partition key to materialize
    tags={"dagster/partition": "2022-01-01"}
)
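if you need a list of partitions, the simplest thing is to submit one run per partition key in a loop -- a rough sketch (the keys below are just example monthly keys for your partitions definition):
# submit one run per partition key; submit_job_execution returns the run id
for key in ["2022-01-01", "2022-02-01", "2022-03-01"]:
    run_id = client.submit_job_execution(
        job_name="asset_job",
        repository_name="my_repo",
        tags={"dagster/partition": key},
    )
    print(f"submitted run {run_id} for partition {key}")
also, I believe you can pass partitions_def=parts to define_asset_job as well, which makes the job itself partition-aware in the UI.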

Alexey Kulakov

02/15/2023, 8:02 AM
Hi @owen! Thank you so much! Can you recommend the most elegant approach to materializing partitioned assets from a Jupyter notebook? The idea is that end users will have a playbook where they can run jobs, check the results, and explore the data.

Julius

04/26/2023, 10:53 AM
Hi, any update on this? Is there a recommended approach to materializing partitioned assets from a Jupyter notebook?