Maksym Domariev
07/28/2022, 3:48 AM
for graph in graphs:
    results.append(graph.to_job(resource_defs=resources, name=graph.name + "_env"))
return results
That generates jobs from graph ops that are configured in resources. After that I created a job that does partitioning, and from that job I run this graph.to_job.
When I run the graph separately, as a job, my configuration (e.g. a resource key in a nested op of the graph) works fine.
However, after I run the graph from a job, this configuration is not applied and I get something like:
UserWarning: Error loading repository location hello_flow:dagster.core.errors.DagsterInvalidDefinitionError: resource with key 'druid_db_client' required by op 'load_threads_graph.load_threads' was not provided
Is there any workaround for that?
@job(config=my_offset_partitioned_config)
def execute_timeseries_query():
    load_threads_graph(get_query_timeframe())
This is how my job looks; load_threads_graph is the graph, which has its own job with the resource.
sean
07/28/2022, 12:35 PM
Your graph requires the druid_db_client resource but does not define it. Any of your jobs that use load_threads_graph need to supply druid_db_client in resource_defs, since it is required by the constituent op `load_threads`:
@job(resource_defs={"druid_db_client": VALUE})
def my_job():
    load_threads_graph()

# or

load_threads_graph.to_job(resource_defs={"druid_db_client": VALUE})