# ask-community
m
hey, I'm doing graph initialization:
for graph in graphs:
    results.append(graph.to_job(resource_defs=resources, name=graph.name + "_env"))
return results
with the resources passed in `resources`. That generates jobs from graphs whose ops are configured. After that I created a job that does the partitioning, and from that job I run this `graph.to_job`.
When I run the graph separately, as a job, my configuration (e.g. a resource key in a nested op of the graph) works fine. However, when I run the graph from a job, this configuration is not picked up and I get something like:
UserWarning: Error loading repository location hello_flow:dagster.core.errors.DagsterInvalidDefinitionError: resource with key 'druid_db_client' required by op 'load_threads_graph.load_threads' was not provided
Are there any workarounds for that?
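For context, a rough reconstruction of the setup being described, based only on the names in the error above; the resource body, op bodies, and type annotations are assumptions:

from dagster import graph, op, resource

@resource
def druid_db_client(_init_context):
    # assumed: builds and returns a Druid client (not shown in the thread)
    ...

@op
def get_query_timeframe() -> str:
    # assumed: returns the timeframe to query (placeholder value)
    return "PT1H"

@op(required_resource_keys={"druid_db_client"})
def load_threads(context, timeframe: str):
    # assumed: queries Druid via context.resources.druid_db_client
    ...

@graph
def load_threads_graph(timeframe: str):
    load_threads(timeframe)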
@job(config=my_offset_partitioned_config)
def execute_timeseries_query():
    load_threads_graph(get_query_timeframe())
This is how my job looks; `load_threads_graph` is the graph, which has its own job with resources.
s
This warning means that you have a job somewhere that requires the `druid_db_client` resource but does not define it. Any of your jobs that use the `load_threads_graph` need to supply `druid_db_client` in `resource_defs`, since it is required by the constituent op `load_threads`:
@job(resource_defs={"druid_db_client": VALUE})
def my_job():
    load_threads_graph()

# or

load_threads_graph.to_job(resource_defs={"druid_db_client": VALUE})
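Applied to the partitioned job shown earlier, that would look roughly like this, assuming `druid_db_client` here is the actual ResourceDefinition and reusing `my_offset_partitioned_config` from the snippet above:

from dagster import job

@job(
    config=my_offset_partitioned_config,
    resource_defs={"druid_db_client": druid_db_client},
)
def execute_timeseries_query():
    load_threads_graph(get_query_timeframe())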
m
I did add it to resource_defs, but it's not picking it up
If you supply it separately to the graph that's converted to a job itself, it works
If you run the graph using another job, I get this error
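For reference, a short sketch of the distinction being described here, reusing the names from above: `resource_defs` passed to `to_job` belong only to that generated job object and do not travel with the graph when it is invoked inside another job:

# Converting the graph directly: the resource is attached to this generated
# job only, so running it standalone works.
threads_job = load_threads_graph.to_job(
    name=load_threads_graph.name + "_env",
    resource_defs={"druid_db_client": druid_db_client},
)

# Invoking the same graph inside another @job: those resource_defs do NOT
# carry over. The wrapping job builds its own resource set, so it has to list
# "druid_db_client" itself (as in the @job(resource_defs=...) snippet above);
# otherwise the repository location fails to load with the
# DagsterInvalidDefinitionError quoted earlier.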