Is there a way to compute a value as a `@resource`...
# ask-community
m
Is there a way to compute a value as a `@resource` once per job run and then re-use it? When I do something like this:
```python
import uuid

from dagster import resource

@resource
def some_singleton_for_this_job(init_context):
    return uuid.uuid4().hex
```
I seem to be getting a new value wherever the resource is used, whereas I would like to get one value shared among all `@op`s. And I don't think simple memoization would work (especially on Kubernetes). I could produce the value from one `@op` and pass it into others, but that's going to add a lot of extra parameter passing. My actual use case is getting a trace context ID to use for all instrumentation in one job, so I can link together spans/events/etc.
r
You could use the run ID available in the context as the trace ID.
m
I think I need to let the observability framework make up the trace ID. But if you know different (for OpenTelemetry in particular) let me know!
a
> compute a value as a `@resource` once per job run and then re-use it?
It depends on which executor you are using. With the in-process executor, the resource is only initialized once per run and shared in memory. With the multiprocess executor, you will need to use the filesystem or some other scheme to share the value between processes. For something like the k8s/celery executors, you will need to share the value via a database or service that all the participating machines can communicate with.
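To illustrate the filesystem scheme for the multiprocess case, here is a minimal stdlib-only sketch (not Dagster API: the `shared_value_for_run` helper and the cache-file naming are hypothetical). Each process derives the value from a file keyed by the run ID; the first process publishes its candidate atomically via `os.link`, so exactly one value wins and everyone reads it back. This only works across processes on the same machine, which is why k8s/celery need a database or service instead.

```python
import os
import tempfile
import uuid


def shared_value_for_run(run_id, cache_dir=None):
    """Return one value per run_id, shared across processes on one host.

    Hypothetical helper: a resource could call this with the run ID from
    its init context to get the same value in every process of a run.
    """
    cache_dir = cache_dir or tempfile.gettempdir()
    path = os.path.join(cache_dir, f"run-singleton-{run_id}")
    if not os.path.exists(path):
        # Write a candidate value to a temp file, then publish it with
        # os.link, which fails if another process has already published.
        fd, tmp = tempfile.mkstemp(dir=cache_dir)
        with os.fdopen(fd, "w") as f:
            f.write(uuid.uuid4().hex)
        try:
            os.link(tmp, path)  # atomic: exactly one process wins the race
        except FileExistsError:
            pass  # someone else got there first; use their value
        finally:
            os.unlink(tmp)
    with open(path) as f:
        return f.read()
```

The link-then-read dance matters because a plain "check, then write" has a window where two processes could each see no file and end up with different values.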
m
OK, thanks. Sounds like computing it as an `@op` and passing it as a parameter may be simplest. Dagster's I/O manager is easier to use than setting up my own storage (: