# ask-community
i
Hi! It's probably a simple question, but I cannot figure it out on my own. My goal is to have the same timestamp on every op in one run. Is there any way to achieve this without using partitioning? I've tried different approaches, but nothing works:
• Making the timestamp a shared resource: resources are initialized separately per op, so the timestamps differ depending on when each resource is actually initialized
• Making the timestamp a tag: the timestamp is generated at code-reload time and then stays fixed
• Making the timestamp part of the job config: likewise, it is generated on reload and stays fixed
Any ideas?
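(To illustrate the first bullet above, here's a minimal stdlib-only sketch of why a per-op-initialized timestamp "resource" can't give a run-wide value; `timestamp_resource` is a hypothetical stand-in, not Dagster API:)

```python
import time
from datetime import datetime, timezone

# Hypothetical stand-in for a timestamp resource: if the resource is
# re-initialized for each op, every op sees its own value.
def timestamp_resource():
    return datetime.now(timezone.utc)

ts_for_op_a = timestamp_resource()  # initialized when op A starts
time.sleep(0.01)
ts_for_op_b = timestamp_resource()  # initialized again when op B starts

# The two values differ, so this cannot serve as a shared run timestamp.
assert ts_for_op_a != ts_for_op_b
```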
r
Can you subclass the IO manager you're using, e.g. fs_io_manager, and override handle_output to add a timestamp?
i
Aren't IO managers initialized just like resources, per op and not per run? I understand how to add a timestamp to the output, but I have no clue where to get it from.
r
I'm not near a computer right now but I'd poke around the context object to see if there's a timestamp somewhere
i
Unfortunately there is only run_id https://docs.dagster.io/_apidocs/execution#contexts
d
Hey ivan, would using the start time of the run be an option for this? If so, you can fetch it via this slightly convoluted code snippet:
from dagster import RunsFilter

run_record = context.instance.get_run_records(
    filters=RunsFilter(run_ids=[context.dagster_run.run_id]),
    limit=1,
)[0]
run_started_time = run_record.start_time  # set once when the run started
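A note on using the value: `start_time` is, as far as I can tell, a Unix epoch float, so formatting it deterministically means every op in the run derives the identical string. A small sketch with a made-up epoch value standing in for `run_record.start_time`:

```python
from datetime import datetime, timezone

# Hypothetical epoch value standing in for run_record.start_time; since it
# is fixed at run start, every op that formats it gets the identical string.
run_started_time = 1700000000.0
shared_ts = datetime.fromtimestamp(run_started_time, tz=timezone.utc).isoformat()
# → "2023-11-14T22:13:20+00:00"
```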
i
@daniel oh, thank you so much! My next thought was to somehow get information about the run by its id, but you saved me so much time!