# ask-community
d
UPD ☝️: the configuration I figured out from the function definitions is passing
hooks={slack_on_failure('#channel_name', my_message_func)}
to
.to_job()
after declaring
my_message_func
following the example from the docs, but unfortunately I'm getting no notifications in the channel on op failure =/
a
hmm - did you see any mentions of the
hook
in the event log for the run you tried?
y
or any events like HOOK_ERRORED, HOOK_SKIPPED, or HOOK_COMPLETED?
d
sorry for dropping out, I'm not fully sure, but I'll double-check the logs when I get back to my office in the morning and let you know asap
blob thumbs up 1
ok, I finally got my logs... after every step (op) I have a HOOK_SKIPPED log entry that reads:
Skipped the execution of hook "_hook". It does not meet its triggering conditions during the execution of "op_name".
do I need to provide more code, for example, the message function that was added to the job declaration?
y
what is the failure/error that you expect to trigger the hook? more specifically, did you see HOOK_SKIPPED event for the failed op as well?
d
ok, I found out that the hook fails to fire only when I'm executing the job directly from Python, with
.execute_in_process()
after it has been configured
when executing with the dagster binary it works, I'm getting the notification 😃
but it would be very nice to have notifications about failures when it's executed directly too... I mean, Python stops executing before the hook gets a chance to run... is it possible at all? did anybody try to implement something similar? in my op that must fail, I'm requesting a file download from the DataLake with a 3rd-party module connected as a resource... the op contains a try/except block to check file presence before requesting it, and the specific thing I tried was to remove the file from the source to make the op fail... does that mean I'm forced to use an in-op Slack notification at the file-presence check instead of the slack_on_failure hook configured for the whole job (only regarding in-process execution)???
a
ah
execute_in_process
is mostly designed for testing, so we default the flag
raise_on_error
to True, so the exception bubbles all the way up to the call site instead of following the normal flow of running hooks. If you set
raise_on_error
to
False
I think it may resolve your issue
❤️ 1
y
hook code will be invoked in-process with the job execution, which means if the job execution stops, hooks can no longer be executed. the job execution will stop when you execute_in_process with
raise_on_error=True
which is the default
❤️ 1
ah exactly as alex said^
😛 1
we also recommend using Dagit in this case because Dagit won’t stop when a job execution fails/errors. It instead would wrap the error and send helpful error info to the UI, i.e. raise_on_error=False by default.
d
yes, thank you, I will try changing this mode and test it. our team already decided to use dagster & dagit for huge data processing jobs and to use execute_in_process for small jobs, just because it's very fast to operate on not-so-huge data in memory
a
another thing to note for that API: if you want to report the events & runs to your DB you can use
instance=DagsterInstance.get()
😯 1
it defaults to an ephemeral in-memory one - there will be some perf overhead for writing out the events, so it's a trade-off for recording history
d
it's something interesting, will try to read more about it first