Jerry Bao
02/28/2024, 9:46 PM

Scout
02/28/2024, 9:46 PM
slack_urgent_failure.
- Description - Enter a description for the alert policy.

For asset-based alerts, fill out these additional options:
- Asset group - Select the asset group to monitor. You will have the option to select from all asset groups in the deployment.
- Asset key - Select the asset key to monitor. You will have the option to select from all asset keys in the deployment. Note: if you select an asset group, you will not be able to select an asset key.
- Events - Select whether the alert should trigger on asset materialization failure, asset materialization success, asset check error, asset check warn, asset check passed, or asset check failure to execute.

Finally, select the Notification service you wish to use, such as Slack, and configure it accordingly. Remember that alert policies are configured on a per-deployment basis, so asset alerts configured in a prod deployment apply only to assets in that deployment. Currently, Slack and email notifications are supported.

Please note that asset-based alert policies are still experimental and may change as we gather user feedback. Additionally, Organization Admin, Admin, or Editor permissions are required to manage alerts in Dagster Cloud. If you're a Dagster Cloud Admin or Editor, you can only manage alerts in deployments where you're an Admin.

Jerry Bao
02/28/2024, 9:47 PM

Scout
02/28/2024, 9:47 PM
`datadog` Python package, which provides a Python client for interacting with the Datadog API. You can install it using `pip`:

```shell
pip install datadog
```
3. Dagster Resources: Create a custom Dagster resource that encapsulates the logic for sending events to Datadog. This resource will use the Datadog client to send events.
4. Asset Observers: Implement asset observers in your Dagster repository. Asset observers are hooks that allow you to perform custom logic when an asset is materialized or when materialization fails.
5. Send Events on Failure: Within the asset observer, check for materialization failure events and use the Datadog resource to send an event to Datadog when a failure occurs.
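Dagster aside, the shape of steps 3-5 can be sketched in plain, dependency-free Python. Everything here (`make_datadog_resource`, `on_materialization`, the event dict keys) is an illustrative stand-in, not a Dagster or Datadog API:

```python
def make_datadog_resource(send):
    # Step 3: wrap event-sending logic as a "resource". `send` stands in for
    # the Datadog client call, which keeps this sketch testable offline.
    def post_event(title, text, tags):
        send({"title": title, "text": text, "tags": tags})
    return post_event

def on_materialization(event, post_event):
    # Steps 4-5: observer logic - post an event only when materialization failed.
    if not event["success"]:
        asset_key = event["asset_key"]
        post_event(
            title=f"Asset Materialization Failure: {asset_key}",
            text=f"An asset with key {asset_key} failed to materialize.",
            tags=["dagster", f"asset:{asset_key}"],
        )
```

A real implementation would replace `send` with a call into the Datadog client and let Dagster invoke the observer logic for you.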
Here's a simplified example of how you might set up a custom resource and asset observer to send events to Datadog:
```python
from dagster import resource, AssetMaterialization, AssetKey, EventRecordsFilter
from datadog import initialize, api

# Initialize the Datadog client
options = {
    'api_key': 'your_api_key',
    'app_key': 'your_app_key',
}
initialize(**options)

@resource
def datadog_resource(_):
    def post_event(title, text, tags):
        api.Event.create(title=title, text=text, tags=tags)
    return post_event

@asset_observer(required_resource_keys={"datadog"})
def datadog_asset_observer(context, event):
    if isinstance(event, AssetMaterialization) and not event.materialization_success:
        asset_key = event.asset_key
        # Construct a title and message for the Datadog event
        title = f"Asset Materialization Failure: {asset_key}"
        text = f"An asset with key {asset_key} failed to materialize."
        tags = ['dagster', f'asset:{asset_key}']
        # Send the event to Datadog
        context.resources.datadog.post_event(title, text, tags)

# In your repository definition, include the resource and observer
@repository
def my_repository():
    return [
        ...,
        datadog_asset_observer,
        datadog_resource,
        ...,
    ]
```
Please note that this is a high-level example and you would need to adapt it to your specific use case and environment. Additionally, you should handle authentication and error checking appropriately.
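For the error-checking piece, one possible sketch is a retry wrapper around the event call. This is pure Python under stated assumptions: `create_event` is an injected stand-in for `datadog.api.Event.create`, and the linear-backoff retry policy is an assumption of this sketch, not something Dagster or Datadog prescribe:

```python
import time

def post_event_with_retry(create_event, title, text, tags, retries=3, delay=1.0):
    # `create_event` is injected so the wrapper can be exercised without
    # network access or credentials; pass `api.Event.create` in real use.
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return create_event(title=title, text=text, tags=tags)
        except Exception as exc:  # in practice, catch the client's specific exceptions
            last_error = exc
            if attempt < retries:
                time.sleep(delay * attempt)  # simple linear backoff between attempts
    raise RuntimeError(f"failed to post Datadog event after {retries} attempts") from last_error
```

Raising after the final attempt (with `from last_error` preserving the cause) makes the failure visible in Dagster's logs instead of swallowing it silently.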
Remember to replace `'your_api_key'` and `'your_app_key'` with your actual Datadog API and application keys. You should also ensure that the Datadog client is properly configured to communicate with the Datadog API from your environment. Keep in mind that the actual implementation details may vary based on the version of Dagster you are using and any updates to the Dagster or Datadog APIs that have occurred after my knowledge cutoff date. Always refer to the latest documentation for both Dagster and Datadog for the most up-to-date guidance.

Jerry Bao
02/28/2024, 9:53 PM
`@asset_observer` is not a decorator that exists in the API

Scout
02/28/2024, 9:53 PM
`@asset_observer` is not a decorator in the Dagster API. Instead, you can use Dagster's event hooks to respond to asset events. Here's how you can set up a Dagster event hook to send an alert to Datadog when an asset fails to materialize:
1. Create a Hook: Define a custom event hook using the `@event_hook` decorator. This hook will be triggered after the execution of an op.
2. Check for Asset Materialization Events: Within the hook, check the event log for `AssetMaterialization` events and determine if there was a failure.
3. Send an Event to Datadog: If there was a failure, use the Datadog API to send an event.
Here's an example of how you might implement this:

```python
from dagster import event_hook, HookContext
from datadog import initialize, api

# Initialize the Datadog client
options = {
    'api_key': 'your_api_key',
    'app_key': 'your_app_key',
}
initialize(**options)

def send_datadog_event(title, text, tags):
    api.Event.create(title=title, text=text, tags=tags)

@event_hook(required_resource_keys={"datadog"})
def datadog_event_hook(context: HookContext):
    for event_record in context.event_log_entry.dagster_event.event_specific_data.materialization_events:
        if not event_record.materialization_success:
            asset_key = event_record.asset_key.to_string()
            title = f"Asset Materialization Failure: {asset_key}"
            text = f"An asset with key {asset_key} failed to materialize."
            tags = ['dagster', f'asset:{asset_key}']
            send_datadog_event(title, text, tags)

# In your job definition, include the hook
@job(hooks={datadog_event_hook})
def my_job():
    my_op()
```
In this example, `my_op` would be an op that you expect to materialize an asset. The `datadog_event_hook` is attached to the job and will be triggered after `my_op` executes. If the asset fails to materialize, the hook will send an event to Datadog. Please ensure that you have the `datadog` Python package installed and that you replace `'your_api_key'` and `'your_app_key'` with your actual Datadog API and application keys. Keep in mind that this example is a simplified illustration and you may need to adapt it to fit the specifics of your Dagster setup and the structure of your assets and ops. Always refer to the latest Dagster documentation for the most current patterns and practices for implementing hooks and handling events.

Jerry Bao
02/28/2024, 9:55 PM
`@event_hook` is also not a decorator

Scout
02/28/2024, 9:55 PM
`@event_hook` is not a decorator in the Dagster API. Instead, Dagster provides hooks that can be triggered on op success or failure. To send an alert to Datadog when an asset fails to materialize, you can use a `@failure_hook`. Here's an example of how you might set up a failure hook to send an alert to Datadog:

```python
from dagster import failure_hook, job, HookContext
from datadog import initialize, api

# Initialize the Datadog client
options = {
    'api_key': 'your_api_key',
    'app_key': 'your_app_key',
}
initialize(**options)

def send_datadog_event(title, text, tags):
    api.Event.create(title=title, text=text, tags=tags)

@failure_hook(required_resource_keys={"datadog"})
def datadog_failure_hook(context: HookContext):
    # You can access the op name and error information from the context
    op_name = context.op.name
    error_info = context.op_exception
    title = f"Failure in op: {op_name}"
    text = f"An error occurred in op {op_name}: {error_info}"
    tags = ['dagster', f'op:{op_name}']
    send_datadog_event(title, text, tags)

# In your job definition, include the hook
@job(hooks={datadog_failure_hook})
def my_job():
    my_op()
```
In this example, `my_op` would be an op that you expect to materialize an asset. The `datadog_failure_hook` is attached to the job and will be triggered if `my_op` fails, sending an event to Datadog. Please ensure that you have the `datadog` Python package installed and that you replace `'your_api_key'` and `'your_app_key'` with your actual Datadog API and application keys. Keep in mind that this example is a simplified illustration and you may need to adapt it to fit the specifics of your Dagster setup and the structure of your assets and ops. Always refer to the latest Dagster documentation for the most current patterns and practices for implementing hooks and handling events.

Jerry Bao
02/28/2024, 9:57 PM

Scout
02/28/2024, 9:57 PM