# ask-ai

Marie Hénaff

02/29/2024, 1:27 PM
I see the following error in the daemon logs, in Dagster version 1.6.6 on Kubernetes. Am I missing a configuration?

```
dagster.daemon.EventLogConsumerDaemon - ERROR - Error calling event log consumer handler: consume_new_runs_for_automatic_reexecution
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/dagster/_daemon/auto_run_reexecution/", line 86, in run_iteration
    yield from fn(workspace_process_context, run_records)
  File "/usr/local/lib/python3.10/site-packages/dagster/_daemon/auto_run_reexecution/", line 168, in consume_new_runs_for_automatic_reexecution
    for run, retry_number in filter_runs_to_should_retry(
  File "/usr/local/lib/python3.10/site-packages/dagster/_daemon/auto_run_reexecution/", line 60, in filter_runs_to_should_retry
    retry_number = get_retry_number(run)
  File "/usr/local/lib/python3.10/site-packages/dagster/_daemon/auto_run_reexecution/", line 49, in get_retry_number
    if len(run_group_list) >= max_retries + 1:
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
```


02/29/2024, 1:27 PM
Based on the error message you've provided, it seems that there is an issue with the configuration for automatic run re-execution. The error indicates that `max_retries` is `None`, which suggests that it might not be set correctly in your configuration. The documentation provides guidance on how to enable run retries and configure the maximum number of retries. To enable run retries, you should add the following to your `dagster.yaml`:
```yaml
run_retries:
  enabled: true
  max_retries: 3 # Sets a default for all jobs. 0 if not set.
```
This configuration starts a new daemon that polls the event log for run failure events and retries them according to the specified `max_retries`. If `max_retries` is not set, it defaults to 0, which means no retries will be attempted. You can also configure retries using tags, either on job definitions or in the Dagster UI Launchpad. Here's an example of how to set it on a job definition:
```python
from dagster import job

@job(tags={"dagster/max_retries": 3})
def sample_job():
    pass
```
Please ensure that your `dagster.yaml` includes the `run_retries` configuration with a valid `max_retries` value, and if you are using job tags to configure retries, make sure they are set correctly. If you have already configured `run_retries` and are still encountering this issue, it might be a bug or an edge case not covered by the documentation. In that case, you may want to check for related issues on the Dagster GitHub repository or reach out to the Dagster community for further assistance.
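For context on why the daemon crashes rather than simply skipping retries: the failing line in the traceback adds 1 to a `max_retries` value that is `None`. A minimal sketch of that comparison (a hypothetical simplification for illustration, not Dagster's actual code) shows both the failure mode and the guard that would avoid it:

```python
def should_retry(run_group_len, max_retries):
    """Sketch of the daemon's retry check: keep retrying while the run
    group (the original run plus its retries) is smaller than
    max_retries + 1."""
    if max_retries is None:
        # Guard against an unset value. The traceback shows the daemon
        # computing `max_retries + 1` directly, which raises TypeError
        # when max_retries is None instead of falling back to 0.
        max_retries = 0
    return run_group_len < max_retries + 1

# With max_retries unset, no retries are attempted.
assert should_retry(1, None) is False
# With max_retries=3, a group of 2 (one run + one retry) still retries.
assert should_retry(2, 3) is True
```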