
Add retry on error 429 code 11 from API #549

Open
Pofilo wants to merge 2 commits into jabesq-org:development from Pofilo:add-retry-error-429-code-11

Conversation

@Pofilo

@Pofilo Pofilo commented Jan 6, 2026

Resolves #547

Summary by Sourcery

Add retry handling for specific Netatmo API 429 concurrency errors and introduce a dedicated exception type.

Bug Fixes:

  • Handle Netatmo API 429 concurrency errors (code 11) by raising a specific exception and retrying the request with exponential backoff.

Enhancements:

  • Refine error logging and response handling to distinguish concurrency-related 429 errors from existing throttling errors.

Chores:

  • Clean up unused variable warnings in energy module API calls.

@Pofilo Pofilo requested review from cgtobi and jabesq as code owners January 6, 2026 17:46
@sourcery-ai
Contributor

sourcery-ai bot commented Jan 6, 2026

Reviewer's Guide

Implements a retry mechanism for the specific 429 concurrency error returned by the Netatmo API: it introduces a dedicated ApiTooManyRequestError, detects it in error handling, and wraps async_post_api_request in an exponential-backoff retry loop. It also adds the relevant HTTP and API error codes to the constants module and makes a minor cleanup in the energy module.

Sequence diagram for async_post_api_request retry on 429 concurrency error

sequenceDiagram
    participant Caller
    participant AbstractAsyncAuth
    participant NetatmoAPI

    Caller->>AbstractAsyncAuth: async_post_api_request(endpoint, params, base_url)

    loop Retry up to MAX_RETRIES
        AbstractAsyncAuth->>NetatmoAPI: async_post_request(url, params)
        NetatmoAPI-->>AbstractAsyncAuth: HTTP 429 error code 11
        AbstractAsyncAuth->>AbstractAsyncAuth: handle_error_response(resp, status, url, resp_json)
        AbstractAsyncAuth-->>AbstractAsyncAuth: raise ApiTooManyRequestError
        AbstractAsyncAuth->>AbstractAsyncAuth: asyncio.sleep(backoff)
        AbstractAsyncAuth->>AbstractAsyncAuth: backoff = backoff * BACKOFF_FACTOR
    end

    AbstractAsyncAuth->>NetatmoAPI: async_post_request(url, params)
    NetatmoAPI-->>AbstractAsyncAuth: success response
    AbstractAsyncAuth-->>Caller: ClientResponse

    alt Max retries reached
        AbstractAsyncAuth-->>Caller: ApiTooManyRequestError
    end

Class diagram for new ApiTooManyRequestError and AbstractAsyncAuth changes

classDiagram
    class ApiError
    class ApiTooManyRequestError
    ApiError <|-- ApiTooManyRequestError

    class AbstractAsyncAuth {
        +async_post_api_request(endpoint, params, base_url) ClientResponse
        +async_post_request(url, params) ClientResponse
        +process_response(resp, url) ClientResponse
        +handle_error_response(resp, resp_status, url, resp_json) void
    }

    ApiTooManyRequestError <.. AbstractAsyncAuth : raised_by

Flow diagram for handle_error_response with 429 concurrency handling

flowchart TD
    A[handle_error_response] --> B{resp_status == TOO_MANY_REQUESTS_ERROR_CODE and resp_json.error.code == CONCURRENCY_ERROR_CODE}
    B -- yes --> C[raise ApiTooManyRequestError]
    B -- no --> D{resp_status == FORBIDDEN_ERROR_CODE and resp_json.error.code == THROTTLING_ERROR_CODE}
    D -- yes --> E[raise ApiThrottlingError]
    D -- no --> F[raise ApiError]

File-Level Changes

Change Details Files
Add exponential backoff retry logic around async POST API requests when a specific 429 concurrency error is raised.
  • Introduce MAX_RETRIES, INITIAL_BACKOFF, and BACKOFF_FACTOR constants to control retry behavior.
  • Wrap async_post_api_request calls to async_post_request in a retry loop that catches ApiTooManyRequestError and sleeps with exponential backoff between attempts.
  • Log debug information on each retry attempt and when the maximum number of retries is reached, re-raising the last error.
src/pyatmo/auth.py
Detect 429 concurrency errors from the API and raise a dedicated exception type.
  • Add TOO_MANY_REQUESTS_ERROR_CODE and CONCURRENCY_ERROR_CODE constants for HTTP 429 and API error code 11.
  • Extend handle_error_response to detect HTTP 429 with error code 11 and raise ApiTooManyRequestError instead of the generic ApiError.
  • Adjust error logging to log the assembled error message rather than raw response content.
src/pyatmo/auth.py
src/pyatmo/const.py
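The detection described in this change entry can be sketched as follows. This is a minimal illustration, not the actual pyatmo code: the constant values follow the PR description (HTTP 429, API error code 11), while the message format and function signature are assumptions.

```python
# Constant values per the PR description; names follow src/pyatmo/const.py.
TOO_MANY_REQUESTS_ERROR_CODE = 429  # HTTP status for "Too Many Requests"
CONCURRENCY_ERROR_CODE = 11         # Netatmo API error code for concurrency


class ApiError(Exception):
    """Generic Netatmo API error."""


class ApiTooManyRequestError(ApiError):
    """Raised on HTTP 429 with API error code 11 (concurrency limit)."""


def handle_error_response(resp_status: int, resp_json: dict, url: str) -> None:
    """Raise the most specific exception for an API error response."""
    message = (
        f"{resp_status} - {resp_json['error'].get('message', '')} "
        f"({resp_json['error']['code']}) when accessing '{url}'"
    )
    # HTTP 429 with code 11 gets the dedicated, retryable exception;
    # everything else falls through to the generic ApiError.
    if (
        resp_status == TOO_MANY_REQUESTS_ERROR_CODE
        and resp_json["error"]["code"] == CONCURRENCY_ERROR_CODE
    ):
        raise ApiTooManyRequestError(message)
    raise ApiError(message)
```

Because ApiTooManyRequestError subclasses ApiError, existing callers that catch the generic exception keep working unchanged.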
Introduce a new ApiTooManyRequestError exception for 429 concurrency errors.
  • Define ApiTooManyRequestError subclassing ApiError with a dedicated docstring.
  • Import ApiTooManyRequestError into auth.py for use in retry logic.
src/pyatmo/exceptions.py
src/pyatmo/auth.py
Minor cleanup in energy module to avoid unused variable warning from _energy_api_calls return value.
  • Rename filters variable to _filters when capturing the first return value from _energy_api_calls, indicating it is intentionally unused.
src/pyatmo/modules/module.py

Assessment against linked issues

Issue Objective Addressed Explanation
#547 Detect Netatmo concurrency limit errors (HTTP 429 with API error code 11) and treat them as a distinct, retryable error in the client.
#547 Add a retry mechanism with backoff to async_post_request calls (or their wrapper) when this concurrency error occurs, so the request is transparently retried a limited number of times.

Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues and left some high-level feedback:

  • Consider incorporating jitter into the exponential backoff in async_post_api_request to avoid synchronized retries across multiple clients and reduce the risk of coordinated load spikes.
  • When raising ApiTooManyRequestError, it may be useful to include the HTTP status code and error code (or the raw response payload) on the exception object so callers and logs have more context for diagnosing 429/11 issues.
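The jitter suggestion above could look something like this minimal sketch. The constant values, the coroutine signature, and the retried exception type are assumptions for illustration, not the PR's implementation.

```python
import asyncio
import random

# Assumed limits, kept small for illustration.
MAX_RETRIES = 3
INITIAL_BACKOFF = 0.1  # seconds
BACKOFF_FACTOR = 2.0


class ApiTooManyRequestError(Exception):
    """Stand-in for pyatmo's 429/code-11 exception."""


async def retry_with_jitter(request):
    """Retry `request` with exponential backoff plus full jitter."""
    backoff = INITIAL_BACKOFF
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return await request()
        except ApiTooManyRequestError:
            if attempt >= MAX_RETRIES:
                raise
            # Sleep a random fraction of the current backoff so concurrent
            # clients do not all retry in lockstep.
            await asyncio.sleep(random.uniform(0, backoff))
            backoff *= BACKOFF_FACTOR
```

Full jitter (uniform between 0 and the current backoff) is a common choice because it spreads retries evenly across the window instead of clustering them at the deadline.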
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Consider incorporating jitter into the exponential backoff in `async_post_api_request` to avoid synchronized retries across multiple clients and reduce the risk of coordinated load spikes.
- When raising `ApiTooManyRequestError`, it may be useful to include the HTTP status code and error code (or the raw response payload) on the exception object so callers and logs have more context for diagnosing 429/11 issues.

## Individual Comments

### Comment 1
<location> `src/pyatmo/auth.py:96-113` </location>
<code_context>
-            url=(base_url or self.base_url) + endpoint,
-            params=params,
-        )
+        backoff = INITIAL_BACKOFF
+        error = None
+        for attempt in range(1, MAX_RETRIES + 1):
+            try:
+                return await self.async_post_request(
+                    url=(base_url or self.base_url) + endpoint,
+                    params=params,
+                )
+            except ApiTooManyRequestError as err:
+                if attempt >= MAX_RETRIES:
+                    LOG.debug("Max retry reached %s", err)
+                    error = err
+                    break
+                LOG.debug("Retry (attempt=%d) %s", attempt, err)
+                await asyncio.sleep(backoff)
+                backoff *= BACKOFF_FACTOR
+
+        raise error

     async def async_post_request(
</code_context>

<issue_to_address>
**suggestion:** Preserve traceback and simplify retry logic for ApiTooManyRequestError

Storing the last `ApiTooManyRequestError` and re-raising it later drops the original traceback and adds complexity. Instead, re-raise directly in the `except` block once `MAX_RETRIES` is reached, e.g.:

```python
backoff = INITIAL_BACKOFF
for attempt in range(1, MAX_RETRIES + 1):
    try:
        return await self.async_post_request(
            url=(base_url or self.base_url) + endpoint,
            params=params,
        )
    except ApiTooManyRequestError as err:
        if attempt >= MAX_RETRIES:
            LOG.debug("Max retry reached %s", err)
            raise
        LOG.debug("Retry (attempt=%d) %s", attempt, err)
        await asyncio.sleep(backoff)
        backoff *= BACKOFF_FACTOR
```

This removes the `error` variable and keeps the original stack trace for debugging.

```suggestion
        backoff = INITIAL_BACKOFF
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                return await self.async_post_request(
                    url=(base_url or self.base_url) + endpoint,
                    params=params,
                )
            except ApiTooManyRequestError as err:
                if attempt >= MAX_RETRIES:
                    LOG.debug("Max retry reached %s", err)
                    raise
                LOG.debug("Retry (attempt=%d) %s", attempt, err)
                await asyncio.sleep(backoff)
                backoff *= BACKOFF_FACTOR
```
</issue_to_address>

### Comment 2
<location> `src/pyatmo/auth.py:185-191` </location>
<code_context>
                 f"when accessing '{url}'"
             )

+            if (
+                resp_status == TOO_MANY_REQUESTS_ERROR_CODE
+                and resp_json["error"]["code"] == CONCURRENCY_ERROR_CODE
+            ):
+                raise ApiTooManyRequestError(message)
+
+            LOG.debug("The Netatmo API returned %s", message)
             if (
                 resp_status == FORBIDDEN_ERROR_CODE
</code_context>

<issue_to_address>
**suggestion:** Consider logging 429 concurrency errors as well before raising

In the 429/concurrency case you raise `ApiTooManyRequestError` before logging `message`, while other error paths now log it. To keep observability consistent and help diagnose frequent 429s, consider logging `message` (e.g., at debug level) before raising here as well.
</issue_to_address>


@cgtobi
Collaborator

cgtobi commented Jan 7, 2026

Thanks @Pofilo for your contribution. Is there a reason to not use a retrying library like tenacity?

@Pofilo
Author

Pofilo commented Jan 8, 2026

The reason is simple: I was not aware of the existence of such a library.

I will update the PR when I find some time!

@Pofilo Pofilo force-pushed the add-retry-error-429-code-11 branch from 9dde34f to 02749a6 on February 8, 2026 at 16:50
@Pofilo
Author

Pofilo commented Feb 8, 2026

I added tenacity as suggested and rebased my branch.



Development

Successfully merging this pull request may close these issues.

Add retry after code 11 concurrency limited

2 participants