Add retry on error 429 code 11 from API #549

Pofilo wants to merge 2 commits into jabesq-org:development
Conversation
Reviewer's Guide

Implements a retry mechanism for specific 429 concurrency errors from the Netatmo API by introducing a dedicated ApiTooManyRequestError, detecting it in error handling, and wrapping async_post_api_request with exponential backoff retries, while also adding the relevant HTTP and API error codes to constants and making a minor cleanup in the energy module.

Sequence diagram for async_post_api_request retry on 429 concurrency errors

```mermaid
sequenceDiagram
    participant Caller
    participant AbstractAsyncAuth
    participant NetatmoAPI
    Caller->>AbstractAsyncAuth: async_post_api_request(endpoint, params, base_url)
    loop Retry up to MAX_RETRIES
        AbstractAsyncAuth->>NetatmoAPI: async_post_request(url, params)
        NetatmoAPI-->>AbstractAsyncAuth: HTTP 429 error code 11
        AbstractAsyncAuth->>AbstractAsyncAuth: handle_error_response(resp, status, url, resp_json)
        AbstractAsyncAuth-->>AbstractAsyncAuth: throw ApiTooManyRequestError
        AbstractAsyncAuth->>AbstractAsyncAuth: asyncio.sleep(backoff)
        AbstractAsyncAuth->>AbstractAsyncAuth: backoff = backoff * BACKOFF_FACTOR
    end
    AbstractAsyncAuth->>NetatmoAPI: async_post_request(url, params)
    NetatmoAPI-->>AbstractAsyncAuth: success response
    AbstractAsyncAuth-->>Caller: ClientResponse
    alt Max retries reached
        AbstractAsyncAuth-->>Caller: ApiTooManyRequestError
    end
```
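The retry flow in the sequence diagram above can be sketched as follows. This is a simplified standalone sketch, not the PR's actual code: the constant values, the stand-in `ApiTooManyRequestError` class, and the `post_with_retry`/`do_request` names are illustrative assumptions.

```python
import asyncio

# Assumed constants mirroring the PR's names; real values live in pyatmo's
# const module. Backoff is kept tiny here so the example runs quickly.
MAX_RETRIES = 3
INITIAL_BACKOFF = 0.01  # seconds
BACKOFF_FACTOR = 2


class ApiTooManyRequestError(Exception):
    """Stand-in for HTTP 429 responses with Netatmo error code 11."""


async def post_with_retry(do_request):
    """Call do_request, retrying with exponential backoff on 429/11 errors."""
    backoff = INITIAL_BACKOFF
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            return await do_request()
        except ApiTooManyRequestError:
            if attempt >= MAX_RETRIES:
                raise  # bare re-raise keeps the original traceback
            await asyncio.sleep(backoff)
            backoff *= BACKOFF_FACTOR


async def main():
    calls = {"n": 0}

    async def flaky():
        # Fails twice with a 429/11-style error, then succeeds.
        calls["n"] += 1
        if calls["n"] < 3:
            raise ApiTooManyRequestError("HTTP 429, error code 11")
        return "ok"

    result = await post_with_retry(flaky)
    print(result, calls["n"])


asyncio.run(main())
```

Note that each attempt either returns the response or raises: on the final attempt the exception propagates to the caller, matching the `alt Max retries reached` branch in the diagram.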
Class diagram for new ApiTooManyRequestError and AbstractAsyncAuth changes

```mermaid
classDiagram
    class ApiError
    class ApiTooManyRequestError
    ApiError <|-- ApiTooManyRequestError
    class AbstractAsyncAuth {
        +async_post_api_request(endpoint, params, base_url) ClientResponse
        +async_post_request(url, params) ClientResponse
        +process_response(resp, url) ClientResponse
        +handle_error_response(resp, resp_status, url, resp_json) void
    }
    ApiTooManyRequestError <.. AbstractAsyncAuth : raised_by
```
Flow diagram for handle_error_response with 429 concurrency handling

```mermaid
flowchart TD
    A[handle_error_response] --> B{resp_status == TOO_MANY_REQUESTS_ERROR_CODE and resp_json.error.code == CONCURRENCY_ERROR_CODE}
    B -- yes --> C[raise ApiTooManyRequestError]
    B -- no --> D{resp_status == FORBIDDEN_ERROR_CODE and resp_json.error.code == THROTTLING_ERROR_CODE}
    D -- yes --> E[raise ApiThrottlingError]
    D -- no --> F[raise ApiError]
```
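The branching above can be sketched in Python. This is a standalone sketch: the exception classes are stand-ins, and the `THROTTLING_ERROR_CODE` value is a placeholder, not a confirmed Netatmo code.

```python
from http import HTTPStatus

# Assumed constants mirroring the PR's names; THROTTLING_ERROR_CODE is a
# placeholder value for illustration only.
TOO_MANY_REQUESTS_ERROR_CODE = HTTPStatus.TOO_MANY_REQUESTS  # 429
FORBIDDEN_ERROR_CODE = HTTPStatus.FORBIDDEN                  # 403
CONCURRENCY_ERROR_CODE = 11
THROTTLING_ERROR_CODE = 26  # placeholder


class ApiError(Exception):
    """Base API error, as in pyatmo."""


class ApiTooManyRequestError(ApiError):
    """HTTP 429 with Netatmo concurrency error code 11."""


class ApiThrottlingError(ApiError):
    """HTTP 403 with the throttling error code."""


def handle_error_response(resp_status, resp_json, url):
    """Raise the most specific exception for an error response."""
    message = f"error {resp_json.get('error')} when accessing '{url}'"
    error_code = resp_json.get("error", {}).get("code")
    if (
        resp_status == TOO_MANY_REQUESTS_ERROR_CODE
        and error_code == CONCURRENCY_ERROR_CODE
    ):
        raise ApiTooManyRequestError(message)
    if resp_status == FORBIDDEN_ERROR_CODE and error_code == THROTTLING_ERROR_CODE:
        raise ApiThrottlingError(message)
    raise ApiError(message)
```

Checking the most specific condition first matters: both branches fall through to the generic `ApiError`, so the retry wrapper can catch `ApiTooManyRequestError` alone and let other failures propagate immediately.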
Hey - I've found 2 issues, and left some high-level feedback:

- Consider incorporating jitter into the exponential backoff in `async_post_api_request` to avoid synchronized retries across multiple clients and reduce the risk of coordinated load spikes.
- When raising `ApiTooManyRequestError`, it may be useful to include the HTTP status code and error code (or the raw response payload) on the exception object so callers and logs have more context for diagnosing 429/11 issues.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Consider incorporating jitter into the exponential backoff in `async_post_api_request` to avoid synchronized retries across multiple clients and reduce the risk of coordinated load spikes.
- When raising `ApiTooManyRequestError`, it may be useful to include the HTTP status code and error code (or the raw response payload) on the exception object so callers and logs have more context for diagnosing 429/11 issues.
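A minimal way to add the suggested jitter is "full jitter": sleep a uniformly random fraction of the current backoff rather than the exact value. The sketch below assumes the same constant names as the PR; the `backoff_delays` helper is hypothetical.

```python
import random

# Assumed constants mirroring the PR's names; values are illustrative.
INITIAL_BACKOFF = 1.0
BACKOFF_FACTOR = 2
MAX_RETRIES = 3


def backoff_delays(rng=random.uniform):
    """Yield one sleep duration per failed attempt, with full jitter.

    Full jitter picks a uniform value in [0, backoff], so clients that hit
    the same 429 at the same moment do not retry in lock-step. The rng
    parameter is injectable to make the helper testable.
    """
    backoff = INITIAL_BACKOFF
    for _ in range(MAX_RETRIES - 1):  # the final attempt re-raises, no sleep
        yield rng(0, backoff)
        backoff *= BACKOFF_FACTOR
```

In the retry loop, `await asyncio.sleep(backoff)` would become `await asyncio.sleep(random.uniform(0, backoff))` while the un-jittered `backoff` variable keeps growing geometrically.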
## Individual Comments
### Comment 1
<location> `src/pyatmo/auth.py:96-113` </location>
<code_context>
- url=(base_url or self.base_url) + endpoint,
- params=params,
- )
+ backoff = INITIAL_BACKOFF
+ error = None
+ for attempt in range(1, MAX_RETRIES + 1):
+ try:
+ return await self.async_post_request(
+ url=(base_url or self.base_url) + endpoint,
+ params=params,
+ )
+ except ApiTooManyRequestError as err:
+ if attempt >= MAX_RETRIES:
+ LOG.debug("Max retry reached %s", err)
+ error = err
+ break
+ LOG.debug("Retry (attempt=%d) %s", attempt, err)
+ await asyncio.sleep(backoff)
+ backoff *= BACKOFF_FACTOR
+
+ raise error
async def async_post_request(
</code_context>
<issue_to_address>
**suggestion:** Preserve traceback and simplify retry logic for ApiTooManyRequestError
Storing the last `ApiTooManyRequestError` and re-raising it later drops the original traceback and adds complexity. Instead, re-raise directly in the `except` block once `MAX_RETRIES` is reached, e.g.:
```python
backoff = INITIAL_BACKOFF
for attempt in range(1, MAX_RETRIES + 1):
try:
return await self.async_post_request(
url=(base_url or self.base_url) + endpoint,
params=params,
)
except ApiTooManyRequestError as err:
if attempt >= MAX_RETRIES:
LOG.debug("Max retry reached %s", err)
raise
LOG.debug("Retry (attempt=%d) %s", attempt, err)
await asyncio.sleep(backoff)
backoff *= BACKOFF_FACTOR
```
This removes the `error` variable and keeps the original stack trace for debugging.
```suggestion
backoff = INITIAL_BACKOFF
for attempt in range(1, MAX_RETRIES + 1):
try:
return await self.async_post_request(
url=(base_url or self.base_url) + endpoint,
params=params,
)
except ApiTooManyRequestError as err:
if attempt >= MAX_RETRIES:
LOG.debug("Max retry reached %s", err)
raise
LOG.debug("Retry (attempt=%d) %s", attempt, err)
await asyncio.sleep(backoff)
backoff *= BACKOFF_FACTOR
```
</issue_to_address>
### Comment 2
<location> `src/pyatmo/auth.py:185-191` </location>
<code_context>
f"when accessing '{url}'"
)
+ if (
+ resp_status == TOO_MANY_REQUESTS_ERROR_CODE
+ and resp_json["error"]["code"] == CONCURRENCY_ERROR_CODE
+ ):
+ raise ApiTooManyRequestError(message)
+
+ LOG.debug("The Netatmo API returned %s", message)
if (
resp_status == FORBIDDEN_ERROR_CODE
</code_context>
<issue_to_address>
**suggestion:** Consider logging 429 concurrency errors as well before raising
In the 429/concurrency case you raise `ApiTooManyRequestError` before logging `message`, while other error paths now log it. To keep observability consistent and help diagnose frequent 429s, consider logging `message` (e.g., at debug level) before raising here as well.
</issue_to_address>
Thanks @Pofilo for your contribution. Is there a reason to not use a retrying library like tenacity?
The reason is simple: I was not aware of the existence of such a library. I will update the PR when I find some time!
Force-pushed from bdad5cc to 9dde34f (Compare)
Force-pushed from 9dde34f to 02749a6 (Compare)
I added
Resolve #547
Summary by Sourcery
Add retry handling for specific Netatmo API 429 concurrency errors and introduce a dedicated exception type.