Refine token size calculation and model selection in Coder class
Resolves #25
In this commit, we've made several adjustments to the `Coder` class in `aicodebot/coder.py` and `aicodebot/cli.py`. The token size calculation now includes a 5% buffer, down from 10%, to account for the occasional underestimation by the `tiktoken` library. The `get_token_length` method now defaults to the `gpt-4` model for token counting, and the debug output has been improved for readability.
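The buffered count described above can be sketched roughly like this (the function name matches the commit, but the body is an illustrative reconstruction, not the actual `aicodebot/coder.py` implementation; the character-based fallback is an assumption for environments without `tiktoken`):

```python
def get_token_length(text, model="gpt-4"):
    """Estimate the token count of `text`, padded by a 5% buffer to
    cover occasional underestimation by the tiktoken library."""
    try:
        import tiktoken
        num_tokens = len(tiktoken.encoding_for_model(model).encode(text))
    except ImportError:
        # Hypothetical fallback: roughly 4 characters per token
        num_tokens = len(text) // 4
    return int(num_tokens * 1.05)  # 5% buffer
```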
In `aicodebot/cli.py`, we've adjusted the `model_name` calculation in several methods to include `response_token_size` in the token count. This ensures that the selected model can handle the combined size of the request and the response. In the `sidekick` method, we've also introduced a `memory_token_size` to reserve room for a reasonable conversation history.
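The selection logic amounts to picking the smallest model whose context window fits the request plus the expected response. A minimal sketch, assuming hypothetical model names and context limits (the real `cli.py` logic and its model table may differ):

```python
def choose_model(request_token_size, response_token_size):
    """Return the smallest model whose context window can hold the
    request plus the expected response, or None if none fits."""
    total = request_token_size + response_token_size
    # Illustrative (model, context-window) pairs, smallest first
    for name, limit in (("gpt-4", 8_192), ("gpt-4-32k", 32_768)):
        if total <= limit:
            return name
    return None  # token count exceeds every configured model's limit
```

Returning `None` (rather than silently truncating) lets the caller surface a clear error when the combined token count exceeds every model's limit.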
These changes should improve the accuracy of model selection and prevent errors when the token count exceeds the model's limit.