🐛 fix: correct totalOutputTokens calculation for XAI provider #8984
Conversation
@bbbugg is attempting to deploy a commit to the LobeHub OSS Team on Vercel. A member of the Team first needs to authorize it.
Reviewer's Guide: Corrects the total output token count for the xAI provider by including reasoning tokens in the calculation, adjusting the usageConverter logic and its corresponding tests.
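For context, here is a minimal TypeScript sketch of the idea behind the fix. It assumes xAI returns an OpenAI-compatible usage object in which reasoning tokens are reported separately under `completion_tokens_details.reasoning_tokens` and are not already folded into `completion_tokens`; the type and function names below are illustrative, not the project's actual code.

```ts
// Illustrative sketch only — names and shapes are assumptions, not lobe-chat's real code.
interface XaiUsage {
  completion_tokens: number;
  completion_tokens_details?: { reasoning_tokens?: number };
  prompt_tokens: number;
  total_tokens: number;
}

interface ConvertedUsage {
  outputReasoningTokens: number;
  outputTextTokens: number;
  totalInputTokens: number;
  totalOutputTokens: number;
  totalTokens: number;
}

const convertXaiUsage = (usage: XaiUsage): ConvertedUsage => {
  // xAI reports reasoning tokens separately rather than inside completion_tokens.
  const reasoningTokens = usage.completion_tokens_details?.reasoning_tokens ?? 0;

  // The fix: count reasoning tokens toward the output total instead of dropping them.
  const totalOutputTokens = usage.completion_tokens + reasoningTokens;

  return {
    outputReasoningTokens: reasoningTokens,
    outputTextTokens: usage.completion_tokens,
    totalInputTokens: usage.prompt_tokens,
    totalOutputTokens,
    totalTokens: usage.prompt_tokens + totalOutputTokens,
  };
};
```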
👍 @bbbugg Thank you for raising your pull request and contributing to our community.
Codecov Report: ✅ All modified and coverable lines are covered by tests.

Additional details and impacted files:

@@           Coverage Diff            @@
##             main    #8984    +/-   ##
=========================================
  Coverage   84.04%   84.04%
=========================================
  Files         870      870
  Lines       70571    70594      +23
  Branches     4889     6503    +1614
=========================================
+ Hits        59309    59332      +23
  Misses      11256    11256
  Partials        6        6

Flags with carried forward coverage won't be shown.
❤️ Great PR @bbbugg ❤️ The growth of the project is inseparable from user feedback and contributions, thanks for your contribution! If you are interested in the lobehub developer community, please join our Discord and then DM @arvinxx or @canisminor1990. They will invite you to our private developer channel, where we discuss lobe-chat development and share AI newsletters from around the world.
### [Version 1.118.3](v1.118.2...v1.118.3) <sup>Released on **2025-08-29**</sup>

#### 🐛 Bug Fixes

- **misc**: Correct totalOutputTokens calculation for XAI provider.

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### What's fixed

* **misc**: Correct totalOutputTokens calculation for XAI provider, closes [#8984](#8984) ([09ce90a](09ce90a))

</details>
🎉 This PR is included in version 1.118.3 🎉 The release is available on: Your semantic-release bot 📦🚀
### [Version 1.119.1](v1.119.0...v1.119.1) <sup>Released on **2025-08-30**</sup>

#### ♻ Code Refactoring

- **misc**: Refactor the `model-bank` package from `src/config/aiModels`.

#### 🐛 Bug Fixes

- **misc**: Correct totalOutputTokens calculation for XAI provider.

#### 💄 Styles

- **misc**: Add Grok Code Fast 1 model, fix chat session part switch theme issue, fix clerk scrollBox style, ModelFetcher support getting prices, support non-stream mode, update DeepSeek V3.1 & Gemini 2.5 Flash Image Preview models, update i18n.

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### Code refactoring

* **misc**: Refactor the `model-bank` package from `src/config/aiModels`, closes [lobehub#8983](https://github.com/jaworldwideorg/OneJA-Bot/issues/8983) ([c65eb09](c65eb09))

#### What's fixed

* **misc**: Correct totalOutputTokens calculation for XAI provider, closes [lobehub#8984](https://github.com/jaworldwideorg/OneJA-Bot/issues/8984) ([09ce90a](09ce90a))

#### Styles

* **misc**: Add Grok Code Fast 1 model, closes [lobehub#8982](https://github.com/jaworldwideorg/OneJA-Bot/issues/8982) ([dbcec3d](dbcec3d))
* **misc**: Fix chat session part switch theme issue, closes [lobehub#8987](https://github.com/jaworldwideorg/OneJA-Bot/issues/8987) ([b7111be](b7111be))
* **misc**: Fix clerk scrollBox style, closes [lobehub#8989](https://github.com/jaworldwideorg/OneJA-Bot/issues/8989) ([b25b5a0](b25b5a0))
* **misc**: ModelFetcher support getting prices, closes [lobehub#8985](https://github.com/jaworldwideorg/OneJA-Bot/issues/8985) ([58b73ec](58b73ec))
* **misc**: Support non-stream mode, closes [lobehub#8751](https://github.com/jaworldwideorg/OneJA-Bot/issues/8751) ([ce623bb](ce623bb))
* **misc**: Update DeepSeek V3.1 & Gemini 2.5 Flash Image Preview models, closes [lobehub#8878](https://github.com/jaworldwideorg/OneJA-Bot/issues/8878) ([5d538a2](5d538a2))
* **misc**: Update i18n, closes [lobehub#8990](https://github.com/jaworldwideorg/OneJA-Bot/issues/8990) ([136bc5a](136bc5a))

</details>
💻 Change Type

🔀 Description of Change

Background

Changes

Before the change (screenshot omitted): the output and total token consumption did not include the deep-thinking (reasoning) tokens.

After the change (screenshot omitted): the output and total token consumption now include the deep-thinking (reasoning) tokens.

📝 Additional Information

Token billing explanation
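To make the before/after concrete, here is a hypothetical usage payload with made-up numbers and the output totals it would yield under the old and new logic. The field names follow the OpenAI-compatible usage shape; this is an illustration, not captured API output.

```ts
// Hypothetical xAI usage payload (numbers invented for illustration).
const usage = {
  completion_tokens: 80,                                 // visible answer tokens
  completion_tokens_details: { reasoning_tokens: 300 },  // deep-thinking tokens
  prompt_tokens: 120,
  total_tokens: 500,                                     // 120 + 80 + 300
};

// Before the fix: reasoning tokens were left out of the output total.
const totalOutputBefore = usage.completion_tokens; // 80

// After the fix: reasoning tokens are billed output, so they are counted as well.
const totalOutputAfter =
  usage.completion_tokens + (usage.completion_tokens_details?.reasoning_tokens ?? 0); // 380
```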
Summary by Sourcery
Include reasoning tokens when calculating total output tokens for the xAI provider and update the related tests.
Bug Fixes:

- Include reasoning tokens when computing totalOutputTokens for the xAI provider.

Tests:

- Update the usageConverter tests to cover the new calculation.
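On the test side, here is a sketch of the kind of case the updated tests could cover, assuming a vitest setup and a converter like the illustrative one sketched earlier in this thread; the real test file, helper names, and expected result shape may differ.

```ts
import { describe, expect, it } from 'vitest';

// Assumes the illustrative convertXaiUsage helper from the sketch above.
describe('xAI usage conversion', () => {
  it('adds reasoning tokens to totalOutputTokens', () => {
    const result = convertXaiUsage({
      completion_tokens: 80,
      completion_tokens_details: { reasoning_tokens: 300 },
      prompt_tokens: 120,
      total_tokens: 500,
    });

    expect(result.totalOutputTokens).toBe(380); // 80 + 300
    expect(result.totalTokens).toBe(500);       // 120 + 380
  });

  it('falls back to completion_tokens when no reasoning tokens are reported', () => {
    const result = convertXaiUsage({
      completion_tokens: 80,
      prompt_tokens: 120,
      total_tokens: 200,
    });

    expect(result.totalOutputTokens).toBe(80);
  });
});
```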