
Conversation

Contributor

@bbbugg bbbugg commented Aug 29, 2025

💻 Change Type

  • ✨ feat
  • 🐛 fix
  • ♻️ refactor
  • 💄 style
  • 👷 build
  • ⚡️ perf
  • ✅ test
  • 📝 docs
  • 🔨 chore

🔀 Description of Change

Background

  • Problem: the completion_tokens returned by Grok (xAI) does not include reasoning_tokens, so the total output cost in billing (totalOutputCredit = totalOutputTokens × output unit price) leaves out thinking tokens and understates the total consumption (see the arithmetic sketch after this list).
  • Symptom: the Usage breakdown in the UI does show a "Deep thinking" line item, but the "Total consumption" does not include reasoning, which is not the expected behavior.
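To make the billing impact concrete, here is an illustrative calculation. The token counts and the unit price are assumed values for demonstration only, not xAI's actual pricing:

```ts
// Illustrative only: the token counts and unit price below are assumptions, not real xAI pricing.
const completionTokens = 66; // xAI completion_tokens (excludes reasoning)
const reasoningTokens = 381; // reported separately in completion_tokens_details
const outputPricePerToken = 15 / 1_000_000; // hypothetical $15 per 1M output tokens

// Before the fix: reasoning tokens were never billed.
const creditWithoutReasoning = completionTokens * outputPricePerToken; // ≈ $0.00099

// After the fix: totalOutputTokens includes reasoning, so the credit reflects actual usage.
const creditWithReasoning = (completionTokens + reasoningTokens) * outputPricePerToken; // ≈ $0.0067
```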

Changes

  • In convertUsage in packages/model-runtime/src/utils/usageConverter.ts, add a normalization step for provider === 'xai' (see the sketch after this list):
    • totalOutputTokensNormalized = completion_tokens + reasoning_tokens (xAI only)
    • The outputTextTokens calculation stays the same: for xAI, reasoning tokens are still not subtracted (xAI's completion_tokens itself does not contain reasoning); only audio tokens are subtracted.
    • In the returned data, totalOutputTokens is replaced by totalOutputTokensNormalized; all other fields are unchanged.
  • Test updates
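A minimal sketch of the normalization described above, assuming simplified types. The interfaces and the helper name normalizeOutputTokens are stand-ins for illustration; the real convertUsage in usageConverter.ts handles many more usage fields.

```ts
// Simplified stand-ins for the OpenAI-style usage payload; the real types are richer.
interface SimplifiedUsage {
  completion_tokens: number;
  completion_tokens_details?: { audio_tokens?: number; reasoning_tokens?: number };
}

interface SimplifiedOutputUsage {
  outputTextTokens: number;
  totalOutputTokens: number;
}

const normalizeOutputTokens = (
  usage: SimplifiedUsage,
  provider?: string,
): SimplifiedOutputUsage => {
  const completionTokens = usage.completion_tokens ?? 0;
  const reasoningTokens = usage.completion_tokens_details?.reasoning_tokens ?? 0;
  const audioTokens = usage.completion_tokens_details?.audio_tokens ?? 0;

  // xAI's completion_tokens excludes reasoning tokens, so add them back in for the total;
  // other providers already report reasoning inside completion_tokens.
  const totalOutputTokens =
    provider === 'xai' ? completionTokens + reasoningTokens : completionTokens;

  // Text tokens: for xAI only audio is subtracted (reasoning was never included);
  // for other providers both reasoning and audio are subtracted.
  const outputTextTokens =
    provider === 'xai'
      ? completionTokens - audioTokens
      : completionTokens - reasoningTokens - audioTokens;

  return { outputTextTokens, totalOutputTokens };
};
```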

Before the change, the total consumption in the output does not include deep thinking:
(screenshot)

After the change, the total consumption in the output includes deep thinking:
(screenshot)

📝 Additional Information

Explanation of how tokens are billed

Summary by Sourcery

Include reasoning tokens when calculating total output tokens for xAI provider and update related tests.

Bug Fixes:

  • Add reasoning_tokens to totalOutputTokens for xAI in usageConverter to correctly reflect costs.

Tests:

  • Update convertUsage tests to expect normalized totalOutputTokens including reasoning tokens for xAI.


vercel bot commented Aug 29, 2025

@bbbugg is attempting to deploy a commit to the LobeHub OSS Team on Vercel.

A member of the Team first needs to authorize it.

Contributor

sourcery-ai bot commented Aug 29, 2025


Reviewer's Guide

Corrects the total output token count for the xAI provider by including reasoning tokens in the calculation, adjusting the usageConverter logic and its corresponding tests.

File-Level Changes

| Change | Details | Files |
| --- | --- | --- |
| Introduce normalized totalOutputTokens for xAI by adding reasoning tokens | Define totalOutputTokensNormalized to sum completion_tokens and reasoning_tokens when provider is xAI.<br>Use totalOutputTokensNormalized in the returned usage instead of raw totalOutputTokens. | packages/model-runtime/src/utils/usageConverter.ts |
| Update test expectations for xAI token calculation | Change expected totalOutputTokens from 66 to 447 in the xAI test case (see the test sketch below).<br>Document that reasoning_tokens are priced the same as completion_tokens. | packages/model-runtime/src/utils/usageConverter.test.ts |
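A hedged vitest sketch of what the updated expectation might look like. It reuses the hypothetical normalizeOutputTokens helper from the sketch above, and the 381 reasoning tokens are inferred from the 66 → 447 change rather than taken from the actual test fixture.

```ts
import { describe, expect, it } from 'vitest';

describe('xAI output token normalization', () => {
  it('adds reasoning tokens into totalOutputTokens for xai', () => {
    const usage = {
      completion_tokens: 66,
      completion_tokens_details: { reasoning_tokens: 381 },
    };

    const result = normalizeOutputTokens(usage, 'xai');

    // reasoning_tokens are priced the same as completion_tokens, so they count toward the total
    expect(result.totalOutputTokens).toBe(447); // 66 + 381
    expect(result.outputTextTokens).toBe(66); // reasoning is not subtracted for xAI
  });
});
```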


@dosubot dosubot bot added the size:XS This PR changes 0-9 lines, ignoring generated files. label Aug 29, 2025
@lobehubbot
Member

👍 @bbbugg

Thank you for raising your pull request and contributing to our community.
Please make sure you have followed our contributing guidelines. We will review it as soon as possible.
If you encounter any problems, please feel free to connect with us.

Contributor

gru-agent bot commented Aug 29, 2025

TestGru Assignment

Summary

| Link | CommitId | Status | Reason |
| --- | --- | --- | --- |
| Detail | 9c3e147 | ✅ Finished | |

History Assignment

Files

| File | Pull Request |
| --- | --- |
| packages/model-runtime/src/utils/usageConverter.ts | ❌ Failed (I failed to setup the environment.) |

Tip

You can @gru-agent and leave your feedback. TestGru will make adjustments based on your input.

@dosubot dosubot bot added the 🐛 Bug label Aug 29, 2025
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes and they look great!




codecov bot commented Aug 29, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 84.04%. Comparing base (85f9ca5) to head (9c3e147).
⚠️ Report is 7 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff            @@
##             main    #8984     +/-   ##
=========================================
  Coverage   84.04%   84.04%             
=========================================
  Files         870      870             
  Lines       70571    70594     +23     
  Branches     4889     6503   +1614     
=========================================
+ Hits        59309    59332     +23     
  Misses      11256    11256             
  Partials        6        6             
| Flag | Coverage Δ |
| --- | --- |
| app | 85.99% <ø> (+<0.01%) ⬆️ |
| database | 96.26% <ø> (ø) |
| packages/electron-server-ipc | 74.61% <ø> (ø) |
| packages/file-loaders | 83.59% <ø> (ø) |
| packages/model-runtime | 74.21% <100.00%> (+<0.01%) ⬆️ |
| packages/prompts | 100.00% <ø> (ø) |
| packages/utils | 61.07% <ø> (ø) |
| packages/web-crawler | 59.57% <ø> (ø) |

Flags with carried forward coverage won't be shown.

| Components | Coverage Δ |
| --- | --- |
| Store | 68.86% <ø> (ø) |
| Services | 61.95% <ø> (ø) |
| Server | 66.35% <ø> (ø) |
| Libs | 46.10% <ø> (ø) |
| Utils | 70.63% <ø> (ø) |

@arvinxx arvinxx merged commit 09ce90a into lobehub:main Aug 29, 2025
34 of 36 checks passed
@lobehubbot
Member

❤️ Great PR @bbbugg ❤️

The growth of a project is inseparable from user feedback and contributions, so thank you for your contribution! If you are interested in the LobeHub developer community, please join our Discord and then DM @arvinxx or @canisminor1990. They will invite you to our private developer channel, where we discuss lobe-chat development and share AI news from around the world.

lobehubbot pushed a commit that referenced this pull request Aug 29, 2025
### [Version&nbsp;1.118.3](v1.118.2...v1.118.3)
<sup>Released on **2025-08-29**</sup>

#### 🐛 Bug Fixes

- **misc**: Correct totalOutputTokens calculation for XAI provider.

<br/>

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### What's fixed

* **misc**: Correct totalOutputTokens calculation for XAI provider, closes [#8984](#8984) ([09ce90a](09ce90a))

</details>

<div align="right">

[![](https://img.shields.io/badge/-BACK_TO_TOP-151515?style=flat-square)](#readme-top)

</div>
@lobehubbot
Member

🎉 This PR is included in version 1.118.3 🎉

The release is available on:

Your semantic-release bot 📦🚀

JamieStivala pushed a commit to jaworldwideorg/OneJA-Bot that referenced this pull request Aug 30, 2025
### [Version&nbsp;1.119.1](v1.119.0...v1.119.1)
<sup>Released on **2025-08-30**</sup>

#### ♻ Code Refactoring

- **misc**: Refactor the `model-bank` package from `src/config/aiModels`.

#### 🐛 Bug Fixes

- **misc**: Correct totalOutputTokens calculation for XAI provider.

#### 💄 Styles

- **misc**: Add Grok Code Fast 1 model, fix chat session part switch theme issue, fix clerk scrollBox style, ModelFetcher support getting prices, support non-stream mode, update DeepSeek V3.1 & Gemini 2.5 Flash Image Preview models, update i18n.

<br/>

<details>
<summary><kbd>Improvements and Fixes</kbd></summary>

#### Code refactoring

* **misc**: Refactor the `model-bank` package from `src/config/aiModels`, closes [lobehub#8983](https://github.com/jaworldwideorg/OneJA-Bot/issues/8983) ([c65eb09](c65eb09))

#### What's fixed

* **misc**: Correct totalOutputTokens calculation for XAI provider, closes [lobehub#8984](https://github.com/jaworldwideorg/OneJA-Bot/issues/8984) ([09ce90a](09ce90a))

#### Styles

* **misc**: Add Grok Code Fast 1 model, closes [lobehub#8982](https://github.com/jaworldwideorg/OneJA-Bot/issues/8982) ([dbcec3d](dbcec3d))
* **misc**: Fix chat session part switch theme issue, closes [lobehub#8987](https://github.com/jaworldwideorg/OneJA-Bot/issues/8987) ([b7111be](b7111be))
* **misc**: Fix clerk scrollBox style, closes [lobehub#8989](https://github.com/jaworldwideorg/OneJA-Bot/issues/8989) ([b25b5a0](b25b5a0))
* **misc**: ModelFetcher support getting prices, closes [lobehub#8985](https://github.com/jaworldwideorg/OneJA-Bot/issues/8985) ([58b73ec](58b73ec))
* **misc**: Support non-stream mode, closes [lobehub#8751](https://github.com/jaworldwideorg/OneJA-Bot/issues/8751) ([ce623bb](ce623bb))
* **misc**: Update DeepSeek V3.1 & Gemini 2.5 Flash Image Preview models, closes [lobehub#8878](https://github.com/jaworldwideorg/OneJA-Bot/issues/8878) ([5d538a2](5d538a2))
* **misc**: Update i18n, closes [lobehub#8990](https://github.com/jaworldwideorg/OneJA-Bot/issues/8990) ([136bc5a](136bc5a))

</details>

<div align="right">

[![](https://img.shields.io/badge/-BACK_TO_TOP-151515?style=flat-square)](#readme-top)

</div>
@bbbugg bbbugg deleted the grok-OutputTokens branch September 1, 2025 02:37
