Problem
The token count displays 0% when the model is not loaded. This provides no useful information to users and creates a poor user experience, especially when users are composing messages in a thread.
Steps to Reproduce
- Open a thread without loading the model
- Start typing a message
- Observe the token count indicator shows 0%
Expected Behavior
One of the following should occur:
- Model should be automatically loaded when a thread is opened, OR
- Token count should show an estimated value based on a default model's context size, OR
- Display a clear indicator that the model needs to be loaded for accurate token counting
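The fallback behaviors above could be combined: estimate against a default context size when no model is loaded, and label the value as an estimate. A minimal sketch, assuming hypothetical names (`tokenStatus`, `estimateTokens`, `DEFAULT_CONTEXT_SIZE` are illustrative, not the project's actual API):

```typescript
const DEFAULT_CONTEXT_SIZE = 4096; // assumed fallback context window

// Rough heuristic: ~4 characters per token for English text.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

interface TokenStatus {
  label: string;      // what the indicator should display
  estimated: boolean; // true when no model is loaded
}

function tokenStatus(text: string, contextSize: number | null): TokenStatus {
  if (contextSize === null) {
    // Model not loaded: show an estimate against a default context
    // size instead of a meaningless 0%.
    const pct = Math.min(100, (estimateTokens(text) / DEFAULT_CONTEXT_SIZE) * 100);
    return { label: `~${pct.toFixed(1)}% (estimate, model not loaded)`, estimated: true };
  }
  const pct = Math.min(100, (estimateTokens(text) / contextSize) * 100);
  return { label: `${pct.toFixed(1)}%`, estimated: false };
}
```

Either way, the user sees a nonzero, honestly labeled figure instead of a flat 0%.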
Actual Behavior
Token count shows 0%, providing no useful feedback to users about their message length relative to context limits.
Proposed Solution
Implement automatic model loading when a thread is opened. This would:
- Provide immediate, accurate token count feedback
- Improve user experience by eliminating the manual model-loading step
- Ensure users are aware of context limits before composing long messages
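One way to structure the proposed auto-load is as an idempotent side effect of opening a thread, so repeated opens don't trigger duplicate loads. A sketch under assumed names (`ModelManager`, `openThread`, and `Thread` are illustrative, not the project's real API):

```typescript
interface Thread {
  id: string;
  modelId: string;
}

type ModelState = "unloaded" | "loading" | "loaded";

class ModelManager {
  private states = new Map<string, ModelState>();

  state(modelId: string): ModelState {
    return this.states.get(modelId) ?? "unloaded";
  }

  async load(modelId: string): Promise<void> {
    // Idempotent: skip if already loading or loaded.
    if (this.state(modelId) !== "unloaded") return;
    this.states.set(modelId, "loading");
    // ...actual model loading would happen here...
    this.states.set(modelId, "loaded");
  }
}

// Kick off loading as a side effect of opening a thread, so the token
// counter has a real context size by the time the user starts typing.
async function openThread(thread: Thread, models: ModelManager): Promise<void> {
  await models.load(thread.modelId);
}
```

Making `load` idempotent keeps the auto-load safe to call from multiple UI paths (thread open, model switch, manual reload).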
Impact
- Users cannot gauge message length against context limits
- Reduced usability when composing messages
- Requires manual model loading for basic functionality
🤖 Generated with Claude Code
Co-Authored-By: Claude [email protected]
Metadata
Labels: none
Status: Eng Planning