fix: prevent consecutive messages with same role #6544
Conversation
Caution
Changes requested ❌
Reviewed everything up to c07abc6 in 1 minute and 55 seconds.

- Reviewed 229 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. web-app/src/lib/messages.ts:85
- Draft comment:
In addUserMessage(), the logic pops a duplicate 'user' message instead of inserting a fallback. Verify that handling consecutive user messages differently from consecutive assistant messages is intentional.
- Reason this comment was not posted:
Comment was not on a location in the diff, so it can't be submitted as a review comment.
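For context, a minimal sketch of the asymmetry this comment describes, assuming a plain message array (the types and exact logic are illustrative, not the actual `messages.ts` implementation):

```ts
type Role = 'system' | 'user' | 'assistant' | 'tool'

interface ChatMessage {
  role: Role
  content: string
}

// Consecutive user messages are collapsed by popping the stale one,
// rather than bridged with an assistant filler as elsewhere.
function addUserMessage(messages: ChatMessage[], content: string): void {
  const last = messages[messages.length - 1]
  if (last?.role === 'user') {
    messages.pop() // drop the previous user message instead of inserting a fallback
  }
  messages.push({ role: 'user', content })
}
```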
2. web-app/src/hooks/useChat.ts:87
- Draft comment:
Directly accessing state via getState() (e.g. useAssistant.getState()) inside callbacks is acceptable with Zustand, but ensure this pattern is intentional to always get fresh state rather than relying on hook subscriptions.
- Reason this comment was not posted:
Confidence changes required: 30% <= threshold 50%
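As a sketch of the pattern in question (store shape assumed for illustration): reading a Zustand store via `getState()` inside a callback always yields the current snapshot, whereas the hook form only updates on re-render.

```ts
import { create } from 'zustand'

interface AssistantState {
  instructions: string
  setInstructions: (value: string) => void
}

const useAssistant = create<AssistantState>()((set) => ({
  instructions: '',
  setInstructions: (value) => set({ instructions: value }),
}))

// Inside an event callback, getState() reads the freshest value even if
// the enclosing component has not re-rendered since the store last changed.
function onSend(prompt: string) {
  const { instructions } = useAssistant.getState()
  console.log(`system: ${instructions} | user: ${prompt}`)
}
```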
Workflow ID: wflow_MOkTinqRvrdLDi2j
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Important
Looks good to me! 👍
Reviewed c613752 in 1 minute and 52 seconds.

- Reviewed 619 lines of code in 5 files
- Skipped 0 files when reviewing
- Skipped posting 5 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. web-app/src/components/ui/__tests__/sheet.test.tsx:55
- Draft comment:
Added in SheetContent tests to improve component coverage. Consider adding style assertions if needed.
- Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. The comment is purely informative, mentioning the addition of in tests. It suggests considering style assertions, but this is not a specific request or suggestion. It doesn't ask for a specific test to be added or a specific change to be made, which would make it actionable.
2. web-app/src/containers/__tests__/ChatInput.test.tsx:191
- Draft comment:
Wrapping render and event calls in async act() improves test reliability and handles asynchronous behavior correctly.
- Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative, explaining the benefits of using async act() in tests. It doesn't provide a specific suggestion or ask for a change in the code.
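A minimal sketch of that testing pattern (the component here is a trivial stand-in, not the real ChatInput):

```tsx
import React, { useState } from 'react'
import { render, fireEvent, act, screen } from '@testing-library/react'

// Trivial stand-in component with a state update triggered by a click.
function ChatInput() {
  const [sent, setSent] = useState(false)
  return <button onClick={() => setSent(true)}>{sent ? 'sent' : 'send'}</button>
}

it('flushes async updates when wrapped in act()', async () => {
  // Wrapping render and events in async act() lets React flush effects and
  // pending microtasks before assertions run, avoiding act() warnings.
  await act(async () => {
    render(<ChatInput />)
  })
  await act(async () => {
    fireEvent.click(screen.getByText('send'))
  })
  expect(screen.getByText('sent')).toBeTruthy()
})
```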
3. web-app/src/hooks/__tests__/useChat.instructions.test.ts:173
- Draft comment:
The try-catch wrapper around the async act call aids in logging errors during message processing. Consider asserting on errors if applicable for stricter test validation.
- Reason this comment was not posted:
Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%. The test's primary purpose is to verify date replacement in instructions. The try-catch seems to be added for debugging purposes, not as a core part of the test assertions. Adding error assertions would shift focus from the main test purpose and could make the test more brittle. The current error logging is sufficient for debugging. The comment has a point that error handling could be more robust; silently catching and logging errors might hide important test failures. However, this test is specifically about instruction rendering. Adding error assertions would be scope creep and could make the test less maintainable. The comment should be removed as it suggests adding complexity that's not aligned with the test's core purpose of verifying instruction rendering.
4. web-app/src/hooks/__tests__/useChat.test.ts:59
- Draft comment:
Refactoring the useModelProvider mock with Object.assign to include getState ensures consistency across tests, which is a good improvement.
- Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative and does not provide any actionable feedback or suggestions for improvement. It simply states that a change is a good improvement, which violates the rule against making purely informative comments.
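A sketch of the mock shape being described, assuming Vitest and a hypothetical provider state:

```ts
import { vi } from 'vitest'

// Hypothetical store snapshot used by the tests.
const providerState = { selectedProvider: 'llamacpp' }

// Object.assign makes the mock usable both as a hook call during render
// and via getState() inside callbacks, matching the Zustand store surface.
const useModelProvider = Object.assign(
  vi.fn(() => providerState),
  { getState: () => providerState }
)
```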
5. web-app/src/lib/__tests__/messages.test.ts:64
- Draft comment:
Updated CompletionMessagesBuilder tests now account for filler messages inserted between consecutive messages with the same role. The revised expected array lengths and role orders (with a filler message having role 'assistant' and content '.') reflect the intended alternating behavior. Verify consistency in filler message content for edge cases.
- Reason this comment was not posted:
Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative, explaining what changes were made to the tests and what they now account for. It does not provide a specific suggestion or ask for a specific confirmation about the code. It also asks to verify consistency, which is against the rules.
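In test form, the expectation being described looks roughly like this (message contents invented for illustration):

```ts
// Hypothetical builder output for two consecutive user turns: an assistant
// filler with content '.' is expected between them.
const result = [
  { role: 'user', content: 'first question' },
  { role: 'assistant', content: '.' }, // inserted filler
  { role: 'user', content: 'second question' },
]

expect(result).toHaveLength(3)
expect(result.map((m) => m.role)).toEqual(['user', 'assistant', 'user'])
expect(result[1].content).toBe('.')
```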
Workflow ID: wflow_yOTuKxd7QRjKZfKn
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Barecheck - Code coverage report

Total: 30.56%
Your code coverage diff: -0.19% ▾

Uncovered files and lines
Important
Looks good to me! 👍
Reviewed 8af250a in 1 minute and 30 seconds.

- Reviewed 50 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. web-app/src/lib/messages.ts:169
- Draft comment:
The block handling the first message forces a fallback user message if the initial message isn't 'user' or 'system'. Please confirm that inserting a fallback ('.') for non-user/system first messages (e.g. 'assistant') aligns with the desired conversation flow. Also, consider adding a comment to clarify this edge-case behavior.
- Reason this comment was not posted:
Confidence changes required: 33% <= threshold 50%
2. web-app/src/lib/messages.ts:194
- Draft comment:
When consecutive messages share the same role, a fallback message with the opposite role and content '.' is inserted. Note that the logic uses a ternary operator (prevRole === 'assistant' ? 'user' : 'assistant'), which assumes only 'user' and 'assistant' roles. If additional roles (e.g. 'system') occur later, this may behave unexpectedly. Please consider documenting the fallback content and logic.
- Reason this comment was not posted:
Confidence changes required: 33% <= threshold 50%
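Taken together, the two comments above describe logic roughly like the following sketch (a simplified reconstruction, not the exact code in `messages.ts`):

```ts
type Role = 'system' | 'user' | 'assistant'

interface Message {
  role: Role
  content: string
}

function push(out: Message[], msg: Message): void {
  // First message must be 'user' or 'system'; otherwise open with a
  // fallback user message so the conversation never starts with assistant.
  if (out.length === 0 && msg.role !== 'user' && msg.role !== 'system') {
    out.push({ role: 'user', content: '.' })
  }
  // Bridge consecutive same-role messages with an opposite-role filler.
  // The ternary only distinguishes 'assistant' from everything else, which
  // is the caveat raised above about roles beyond user/assistant.
  const prevRole = out[out.length - 1]?.role
  if (prevRole === msg.role) {
    out.push({
      role: prevRole === 'assistant' ? 'user' : 'assistant',
      content: '.',
    })
  }
  out.push(msg)
}
```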
Workflow ID: wflow_z0Lfza3dZDzpN7NH
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Important
Looks good to me! 👍
Reviewed a749b05 in 1 minute and 58 seconds.

- Reviewed 252 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 7 draft comments; view those below
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. web-app/src/lib/__tests__/messages.test.ts:47
- Draft comment:
The system instruction test now correctly expects a filler message (user role with '.') immediately after the system message, ensuring role alternation.
- Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%
2. web-app/src/lib/__tests__/messages.test.ts:68
- Draft comment:
The 'should filter out messages with errors' test now expects three messages: the valid first user message, an inserted filler (assistant role), and the second user message, which properly prevents consecutive same-role messages.
- Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%
3. web-app/src/lib/__tests__/messages.test.ts:177
- Draft comment:
The assistant message test correctly inserts a filler user message before adding the normalized assistant message, ensuring proper alternating roles.
- Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%
4. web-app/src/lib/__tests__/messages.test.ts:305
- Draft comment:
The multiple tool messages test now expects four messages with appropriate filler messages inserted between consecutive tool messages, which ensures alternation (initial filler 'user' and filler 'assistant' between tool messages).
- Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%
5. web-app/src/lib/__tests__/messages.test.ts:348
- Draft comment:
Normalization tests (e.g. 'should remove thinking content from the beginning') have been adjusted to check index 1 for the assistant message, accounting for the inserted filler message at index 0.
- Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%
6. web-app/src/lib/__tests__/messages.test.ts:532
- Draft comment:
The integration test for complex conversation flow now verifies a sequence of 7 messages with filler messages correctly inserted to alternate roles, reflecting complex interactions properly.
- Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%
7. web-app/src/lib/__tests__/messages.test.ts:549
- Draft comment:
The test for empty thread messages with a system instruction now correctly expects a filler user message following the system message.
- Reason this comment was not posted:
Confidence changes required: 0% <= threshold 50%
Workflow ID: wflow_ZYJg7XJC7F06UO8N
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
* fix: avoid error validate nested dom
* fix: correct context shift flag handling in LlamaCPP extension (#6404) (#6431)
  * fix: correct context shift flag handling in LlamaCPP extension
    The previous implementation added the `--no-context-shift` flag when `cfg.ctx_shift` was disabled, which conflicted with the llama.cpp CLI, where the presence of `--context-shift` enables the feature. The logic is updated to push `--context-shift` only when `cfg.ctx_shift` is true, ensuring the extension passes the correct argument and behaves as expected.
  * feat: detect model out of context during generation
  Co-authored-by: Dinh Long Nguyen <[email protected]>
* chore: add install-rust-targets step for macOS universal builds
* fix: make install-rust-targets a dependency
* enhancement: copy MCP permission
* chore: make action button capitalize
* Update web-app/src/locales/en/tool-approval.json
  Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* chore: simplify macos workflow
* fix: KVCache size calculation and refactor (#6438)
  - Removed the unused `getKVCachePerToken` helper and replaced it with a unified `estimateKVCache` that returns both total size and per-token size.
  - Fixed the KV cache size calculation to account for all layers, correcting previous under-estimation.
  - Added proper clamping of user-requested context lengths to the model's maximum.
  - Refactored VRAM budgeting: introduced explicit reserves, fixed engine overhead, and separate multipliers for VRAM and system RAM based on memory mode.
  - Implemented a more robust planning flow with clear GPU, Hybrid, and CPU pathways, including fallback configurations when resources are insufficient.
  - Updated default context length handling and safety buffers to prevent OOM situations.
  - Adjusted usable memory percentage to 90% and refined logging for easier debugging.
* fix: detect allocation failures as out-of-memory errors (#6459)
  The Llama.cpp backend can emit the phrase "failed to allocate" when it runs out of memory. Adding this check ensures such messages are correctly classified as out-of-memory errors, providing more accurate error handling on CPU backends.
* fix: pathname file install BE
* fix: set default memory mode and clean up unused import (#6463)
  Use fallback value 'high' for memory_util config and remove unused GgufMetadata import.
* fix: auto update should not block popup
* fix: remove log
* fix: improve edit message with attachment image
* fix: improve edit message with attachment image
* fix: type imageurl
* fix: immediate dropdown value update
* fix: linter
* fix/validate-mmproj-from-general-basename
* fix/revalidate-model-gguf
* fix: loader when importing
* fix/mcp-json-validation
* chore: update locale mcp json
* fix: new extension settings aren't populated properly (#6476)
* chore: embed webview2 bootstrapper in tauri windows
* fix: validate type mcp json
* chore: prevent click outside for edit dialog
* feat: add qa checklist
* chore: remove old checklist
* chore: correct typo in checklist
* fix: correct memory suitability checks in llamacpp extension (#6504)
  The previous implementation mixed model size and VRAM checks, leading to inaccurate status reporting (e.g., false RED results).
  - Simplified import statement for `readGgufMetadata`.
  - Fixed RAM/VRAM comparison by removing unnecessary parentheses.
  - Replaced ambiguous `modelSize > usableTotalMemory` check with a clear `totalRequired > usableTotalMemory` hard-limit condition.
  - Refactored the status logic to explicitly handle the CPU-GPU hybrid scenario, returning **YELLOW** when the total requirement fits combined memory but exceeds VRAM.
  - Updated comments for better readability and maintenance.
* fix: thread rerender issue
* chore: clean up console log
* chore: uncomment irrelevant fix
* fix: linter
* chore: remove duplicated block
* fix: tests
* Merge pull request #6469 from menloresearch/fix/deeplink-not-work-on-windows
  fix: deeplink issue on Windows
* fix: reduce unnecessary rerender due to current thread retrieval
* fix: reduce app layout rerender due to router state update
* fix: avoid the entire app layout re-render on route change
* clean: unused import
* Merge pull request #6514 from menloresearch/feat/web-gtag
  feat: Add GA Measurement and change keyboard bindings on web
* chore: update build tauri commands
* chore: remove unused task
* fix: should not rerender thread message components when typing
* fix re-render issue
* direct tokenspeed access
* chore: sync latest
* feat: Add Jan API server Swagger UI (#6502)
  * feat: Add Jan API server Swagger UI
    - Serve OpenAPI spec (`static/openapi.json`) directly from the proxy server.
    - Implement Swagger UI assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`) and a simple HTML wrapper under `/docs`.
    - Extend the proxy whitelist to include Swagger UI routes.
    - Add routing logic for `/openapi.json`, `/docs`, and Swagger UI static files.
    - Update whitelisted paths and integrate CORS handling for the new endpoints.
  * feat: serve Swagger UI at root path
    The Swagger UI endpoint previously lived under `/docs`. The route handling and exclusion list have been updated so the UI is now served directly at `/`. This simplifies access, aligns with the expected root URL in the Tauri frontend, and removes the now-unused `/docs` path handling.
  * feat: add model loading state and translations for local API server
    Implemented a loading indicator for model startup, updated the start/stop button to reflect model loading and server starting states, and disabled interactions while pending. Added new translation keys (`loadingModel`, `startingServer`) across all supported locales (en, de, id, pl, vn, zh-CN, zh-TW) and integrated them into the UI. Included a small delay after model start to ensure backend state consistency. This improves user feedback and prevents race conditions during server initialization.
  * fix: tests
  * fix: linter
  * fix: build
* docs: update changelog for v0.6.10
* fix(number-input): preserve '0.0x' format when typing (#6520)
* docs: update url for gifs and videos
* chore: update url for jan-v1 docs
* fix: Typo in openapi JSON (#6528)
* enhancement: toaster delete mcp server
* Update 2025-09-18-auto-optimize-vision-imports.mdx
* Merge pull request #6475 from menloresearch/feat/bump-tokenjs
  feat: fix remote provider vision capability
* fix: prevent consecutive messages with same role (#6544)
  * fix: prevent consecutive messages with same role
  * fix: tests
  * fix: first message should not be assistant
  * fix: tests
* feat: Prompt progress when streaming (#6503)
  * feat: Prompt progress when streaming
    - BE changes:
      - Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts.
    - Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real-time generation progress and leverage llama.cpp's built-in progress reporting.
  * Make return_progress optional
  * chore: update ui prompt progress before streaming content
  * chore: remove log
  * chore: remove progress when percentage >= 100
  * chore: set timeout prompt progress
  * chore: move prompt progress outside streaming content
  * fix: tests
  Co-authored-by: Faisal Amir <[email protected]>
  Co-authored-by: Louis <[email protected]>
* chore: add ci for web stag (#6550)
* feat: add getTokensCount method to compute token usage (#6467)
  * feat: add getTokensCount method to compute token usage
    Implemented a new async `getTokensCount` function in the LLaMA.cpp extension. The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.
  * Fix: typos
  * chore: update ui token usage
  * chore: remove unused code
  * feat: add image token handling for multimodal LlamaCPP models
    Implemented support for counting image tokens when using vision-enabled models:
    - Extended `SessionInfo` with optional `mmprojPath` to store the multimodal project file.
    - Propagated `mmproj_path` from the Tauri plugin into the session info.
    - Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
      - Detects image content in messages.
      - Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
      - Provides a fallback estimation if metadata reading fails.
      - Returns the sum of text and image tokens.
    - Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
    - Minor clean-ups such as comment capitalization and debug logging.
  * chore: update FE send params message include content type image_url
  * fix mmproj path from session info and num tokens calculation
  * fix: Correct image token estimation calculation in llamacpp extension
    This commit addresses an inaccurate token count for images in the llama.cpp extension. The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the clip.vision.projection_dim value from the model metadata. Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.
  * fix per image calc
  * fix: crash due to force unwrap
  Co-authored-by: Faisal Amir <[email protected]>
  Co-authored-by: Louis <[email protected]>
* fix: custom fetch for all providers (#6538)
  * fix: custom fetch for all providers
  * fix: run in development should use built-in fetch
* add full-width model names (#6350)
* fix: prevent relocation to root directories (#6547)
  * fix: prevent relocation to root directories
  * Update web-app/src/locales/zh-TW/settings.json
  Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* feat: web remote conversation (#6554)
  * feat: implement conversation endpoint
  * use conversation aware endpoint
  * fetch message correctly
  * preserve first message
  * fix logout
  * fix broadcast issue locally + auth not refreshing profile on other tabs + clean up and sync messages
  * add is dev tag

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Akarshan Biswas <[email protected]>
Co-authored-by: Minh141120 <[email protected]>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Co-authored-by: Louis <[email protected]>
Co-authored-by: Bui Quang Huy <[email protected]>
Co-authored-by: Roushan Singh <[email protected]>
Co-authored-by: hiento09 <[email protected]>
Co-authored-by: Alexey Haidamaka <[email protected]>



Describe Your Changes
This PR fixes an issue where two consecutive messages with the same role (user or assistant) could appear in the conversation history.
Changes
Fixes Issues
Self Checklist
Important
Fixes consecutive message role issue by inserting fallback messages to ensure alternation between user and assistant roles in `CompletionMessagesBuilder`.

- In `CompletionMessagesBuilder` in `messages.ts`, insert a fallback message ('.') with the opposite role when two consecutive messages share the same role.
- Updated `messages.test.ts` to verify insertion of fallback messages between consecutive messages of the same role.
- Updated `useChat.test.ts` and `useChat.instructions.test.ts` to align with the new message alternation behavior.
- Updated `ChatInput.test.tsx` and `sheet.test.tsx` to ensure consistent behavior with the new message handling.

This description was created by Ellipsis for a749b05. You can customize this summary. It will automatically update as commits are pushed.
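To make the fix concrete, a hypothetical before/after of the payload the builder produces (message contents invented; the '.' entries are the fallback messages described above):

```ts
const before = [
  { role: 'user', content: 'hi' },
  { role: 'user', content: 'still there?' },
  { role: 'assistant', content: 'yes' },
  { role: 'assistant', content: 'how can I help?' },
]

// After building, roles strictly alternate; '.' marks inserted fallbacks.
const after = [
  { role: 'user', content: 'hi' },
  { role: 'assistant', content: '.' }, // inserted fallback
  { role: 'user', content: 'still there?' },
  { role: 'assistant', content: 'yes' },
  { role: 'user', content: '.' }, // inserted fallback
  { role: 'assistant', content: 'how can I help?' },
]
```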