
Conversation

@louis-jan (Contributor) commented Sep 22, 2025

Describe Your Changes

This PR fixes chat/completions requests for other providers so that they avoid CORS preflight requests.

In this PR, I've also added a DEV flag so that requests made in development use the built-in fetch method for easier debugging.

Fixes Issues

Self Checklist

  • Added relevant comments, esp in complex areas
  • Updated docs (for bug fixes / features)
  • Created issues for follow-up changes or refactoring needed

Important

Fixes preflight requests for chat/completions by using a custom fetch method and adds a development flag for debugging.

  • Behavior:
    • In completion.ts, modified sendCompletion() to use the IS_DEV flag to choose between the built-in fetch and the custom fetch from getServiceHub().providers().fetch() (see the sketch after this list).
    • Added IS_DEV flag in vite.config.ts and global.d.ts for environment configuration.
  • Configuration:
    • Updated tauri.conf.json to include IS_DEV=true in beforeDevCommand.
    • Updated package.json to use token.js-fork@0.7.25.
  • Misc:
    • Minor formatting changes in completion.ts for better readability.
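
A minimal sketch of the selection logic summarized above, assuming `IS_DEV` is injected as a boolean at build time; the real `sendCompletion()` body is not shown in this thread, so the surrounding wiring is an assumption:

```ts
// Sketch only: pick the fetch implementation per the PR summary.
declare const IS_DEV: boolean // injected via Vite's define (see vite.config.ts)
declare function getServiceHub(): {
  providers(): { fetch(): typeof fetch }
}

const fetchImpl: typeof fetch = IS_DEV
  ? fetch // built-in fetch: easier to inspect in dev tools
  : getServiceHub().providers().fetch() // custom fetch that avoids CORS preflight
```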

This description was created by Ellipsis for eff85a2.

@ellipsis-dev bot left a comment

Important

Looks good to me! 👍

Reviewed everything up to cba7deb in 1 minute and 41 seconds.
  • Reviewed 99 lines of code in 4 files
  • Skipped 0 files when reviewing.
  • Skipped posting 5 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. src-tauri/tauri.conf.json:9
  • Draft comment:
    Addition of the IS_DEV flag is straightforward. Ensure that the consuming code interprets IS_DEV as a boolean (not a non-empty string) to avoid accidental truthiness.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%. The comment starts with "Ensure that..." which is a red flag per the rules. It's asking the author to verify something rather than pointing out a clear issue. We can't see the consuming code, so we don't know if there's actually a problem. The comment is speculative about potential issues rather than identifying a concrete problem. The comment raises a valid technical concern about type coercion in environment variables. Maybe there's a known issue in the codebase with boolean env vars. Without seeing the consuming code, we can't verify if this is actually an issue. The rules clearly state we should delete speculative comments and those asking for verification. Delete the comment. It violates multiple rules by being speculative and asking for verification, rather than pointing out a concrete issue.
2. web-app/src/lib/completion.ts:177
  • Draft comment:
    Using a ternary with IS_DEV may be risky if IS_DEV isn’t a true boolean. Consider ensuring that IS_DEV is parsed as a boolean to avoid always using the built-in fetch.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%. While the comment identifies a potential issue with boolean coercion, there's no evidence that IS_DEV is actually problematic. We don't see the definition of IS_DEV, so we can't be sure there's an issue. The suggestion is speculative without seeing how IS_DEV is defined. The current code would work fine as long as IS_DEV follows normal JavaScript truthy/falsy rules. I might be underestimating the importance of explicit boolean conversion. Implicit boolean coercion can sometimes lead to subtle bugs. While explicit boolean conversion is generally good practice, without seeing evidence that IS_DEV actually causes problems, this comment is speculative and violates our rule about not making speculative comments. Delete the comment because it makes a speculative suggestion without clear evidence of an actual problem.
3. web-app/src/lib/completion.ts:187
  • Draft comment:
    Overriding fetch for localhost URLs is appropriate to attach custom headers. The formatting change (removing quotes from 'Origin') is stylistic but acceptable.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50%. None.
4. web-app/src/types/global.d.ts:25
  • Draft comment:
    Adding a global declaration for IS_DEV enhances type safety. Ensure the injected value from the build is a proper boolean.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50%. None.
5. web-app/vite.config.ts:42
  • Draft comment:
    Defining IS_DEV using JSON.stringify may leave it as a string. Consider converting process.env.IS_DEV explicitly to a boolean to avoid truthiness issues (e.g., string 'false' is truthy).
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 20% vs. threshold = 50%. This is a Vite config file where define is used for compile-time constants. JSON.stringify is actually the recommended way to handle values in Vite's define option - it's mentioned in Vite's docs. The value will be inserted as-is into the code during build. Even if IS_DEV comes in as a string, JSON.stringify will preserve its string form which is likely intentional for the build process. I might be wrong about the implications of stringified environment variables in Vite's define section. There could be real runtime issues with string vs boolean types. However, this pattern is used consistently throughout the file for all boolean flags, including IS_TAURI and platform checks. If this was a real issue, it would affect all of these flags. The code appears to follow Vite's recommended patterns. The comment should be deleted. It suggests deviating from Vite's standard patterns without strong evidence that there's a real problem.
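
For reference, one way to sidestep the truthiness pitfall discussed in these draft comments is to coerce the env var to a boolean before stringifying. This is a hedged sketch, not the PR's actual vite.config.ts:

```ts
import { defineConfig } from 'vite'

export default defineConfig({
  define: {
    // Coerce first, then stringify: this injects the literal true/false,
    // never the quoted string '"false"', which would be truthy at runtime.
    IS_DEV: JSON.stringify(process.env.IS_DEV === 'true'),
  },
})
```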

Workflow ID: wflow_jiW6Ud43jUjIPD37

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

@louis-jan changed the title from "Fix/custom fetch for all providers" to "fix: custom fetch for all providers" on Sep 22, 2025
@github-actions bot commented Sep 22, 2025

Barecheck - Code coverage report

Total: 30.77%

Your code coverage diff: -0.01% ▾

Uncovered files and lines

web-app/vite.config.ts: lines 1-8, 11, 13, 15-60, 62, 64-73, 78, 80-91, 93-97
web-app/src/lib/completion.ts: lines 74-85, 156-164, 166, 168-169, 171-173, 175, 177, 180-185, 187-194, 196-206, 209-217, 219, 221-236, 238-259, 275-281, 332, 335-340, 342-345, 361-366, 371-372, 374-396, 399-422, 424-429, 431, 433-453, 455-464, 466-480, 482-485

@ellipsis-dev bot left a comment

Important

Looks good to me! 👍

Reviewed eff85a2 in 40 seconds.
  • Reviewed 13 lines of code in 1 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comments. View those below.
1. web-app/package.json:82
  • Draft comment:
    Bump of token.js-fork from 0.7.23 to 0.7.25 looks appropriate for the custom fetch fix. Ensure that the new version indeed includes the improvements for handling custom fetch (and avoids CORS preflight issues) and that no breaking changes affect other parts of the app.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. The comment is related to a dependency change, specifically a version bump of a library. It asks the PR author to ensure that the new version includes certain improvements and that no breaking changes affect other parts of the app. This falls under the rule of not asking the PR author to ensure behavior or test changes, which is not allowed.

Workflow ID: wflow_Dho98j4e2zG3rcv6


@louis-jan force-pushed the fix/custom-fetch-for-all-providers branch from eff85a2 to cba7deb on September 22, 2025 04:26
@louis-jan merged commit 568ee85 into dev on Sep 23, 2025
41 checks passed
@github-project-automation bot moved this to QA in Jan on Sep 23, 2025
@louis-jan deleted the fix/custom-fetch-for-all-providers branch on September 23, 2025 02:55
@github-actions bot added this to the v0.7.0 milestone on Sep 23, 2025
dinhlongviolin1 added a commit that referenced this pull request Sep 23, 2025
* fix: avoid error validate nested dom

* fix: correct context shift flag handling in LlamaCPP extension (#6404) (#6431)

* fix: correct context shift flag handling in LlamaCPP extension

The previous implementation added the `--no-context-shift` flag when `cfg.ctx_shift` was disabled, which conflicted with the llama.cpp CLI where the presence of `--context-shift` enables the feature.
The logic is updated to push `--context-shift` only when `cfg.ctx_shift` is true, ensuring the extension passes the correct argument and behaves as expected.
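
A hedged sketch of the corrected flag handling, assuming an `args: string[]` being assembled for the llama.cpp CLI and a boolean `cfg.ctx_shift` setting:

```ts
declare const cfg: { ctx_shift: boolean } // extension config (assumed shape)
const args: string[] = []

// Presence of --context-shift enables the feature, so push it only when
// the setting is on; the old code pushed --no-context-shift when disabled,
// which conflicted with this CLI's flag semantics.
if (cfg.ctx_shift) {
  args.push('--context-shift')
}
```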

* feat: detect model out of context during generation

---------

Co-authored-by: Dinh Long Nguyen <[email protected]>

* chore: add install-rust-targets step for macOS universal builds

* fix: make install-rust-targets a dependency

* enhancement: copy MCP permission

* chore: make action button capitalized

* Update web-app/src/locales/en/tool-approval.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: simplify macos workflow

* fix: KVCache size calculation and refactor (#6438)

- Removed the unused `getKVCachePerToken` helper and replaced it with a unified `estimateKVCache` that returns both total size and per‑token size (see the sketch after this list).
- Fixed the KV cache size calculation to account for all layers, correcting previous under‑estimation.
- Added proper clamping of user‑requested context lengths to the model’s maximum.
- Refactored VRAM budgeting: introduced explicit reserves, fixed engine overhead, and separate multipliers for VRAM and system RAM based on memory mode.
- Implemented a more robust planning flow with clear GPU, Hybrid, and CPU pathways, including fallback configurations when resources are insufficient.
- Updated default context length handling and safety buffers to prevent OOM situations.
- Adjusted usable memory percentage to 90% and refined logging for easier debugging.
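
A rough sketch of what a unified estimator like `estimateKVCache` might look like; the parameter names and the f16 assumption are illustrative, not the PR's actual arithmetic:

```ts
interface KVCacheEstimate {
  totalBytes: number
  bytesPerToken: number
}

// Sum K and V caches across all layers (the earlier bug under-counted layers).
function estimateKVCache(
  nLayers: number,
  nKvHeads: number,
  headDim: number,
  ctxLen: number,
  bytesPerElement = 2 // f16 entries; quantized caches would differ
): KVCacheEstimate {
  const bytesPerToken = 2 * nLayers * nKvHeads * headDim * bytesPerElement
  return { totalBytes: bytesPerToken * ctxLen, bytesPerToken }
}
```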

* fix: detect allocation failures as out-of-memory errors (#6459)

The Llama.cpp backend can emit the phrase “failed to allocate” when it runs out of memory.
Adding this check ensures such messages are correctly classified as out‑of‑memory errors,
providing more accurate error handling for CPU backends.
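
A minimal sketch of that classification; only the "failed to allocate" phrase is confirmed by the commit message, and the rest of the needle list is an assumption:

```ts
// Treat allocation failures in llama.cpp output as out-of-memory errors.
function isOutOfMemoryError(log: string): boolean {
  const needles = ['out of memory', 'failed to allocate'] // latter added here
  const lower = log.toLowerCase()
  return needles.some((needle) => lower.includes(needle))
}
```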

* fix: pathname file install BE

* fix: set default memory mode and clean up unused import (#6463)

Use fallback value 'high' for memory_util config and remove unused GgufMetadata import.
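
Sketched, the fallback is a nullish default; the other mode values are assumptions, since only 'high' is named above:

```ts
type MemoryMode = 'high' | 'medium' | 'low' // only 'high' is confirmed

// `config` stands in for the extension's parsed settings (hypothetical shape).
declare const config: { memory_util?: MemoryMode }

const memoryMode: MemoryMode = config.memory_util ?? 'high' // stated default
```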

* fix: auto update should not block popup

* fix: remove log

* fix: improve edit message with attachment image

* fix: improve edit message with attachment image

* fix: type imageurl

* fix: immediate dropdown value update

* fix: linter

* fix/validate-mmproj-from-general-basename

* fix/revalidate-model-gguf

* fix: loader when importing

* fix/mcp-json-validation

* chore: update locale mcp json

* fix: new extension settings aren't populated properly (#6476)

* chore: embed webview2 bootstrapper in tauri windows

* fix: validate type mcp json

* chore: prevent click outside for edit dialog

* feat: add qa checklist

* chore: remove old checklist

* chore: correct typo in checklist

* fix: correct memory suitability checks in llamacpp extension (#6504)

The previous implementation mixed model size and VRAM checks, leading to inaccurate status reporting (e.g., false RED results).
- Simplified import statement for `readGgufMetadata`.
- Fixed RAM/VRAM comparison by removing unnecessary parentheses.
- Replaced ambiguous `modelSize > usableTotalMemory` check with a clear `totalRequired > usableTotalMemory` hard‑limit condition.
- Refactored the status logic to explicitly handle the CPU‑GPU hybrid scenario, returning **YELLOW** when the total requirement fits combined memory but exceeds VRAM.
- Updated comments for better readability and maintenance.
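
An illustrative reading of that status logic; the status values follow the RED/YELLOW wording above, while the variable names are assumptions:

```ts
type MemoryStatus = 'GREEN' | 'YELLOW' | 'RED'

function memoryStatus(
  totalRequired: number, // model weights + KV cache + overhead
  usableVram: number,
  usableTotalMemory: number // combined VRAM + system RAM budget
): MemoryStatus {
  if (totalRequired > usableTotalMemory) return 'RED' // hard limit exceeded
  if (totalRequired > usableVram) return 'YELLOW' // fits only as CPU-GPU hybrid
  return 'GREEN' // fits entirely in VRAM
}
```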

* fix: thread rerender issue

* chore: clean up console log

* chore: uncomment irrelevant fix

* fix: linter

* chore: remove duplicated block

* fix: tests

* Merge pull request #6469 from menloresearch/fix/deeplink-not-work-on-windows

fix: deeplink issue on Windows

* fix: reduce unnecessary rerender due to current thread retrieval

* fix: reduce app layout rerender due to router state update

* fix: avoid the entire app layout re-render on route change

* clean: unused import

* Merge pull request #6514 from menloresearch/feat/web-gtag

feat: Add GA Measurement and change keyboard bindings on web

* chore: update build tauri commands

* chore: remove unused task

* fix: should not rerender thread message components when typing

* fix re-render issue

* direct tokenspeed access

* chore: sync latest

* feat: Add Jan API server Swagger UI (#6502)

* feat: Add Jan API server Swagger UI

- Serve OpenAPI spec (`static/openapi.json`) directly from the proxy server.
- Implement Swagger UI assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`) and a simple HTML wrapper under `/docs`.
- Extend the proxy whitelist to include Swagger UI routes.
- Add routing logic for `/openapi.json`, `/docs`, and Swagger UI static files.
- Update whitelisted paths and integrate CORS handling for the new endpoints.

* feat: serve Swagger UI at root path

The Swagger UI endpoint previously lived under `/docs`. The route handling and
exclusion list have been updated so the UI is now served directly at `/`.
This simplifies access, aligns with the expected root URL in the Tauri
frontend, and removes the now‑unused `/docs` path handling.
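
A shape-level sketch of the resulting dispatch after the move to the root path; `serveStatic` and the exact route set are hypothetical, since the proxy's code is not shown in this thread:

```ts
declare function serveStatic(path: string): Response // hypothetical helper

// Swagger UI routes are handled before normal proxying.
function routeDocs(pathname: string): Response | undefined {
  switch (pathname) {
    case '/': // Swagger UI HTML wrapper, now served at the root
      return serveStatic('static/index.html')
    case '/openapi.json':
      return serveStatic('static/openapi.json')
    case '/swagger-ui.css':
    case '/swagger-ui-bundle.js':
    case '/favicon.ico':
      return serveStatic(`static${pathname}`)
    default:
      return undefined // not a docs route: fall through to the proxy
  }
}
```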

* feat: add model loading state and translations for local API server

Implemented a loading indicator for model startup, updated the start/stop button to reflect model loading and server starting states, and disabled interactions while pending. Added new translation keys (`loadingModel`, `startingServer`) across all supported locales (en, de, id, pl, vn, zh-CN, zh-TW) and integrated them into the UI. Included a small delay after model start to ensure backend state consistency. This improves user feedback and prevents race conditions during server initialization.

* fix: tests

* fix: linter

* fix: build

* docs: update changelog for v0.6.10

* fix(number-input): preserve '0.0x' format when typing (#6520)

* docs: update url for gifs and videos

* chore: update url for jan-v1 docs

* fix: Typo in openapi JSON (#6528)

* enhancement: toaster delete mcp server

* Update 2025-09-18-auto-optimize-vision-imports.mdx

* Merge pull request #6475 from menloresearch/feat/bump-tokenjs

feat: fix remote provider vision capability

* fix: prevent consecutive messages with same role (#6544)

* fix: prevent consecutive messages with same role

* fix: tests

* fix: first message should not be assistant

* fix: tests

* feat: Prompt progress when streaming (#6503)

* feat: Prompt progress when streaming

- BE changes:
    - Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts.
    - Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real‑time generation progress and leverage llama.cpp’s built‑in progress reporting.
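
The described shapes, sketched from the field names in the commit message; exact types and the full set of existing fields are assumptions:

```ts
// Per-chunk progress payload: cache, processed, time, and total token counts.
interface chatCompletionPromptProgress {
  cache: number     // prompt tokens reused from the cache
  processed: number // prompt tokens processed so far
  time: number      // processing time reported by llama.cpp
  total: number     // total prompt tokens
}

interface chatCompletionChunk {
  prompt_progress?: chatCompletionPromptProgress
  // ...existing chunk fields elided
}

interface chatCompletionRequest {
  return_progress?: boolean // made optional by a follow-up commit below
  // ...existing request fields elided
}
```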

* Make return_progress optional

* chore: update ui prompt progress before streaming content

* chore: remove log

* chore: remove progress when percentage >= 100

* chore: set timeout prompt progress

* chore: move prompt progress outside streaming content

* fix: tests

---------

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Louis <[email protected]>

* chore: add ci for web stag (#6550)

* feat: add getTokensCount method to compute token usage (#6467)

* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.
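
A hypothetical sketch of that flow; every helper name here is invented to mirror the description (validate the session, check process health, apply the template, tokenize):

```ts
declare function findSession(id: string): { pid: number } | undefined
declare function isProcessAlive(pid: number): Promise<boolean>
declare function applyTemplate(session: object, messages: object[]): string
declare function tokenize(session: object, prompt: string): Promise<number[]>

async function getTokensCount(
  sessionId: string,
  messages: object[]
): Promise<number> {
  const session = findSession(sessionId)
  if (!session) throw new Error('invalid model session')
  if (!(await isProcessAlive(session.pid))) {
    throw new Error('model process has crashed')
  }
  const prompt = applyTemplate(session, messages) // render the request template
  const tokens = await tokenize(session, prompt)  // count prompt tokens
  return tokens.length
}
```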

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the multimodal projector file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
- Detects image content in messages.
- Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
- Provides a fallback estimation if metadata reading fails.
- Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
- Minor clean‑ups such as comment capitalization and debug logging.

* chore: update FE send params message include content type image_url

* fix mmproj path from session info and num tokens calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the clip.vision.projection_dim value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.
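
Sketched per the description, with the caveat that how the token count is derived from `clip.vision.projection_dim` (used directly here) and the fallback behavior are both assumptions:

```ts
declare function estimateImageTokensFallback(): number // rough default estimate

// Hypothetical per-image count using the metadata key named in the commit.
function calculateImageTokens(mmprojMeta: Record<string, unknown>): number {
  const dim = mmprojMeta['clip.vision.projection_dim']
  return typeof dim === 'number' && dim > 0
    ? dim // precise count derived from the projection dimension
    : estimateImageTokensFallback() // metadata unreadable: fall back
}
```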

* fix per image calc

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Louis <[email protected]>

* fix: custom fetch for all providers (#6538)

* fix: custom fetch for all providers

* fix: run in development should use built-in fetch

* add full-width model names (#6350)

* fix: prevent relocation to root directories (#6547)

* fix: prevent relocation to root directories

* Update web-app/src/locales/zh-TW/settings.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* feat: web remote conversation (#6554)

* feat: implement conversation endpoint

* use conversation aware endpoint

* fetch message correctly

* preserve first message

* fix logout

* fix broadcast issue locally + auth not refreshing profile on other tabs + clean up and sync messages

* add is dev tag

---------

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Akarshan Biswas <[email protected]>
Co-authored-by: Minh141120 <[email protected]>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Co-authored-by: Louis <[email protected]>
Co-authored-by: Bui Quang Huy <[email protected]>
Co-authored-by: Roushan Singh <[email protected]>
Co-authored-by: hiento09 <[email protected]>
Co-authored-by: Alexey Haidamaka <[email protected]>

Development

Successfully merging this pull request may close these issues.

bug: Authentication error when using Anthropic API

3 participants