Conversation

@qnixsynapse
Contributor

@qnixsynapse qnixsynapse commented Sep 17, 2025

Describe Your Changes

  • Serve OpenAPI spec (static/openapi.json) directly from the proxy server.
  • Implement Swagger UI assets (swagger-ui.css, swagger-ui-bundle.js, favicon.ico) and a simple HTML wrapper under /docs.
  • Extend the proxy whitelist to include Swagger UI routes.
  • Add routing logic for /openapi.json, /docs, and Swagger UI static files.
  • Update whitelisted paths and integrate CORS handling for the new endpoints.
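The routing described above can be sketched as a small lookup in the proxy. This is an illustrative sketch only — the function name `route_static` and the asset paths are assumptions, not the actual code in `proxy.rs`:

```rust
// Hypothetical sketch of the documentation routes added to the proxy.
// Maps a whitelisted path to (asset file, content type); anything else
// falls through to the normal proxy handling.
fn route_static(path: &str) -> Option<(&'static str, &'static str)> {
    match path {
        "/openapi.json" => Some(("static/openapi.json", "application/json")),
        "/docs" => Some(("static/swagger-ui/index.html", "text/html")),
        "/docs/swagger-ui.css" => Some(("static/swagger-ui/swagger-ui.css", "text/css")),
        "/docs/swagger-ui-bundle.js" => {
            Some(("static/swagger-ui/swagger-ui-bundle.js", "application/javascript"))
        }
        "/docs/favicon.ico" => Some(("static/swagger-ui/favicon.ico", "image/x-icon")),
        _ => None, // not a docs route; proxy as usual
    }
}

fn main() {
    assert_eq!(route_static("/openapi.json").unwrap().1, "application/json");
    assert!(route_static("/v1/models").is_none());
}
```

Keeping the docs routes in one table like this also makes extending the whitelist a one-line change.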

Fixes Issues

Self Checklist

  • Added relevant comments, esp in complex areas
  • Updated docs (for bug fixes / features)
  • Created issues for follow-up changes or refactoring needed

Important

Add Swagger UI to Jan API server, serving OpenAPI specs and static assets, with updated routing and CORS handling.

  • Behavior:
    • Serve OpenAPI spec from static/openapi.json via /openapi.json endpoint in proxy.rs.
    • Implement Swagger UI under /docs with static assets (swagger-ui.css, swagger-ui-bundle.js, favicon.ico).
    • Extend proxy whitelist to include Swagger UI routes in proxy.rs.
    • Add CORS handling for new endpoints in proxy.rs.
  • UI:
    • Update local-api-server.tsx to manage server start/stop with model loading state.
    • Add logic to handle API key validation and server status updates.

This description was created by Ellipsis for 2e1d0a1. You can customize this summary. It will automatically update as commits are pushed.

Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Caution

Changes requested ❌

Reviewed everything up to 681f24f in 3 minutes and 18 seconds.
  • Reviewed 810 lines of code in 3 files
  • Skipped 2 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. src-tauri/static/openapi.json:1
  • Draft comment:
    The OpenAPI spec is very comprehensive and includes both endpoints (/models, /chat/completion) and detailed schema definitions. Verify that the server URL ('http://localhost:1337/v1') and other info match your deployment. Consider splitting extremely large schema sections for improved maintainability if needed.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 30% <= threshold 50% The comment is mostly informative and asks the PR author to verify the server URL and other info, which violates the rules. However, the suggestion to consider splitting large schema sections for maintainability is a valid code suggestion.
2. src-tauri/static/swagger-ui/swagger-ui.css:1
  • Draft comment:
    The swagger-ui.css asset appears to be correctly included with the full styling and a sourceMapping comment. As a minor styling best-practice, consider adding a newline at the end of the file.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 0% vs. threshold = 50% This is a minified CSS file, which means it has been processed to remove unnecessary whitespace and newlines to reduce file size. Adding a newline at the end would go against the purpose of minification. Additionally, the file already has a sourcemap comment at the end, which is the standard way to end minified files. Could there be some build tools or systems that expect files to end with newlines? Could this cause any issues in version control systems? While some tools do prefer newlines at file ends, this is a minified file where the explicit goal is to minimize size. Any tool processing minified files should be able to handle files without trailing newlines. The sourcemap comment provides a clear file ending. The comment should be deleted as it suggests modifying a minified file in a way that contradicts the purpose of minification.

Workflow ID: wflow_4KvmoE84xUbwf2uh

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

@github-actions
Contributor

github-actions bot commented Sep 17, 2025

Barecheck - Code coverage report

Total: 32.45%

Your code coverage diff: -0.02% ▾

Uncovered files and lines
File: web-app/src/routes/settings/local-api-server.tsx
Lines: 1-24, 27-29, 31-35, 37, 39-55, 57-63, 65-77, 79-81, 83-91, 94, 96-105, 108-113, 116-126, 128-129, 131, 133, 135-140, 142, 144-149, 151-152, 155-160, 163-165, 167-198, 200-210, 212-218, 220, 222-230, 232-248, 250-253, 256-265, 267-271, 273-274, 277-291, 293-294, 297-302, 304-328, 330-339, 341-346, 348-349, 352-363, 365-376, 378-383, 385

- Serve OpenAPI spec (`static/openapi.json`) directly from the proxy server.
- Implement Swagger UI assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`) and a simple HTML wrapper under `/docs`.
- Extend the proxy whitelist to include Swagger UI routes.
- Add routing logic for `/openapi.json`, `/docs`, and Swagger UI static files.
- Update whitelisted paths and integrate CORS handling for the new endpoints.
The Swagger UI endpoint previously lived under `/docs`. The route handling and
exclusion list have been updated so the UI is now served directly at `/`.
This simplifies access, aligns with the expected root URL in the Tauri
frontend, and removes the now‑unused `/docs` path handling.
@qnixsynapse qnixsynapse marked this pull request as ready for review September 18, 2025 12:48
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Caution

Changes requested ❌

Reviewed everything up to 13028c9 in 6 minutes and 28 seconds.
  • Reviewed 809 lines of code in 3 files
  • Skipped 2 files when reviewing.
  • Skipped posting 2 draft comments. View those below.
1. src-tauri/static/openapi.json:1
  • Draft comment:
    The OpenAPI spec looks complete and well‐formed, covering both the /models GET and /chat/completion POST endpoints. Ensure that this file is kept up to date as the API evolves.
  • Reason this comment was not posted:
    Confidence changes required: 0% <= threshold 50% None
2. src-tauri/static/swagger-ui/swagger-ui.css:2
  • Draft comment:
    Consider adding a final newline at the end of the CSS file for consistency with best practices.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50% While having a final newline is a common best practice, this appears to be a minified/compiled CSS file that was likely generated by a build tool. The sourcemap comment at the end is standard for minified files. Making manual edits to minified files is not recommended as they will be overwritten on the next build. The comment is suggesting a change that would be better handled at the build level if desired. The comment is technically correct that files should end with newlines, but am I being too dismissive of manually fixing this issue? Could there be value in having the newline even in a generated file? No - modifying generated/minified files directly is an anti-pattern. Any changes should be made to the source files or build configuration. The newline, while good practice, provides no functional benefit here. Delete the comment as it suggests manually modifying a generated file, which is not a good practice. If a final newline is desired, it should be configured in the build process that generates this file.

Workflow ID: wflow_FDV5q3HQ9mDW9X2r


Implemented a loading indicator for model startup, updated the start/stop button to reflect model loading and server starting states, and disabled interactions while pending. Added new translation keys (`loadingModel`, `startingServer`) across all supported locales (en, de, id, pl, vn, zh-CN, zh-TW) and integrated them into the UI. Included a small delay after model start to ensure backend state consistency. This improves user feedback and prevents race conditions during server initialization.
Contributor

@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed 2e1d0a1 in 2 minutes and 14 seconds.
  • Reviewed 221 lines of code in 8 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 draft comments. View those below.
1. web-app/src/routes/settings/local-api-server.tsx:200
  • Draft comment:
    Consider moving the declaration of 'isServerRunning' (defined at L220) before using it in getButtonText for better clarity.
  • Reason this comment was not posted:
    Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. web-app/src/routes/settings/local-api-server.tsx:133
  • Draft comment:
    The promise chain in toggleAPIServer is a bit nested; consider refactoring using async/await to improve readability and error handling.
  • Reason this comment was not posted:
    Comment was on unchanged code.
3. web-app/src/routes/settings/local-api-server.tsx:162
  • Draft comment:
    A hardcoded 500ms delay is used to allow backend state update; consider extracting this value as a named constant for clarity and easier tuning.
  • Reason this comment was not posted:
    Confidence changes required: 50% <= threshold 50% None

Workflow ID: wflow_idphUIbl2Bue8HxU


@qnixsynapse qnixsynapse added this to the v0.7.0 milestone Sep 18, 2025
@qnixsynapse qnixsynapse moved this to In Progress in Jan Sep 18, 2025
Contributor

@louis-jan louis-jan left a comment


LGTM

@qnixsynapse qnixsynapse merged commit d1a8bdc into dev Sep 19, 2025
20 checks passed
@qnixsynapse qnixsynapse deleted the feat/5904 branch September 19, 2025 03:41
@github-project-automation github-project-automation bot moved this from In Progress to QA in Jan Sep 19, 2025
dinhlongviolin1 added a commit that referenced this pull request Sep 23, 2025
* fix: avoid error validate nested dom

* fix: correct context shift flag handling in LlamaCPP extension (#6404) (#6431)

* fix: correct context shift flag handling in LlamaCPP extension

The previous implementation added the `--no-context-shift` flag when `cfg.ctx_shift` was disabled, which conflicted with the llama.cpp CLI where the presence of `--context-shift` enables the feature.
The logic is updated to push `--context-shift` only when `cfg.ctx_shift` is true, ensuring the extension passes the correct argument and behaves as expected.
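The corrected behavior can be sketched as follows. The `build_args` helper is illustrative, not the extension's actual code; the point is that the flag is additive — its presence enables context shift, so nothing is pushed when the feature is off:

```rust
// Sketch of the corrected flag handling: push `--context-shift` only when
// ctx_shift is enabled. The old code pushed `--no-context-shift` when it was
// disabled, which the llama.cpp CLI does not understand that way.
fn build_args(ctx_shift: bool) -> Vec<String> {
    let mut args: Vec<String> = vec!["--model".into(), "model.gguf".into()];
    if ctx_shift {
        args.push("--context-shift".into());
    }
    args
}

fn main() {
    assert!(build_args(true).contains(&"--context-shift".to_string()));
    // When disabled, no context-shift flag of any kind is passed.
    assert!(!build_args(false).iter().any(|a| a.contains("context-shift")));
}
```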

* feat: detect model out of context during generation

---------

Co-authored-by: Dinh Long Nguyen <[email protected]>

* chore: add install-rust-targets step for macOS universal builds

* fix: make install-rust-targets a dependency

* enhancement: copy MCP permission

* chore: capitalize action button

* Update web-app/src/locales/en/tool-approval.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* chore: simplify macos workflow

* fix: KVCache size calculation and refactor (#6438)

- Removed the unused `getKVCachePerToken` helper and replaced it with a unified `estimateKVCache` that returns both total size and per‑token size.
- Fixed the KV cache size calculation to account for all layers, correcting previous under‑estimation.
- Added proper clamping of user‑requested context lengths to the model’s maximum.
- Refactored VRAM budgeting: introduced explicit reserves, fixed engine overhead, and separate multipliers for VRAM and system RAM based on memory mode.
- Implemented a more robust planning flow with clear GPU, Hybrid, and CPU pathways, including fallback configurations when resources are insufficient.
- Updated default context length handling and safety buffers to prevent OOM situations.
- Adjusted usable memory percentage to 90% and refined logging for easier debugging.
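The unified estimate described above can be sketched like this. The names and the exact formula are assumptions for illustration — the real `estimateKVCache` lives in the LlamaCPP extension — but the key fix is visible: the per-token cost multiplies across all layers, and holds both K and V:

```rust
// Hedged sketch of a KV cache size estimate that accounts for every layer.
struct KvEstimate {
    total_bytes: u64,
    per_token_bytes: u64,
}

fn estimate_kv_cache(
    n_layers: u64,
    n_kv_heads: u64,
    head_dim: u64,
    ctx_len: u64,
    bytes_per_elem: u64, // e.g. 2 for an f16 cache
) -> KvEstimate {
    // Factor of 2: both the K and the V cache are stored per layer.
    let per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem;
    KvEstimate {
        total_bytes: per_token * ctx_len,
        per_token_bytes: per_token,
    }
}

fn main() {
    // Example: 32 layers, 8 KV heads, head_dim 128, f16 cache, 4096-token context.
    let e = estimate_kv_cache(32, 8, 128, 4096, 2);
    assert_eq!(e.per_token_bytes, 131_072); // 128 KiB per token
    assert_eq!(e.total_bytes, e.per_token_bytes * 4096);
}
```

Returning both numbers from one function keeps the total and per-token figures consistent by construction, which is what replacing the separate `getKVCachePerToken` helper achieves.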

* fix: detect allocation failures as out-of-memory errors (#6459)

The Llama.cpp backend can emit the phrase “failed to allocate” when it runs out of memory.
Adding this check ensures such messages are correctly classified as out‑of‑memory errors,
providing more accurate error handling on CPU backends.
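The classification is essentially a substring check on the backend's log output. A minimal sketch, assuming a hypothetical `is_oom_error` helper (the real matcher may check more phrases):

```rust
// Treat "failed to allocate" the same as an explicit out-of-memory message.
fn is_oom_error(log_line: &str) -> bool {
    let line = log_line.to_lowercase();
    line.contains("out of memory") || line.contains("failed to allocate")
}

fn main() {
    assert!(is_oom_error("ggml_backend_cpu_buffer: failed to allocate 512 MB"));
    assert!(is_oom_error("CUDA error: out of memory"));
    assert!(!is_oom_error("server listening on port 1337"));
}
```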

* fix: pathname file install BE

* fix: set default memory mode and clean up unused import (#6463)

Use fallback value 'high' for memory_util config and remove unused GgufMetadata import.

* fix: auto update should not block popup

* fix: remove log

* fix: improve edit message with attachment image

* fix: improve edit message with attachment image

* fix: type imageurl

* fix: immediate dropdown value update

* fix: linter

* fix/validate-mmproj-from-general-basename

* fix/revalidate-model-gguf

* fix: loader when importing

* fix/mcp-json-validation

* chore: update locale mcp json

* fix: new extension settings aren't populated properly (#6476)

* chore: embed webview2 bootstrapper in tauri windows

* fix: validate type mcp json

* chore: prevent click outside for edit dialog

* feat: add qa checklist

* chore: remove old checklist

* chore: correct typo in checklist

* fix: correct memory suitability checks in llamacpp extension (#6504)

The previous implementation mixed model size and VRAM checks, leading to inaccurate status reporting (e.g., false RED results).
- Simplified import statement for `readGgufMetadata`.
- Fixed RAM/VRAM comparison by removing unnecessary parentheses.
- Replaced ambiguous `modelSize > usableTotalMemory` check with a clear `totalRequired > usableTotalMemory` hard‑limit condition.
- Refactored the status logic to explicitly handle the CPU‑GPU hybrid scenario, returning **YELLOW** when the total requirement fits combined memory but exceeds VRAM.
- Updated comments for better readability and maintenance.
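The decision described above can be sketched as a three-way check. The names and thresholds here are illustrative, not the extension's exact code; the shape is what matters — the hard limit against total usable memory comes first, then the hybrid case:

```rust
// Hedged sketch of the GREEN/YELLOW/RED memory suitability logic.
#[derive(Debug, PartialEq)]
enum Status {
    Green,  // fits entirely in VRAM
    Yellow, // fits combined VRAM + system RAM (CPU-GPU hybrid)
    Red,    // exceeds total usable memory: cannot run
}

fn memory_status(total_required: u64, usable_vram: u64, usable_total: u64) -> Status {
    if total_required > usable_total {
        Status::Red
    } else if total_required > usable_vram {
        Status::Yellow
    } else {
        Status::Green
    }
}

fn main() {
    assert_eq!(memory_status(6, 8, 16), Status::Green);
    assert_eq!(memory_status(10, 8, 16), Status::Yellow);
    assert_eq!(memory_status(20, 8, 16), Status::Red);
}
```

Comparing `total_required` (model plus KV cache plus overheads) rather than the raw model size is what removes the false RED results mentioned above.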

* fix: thread rerender issue

* chore: clean up console log

* chore: uncomment irrelevant fix

* fix: linter

* chore: remove duplicated block

* fix: tests

* Merge pull request #6469 from menloresearch/fix/deeplink-not-work-on-windows

fix: deeplink issue on Windows

* fix: reduce unnecessary rerender due to current thread retrieval

* fix: reduce app layout rerender due to router state update

* fix: avoid the entire app layout re-render on route change

* clean: unused import

* Merge pull request #6514 from menloresearch/feat/web-gtag

feat: Add GA Measurement and change keyboard bindings on web

* chore: update build tauri commands

* chore: remove unused task

* fix: should not rerender thread message components when typing

* fix re-render issue

* direct tokenspeed access

* chore: sync latest

* feat: Add Jan API server Swagger UI (#6502)

* feat: Add Jan API server Swagger UI

- Serve OpenAPI spec (`static/openapi.json`) directly from the proxy server.
- Implement Swagger UI assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`) and a simple HTML wrapper under `/docs`.
- Extend the proxy whitelist to include Swagger UI routes.
- Add routing logic for `/openapi.json`, `/docs`, and Swagger UI static files.
- Update whitelisted paths and integrate CORS handling for the new endpoints.

* feat: serve Swagger UI at root path

The Swagger UI endpoint previously lived under `/docs`. The route handling and
exclusion list have been updated so the UI is now served directly at `/`.
This simplifies access, aligns with the expected root URL in the Tauri
frontend, and removes the now‑unused `/docs` path handling.

* feat: add model loading state and translations for local API server

Implemented a loading indicator for model startup, updated the start/stop button to reflect model loading and server starting states, and disabled interactions while pending. Added new translation keys (`loadingModel`, `startingServer`) across all supported locales (en, de, id, pl, vn, zh-CN, zh-TW) and integrated them into the UI. Included a small delay after model start to ensure backend state consistency. This improves user feedback and prevents race conditions during server initialization.

* fix: tests

* fix: linter

* fix: build

* docs: update changelog for v0.6.10

* fix(number-input): preserve '0.0x' format when typing (#6520)

* docs: update url for gifs and videos

* chore: update url for jan-v1 docs

* fix: Typo in openapi JSON (#6528)

* enhancement: toaster delete mcp server

* Update 2025-09-18-auto-optimize-vision-imports.mdx

* Merge pull request #6475 from menloresearch/feat/bump-tokenjs

feat: fix remote provider vision capability

* fix: prevent consecutive messages with same role (#6544)

* fix: prevent consecutive messages with same role

* fix: tests

* fix: first message should not be assistant

* fix: tests

* feat: Prompt progress when streaming (#6503)

* feat: Prompt progress when streaming

- BE changes:
    - Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts.
    - Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real‑time generation progress and leverage llama.cpp’s built‑in progress reporting.
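The payload shape and a percentage helper might look like the following. This is a sketch assuming the field names listed above (cache, processed, time, total); the real interface is `chatCompletionPromptProgress` in the extension code and may differ:

```rust
// Illustrative shape of the per-chunk prompt progress payload.
struct PromptProgress {
    cache: u32,     // tokens reused from the prompt cache
    processed: u32, // tokens newly processed this pass
    time_ms: u32,   // elapsed processing time
    total: u32,     // total prompt tokens
}

fn progress_pct(p: &PromptProgress) -> f64 {
    if p.total == 0 {
        return 100.0;
    }
    // Cached tokens count as already processed.
    100.0 * f64::from(p.cache + p.processed) / f64::from(p.total)
}

fn main() {
    let p = PromptProgress { cache: 100, processed: 150, time_ms: 40, total: 500 };
    assert_eq!(progress_pct(&p), 50.0);
}
```

A helper like this is what lets the UI hide the indicator once the percentage reaches 100, as the later commits in this list do.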

* Make return_progress optional

* chore: update ui prompt progress before streaming content

* chore: remove log

* chore: remove progress when percentage >= 100

* chore: set timeout prompt progress

* chore: move prompt progress outside streaming content

* fix: tests

---------

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Louis <[email protected]>

* chore: add ci for web stag (#6550)

* feat: add getTokensCount method to compute token usage (#6467)

* feat: add getTokensCount method to compute token usage

Implemented a new async `getTokensCount` function in the LLaMA.cpp extension.
The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.

* Fix: typos

* chore: update ui token usage

* chore: remove unused code

* feat: add image token handling for multimodal LlamaCPP models

Implemented support for counting image tokens when using vision-enabled models:
- Extended `SessionInfo` with optional `mmprojPath` to store the multimodal project file.
- Propagated `mmproj_path` from the Tauri plugin into the session info.
- Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
- Detects image content in messages.
- Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
- Provides a fallback estimation if metadata reading fails.
- Returns the sum of text and image tokens.
- Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
- Minor clean‑ups such as comment capitalization and debug logging.

* chore: update FE send params message include content type image_url

* fix mmproj path from session info and num tokens calculation

* fix: Correct image token estimation calculation in llamacpp extension

This commit addresses an inaccurate token count for images in the llama.cpp extension.

The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the clip.vision.projection_dim value from the model metadata.

Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.

* fix per image calc

* fix: crash due to force unwrap

---------

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Louis <[email protected]>

* fix: custom fetch for all providers (#6538)

* fix: custom fetch for all providers

* fix: run in development should use built-in fetch

* add full-width model names (#6350)

* fix: prevent relocation to root directories (#6547)

* fix: prevent relocation to root directories

* Update web-app/src/locales/zh-TW/settings.json

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>

* feat: web remote conversation (#6554)

* feat: implement conversation endpoint

* use conversation aware endpoint

* fetch message correctly

* preserve first message

* fix logout

* fix broadcast issue locally + auth not refreshing profile on other tabs+ clean up and sync messages

* add is dev tag

---------

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Akarshan Biswas <[email protected]>
Co-authored-by: Minh141120 <[email protected]>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Co-authored-by: Louis <[email protected]>
Co-authored-by: Bui Quang Huy <[email protected]>
Co-authored-by: Roushan Singh <[email protected]>
Co-authored-by: hiento09 <[email protected]>
Co-authored-by: Alexey Haidamaka <[email protected]>


Development

Successfully merging this pull request may close these issues.

feat: Add Swagger playground page to local API server
