feat: Add Jan API server Swagger UI #6502
Conversation
Caution
Changes requested ❌
Reviewed everything up to 681f24f in 3 minutes and 18 seconds.
- Reviewed 810 lines of code in 3 files
- Skipped 2 files when reviewing
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. src-tauri/static/openapi.json:1
- Draft comment: The OpenAPI spec is very comprehensive and includes both endpoints (/models, /chat/completion) and detailed schema definitions. Verify that the server URL ('http://localhost:1337/v1') and other info match your deployment. Consider splitting extremely large schema sections for improved maintainability if needed.
- Reason this comment was not posted: Comment did not seem useful. Usefulness confidence = 30% <= threshold = 50%. The comment is mostly informative and asks the PR author to verify the server URL and other info, which violates the rules. However, the suggestion to consider splitting large schema sections for maintainability is a valid code suggestion.
2. src-tauri/static/swagger-ui/swagger-ui.css:1
- Draft comment: The swagger-ui.css asset appears to be correctly included with the full styling and a sourceMapping comment. As a minor styling best practice, consider adding a newline at the end of the file.
- Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 0% vs. threshold = 50%. This is a minified CSS file, which means it has been processed to remove unnecessary whitespace and newlines to reduce file size. Adding a newline at the end would go against the purpose of minification. Additionally, the file already has a sourcemap comment at the end, which is the standard way to end minified files. Could there be some build tools or systems that expect files to end with newlines? Could this cause any issues in version control systems? While some tools do prefer newlines at file ends, this is a minified file where the explicit goal is to minimize size. Any tool processing minified files should be able to handle files without trailing newlines. The sourcemap comment provides a clear file ending. The comment should be deleted as it suggests modifying a minified file in a way that contradicts the purpose of minification.
Workflow ID: wflow_4KvmoE84xUbwf2uh
Barecheck - Code coverage report: Total 32.45%; code coverage diff: -0.02% ▾
Force-pushed from 681f24f to dcb7eaa (Compare)
- Serve OpenAPI spec (`static/openapi.json`) directly from the proxy server.
- Implement Swagger UI assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`) and a simple HTML wrapper under `/docs`.
- Extend the proxy whitelist to include Swagger UI routes.
- Add routing logic for `/openapi.json`, `/docs`, and Swagger UI static files.
- Update whitelisted paths and integrate CORS handling for the new endpoints.
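For context, here is a minimal sketch of what such an HTML wrapper's bootstrap script might look like. `SwaggerUIBundle` is the global that `swagger-ui-bundle.js` exposes; the spec URL and container id below are assumptions for illustration, not values taken from the PR:

```ts
// Minimal sketch of the Swagger UI bootstrap a simple HTML wrapper could
// embed. SwaggerUIBundle is provided globally by swagger-ui-bundle.js;
// the url and dom_id values are assumed for illustration.
declare const SwaggerUIBundle: (config: { url: string; dom_id: string }) => unknown;

window.addEventListener('DOMContentLoaded', () => {
  SwaggerUIBundle({
    url: '/openapi.json',  // spec served by the proxy
    dom_id: '#swagger-ui', // container div in the wrapper HTML
  });
});
```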
Force-pushed from dcb7eaa to c802c14 (Compare)
The Swagger UI endpoint previously lived under `/docs`. The route handling and exclusion list have been updated so the UI is now served directly at `/`. This simplifies access, aligns with the expected root URL in the Tauri frontend, and removes the now‑unused `/docs` path handling.
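The effect can be pictured as a small whitelist of self-served paths. The TypeScript sketch below only mirrors that decision (the real routing is Rust code in `proxy.rs`), and the exact static asset paths are assumptions:

```ts
// Hypothetical mirror of the proxy's "serve locally vs. forward" decision
// after the move from /docs to the root. Asset paths are illustrative.
const SELF_SERVED_PATHS = new Set<string>([
  '/',                                // Swagger UI HTML wrapper (was /docs)
  '/openapi.json',                    // OpenAPI spec
  '/swagger-ui/swagger-ui.css',       // static assets
  '/swagger-ui/swagger-ui-bundle.js',
  '/favicon.ico',
]);

// Anything else (e.g. /v1/models, /v1/chat/completion) is forwarded to the
// Jan API server as before.
function servedLocally(path: string): boolean {
  return SELF_SERVED_PATHS.has(path);
}
```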
Caution
Changes requested ❌
Reviewed everything up to 13028c9 in 6 minutes and 28 seconds.
- Reviewed 809 lines of code in 3 files
- Skipped 2 files when reviewing
- Skipped posting 2 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. src-tauri/static/openapi.json:1
- Draft comment: The OpenAPI spec looks complete and well-formed, covering both the /models GET and /chat/completion POST endpoints. Ensure that this file is kept up to date as the API evolves.
- Reason this comment was not posted: Confidence changes required = 0% <= threshold = 50%. None.
2. src-tauri/static/swagger-ui/swagger-ui.css:2
- Draft comment: Consider adding a final newline at the end of the CSS file for consistency with best practices.
- Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50%. While having a final newline is a common best practice, this appears to be a minified/compiled CSS file that was likely generated by a build tool. The sourcemap comment at the end is standard for minified files. Making manual edits to minified files is not recommended as they will be overwritten on the next build. The comment is suggesting a change that would be better handled at the build level if desired. The comment is technically correct that files should end with newlines, but am I being too dismissive of manually fixing this issue? Could there be value in having the newline even in a generated file? No - modifying generated/minified files directly is an anti-pattern. Any changes should be made to the source files or build configuration. The newline, while good practice, provides no functional benefit here. Delete the comment as it suggests manually modifying a generated file, which is not a good practice. If a final newline is desired, it should be configured in the build process that generates this file.
Workflow ID: wflow_FDV5q3HQ9mDW9X2r
Implemented a loading indicator for model startup, updated the start/stop button to reflect model loading and server starting states, and disabled interactions while pending. Added new translation keys (`loadingModel`, `startingServer`) across all supported locales (en, de, id, pl, vn, zh-CN, zh-TW) and integrated them into the UI. Included a small delay after model start to ensure backend state consistency. This improves user feedback and prevents race conditions during server initialization.
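A rough sketch of the flow this describes follows; the state union, setter, and `MODEL_START_SETTLE_MS` name are illustrative rather than the actual `local-api-server.tsx` code, and only the `loadingModel`/`startingServer` keys and the small post-start delay come from the commit:

```ts
// Hypothetical sketch of the start/stop button states and the settle delay.
const MODEL_START_SETTLE_MS = 500; // small delay so backend state settles

type ServerPhase = 'stopped' | 'loadingModel' | 'startingServer' | 'running';

function buttonLabel(phase: ServerPhase, t: (key: string) => string): string {
  switch (phase) {
    case 'loadingModel':
      return t('loadingModel'); // new translation key
    case 'startingServer':
      return t('startingServer'); // new translation key
    case 'running':
      return t('stop');
    default:
      return t('start');
  }
}

async function startApiServer(
  loadModel: () => Promise<void>,
  startServer: () => Promise<void>,
  setPhase: (phase: ServerPhase) => void
): Promise<void> {
  setPhase('loadingModel');
  await loadModel();
  // Brief pause after model start to avoid racing the backend state update.
  await new Promise((resolve) => setTimeout(resolve, MODEL_START_SETTLE_MS));
  setPhase('startingServer');
  await startServer();
  setPhase('running');
}
```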
Important
Looks good to me! 👍
Reviewed 2e1d0a1 in 2 minutes and 14 seconds.
- Reviewed 221 lines of code in 8 files
- Skipped 0 files when reviewing
- Skipped posting 3 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. web-app/src/routes/settings/local-api-server.tsx:200
- Draft comment: Consider moving the declaration of 'isServerRunning' (defined at L220) before using it in getButtonText for better clarity.
- Reason this comment was not posted: Comment was not on a location in the diff, so it can't be submitted as a review comment.
2. web-app/src/routes/settings/local-api-server.tsx:133
- Draft comment: The promise chain in toggleAPIServer is a bit nested; consider refactoring using async/await to improve readability and error handling.
- Reason this comment was not posted: Comment was on unchanged code.
3. web-app/src/routes/settings/local-api-server.tsx:162
- Draft comment: A hardcoded 500ms delay is used to allow backend state update; consider extracting this value as a named constant for clarity and easier tuning.
- Reason this comment was not posted: Confidence changes required = 50% <= threshold = 50%. None.
Workflow ID: wflow_idphUIbl2Bue8HxU
LGTM
* fix: avoid error validate nested dom
* fix: correct context shift flag handling in LlamaCPP extension (#6404) (#6431)
* fix: correct context shift flag handling in LlamaCPP extension
  The previous implementation added the `--no-context-shift` flag when `cfg.ctx_shift` was disabled, which conflicted with the llama.cpp CLI where the presence of `--context-shift` enables the feature. The logic is updated to push `--context-shift` only when `cfg.ctx_shift` is true, ensuring the extension passes the correct argument and behaves as expected (see the sketch after this log).
* feat: detect model out of context during generation
---------
Co-authored-by: Dinh Long Nguyen <[email protected]>
* chore: add install-rust-targets step for macOS universal builds
* fix: make install-rust-targets a dependency
* enhancement: copy MCP permission
* chore: make action button capitalize
* Update web-app/src/locales/en/tool-approval.json
  Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* chore: simplify macos workflow
* fix: KVCache size calculation and refactor (#6438)
  - Removed the unused `getKVCachePerToken` helper and replaced it with a unified `estimateKVCache` that returns both total size and per-token size.
  - Fixed the KV cache size calculation to account for all layers, correcting previous under-estimation.
  - Added proper clamping of user-requested context lengths to the model's maximum.
  - Refactored VRAM budgeting: introduced explicit reserves, fixed engine overhead, and separate multipliers for VRAM and system RAM based on memory mode.
  - Implemented a more robust planning flow with clear GPU, Hybrid, and CPU pathways, including fallback configurations when resources are insufficient.
  - Updated default context length handling and safety buffers to prevent OOM situations.
  - Adjusted usable memory percentage to 90% and refined logging for easier debugging.
* fix: detect allocation failures as out-of-memory errors (#6459)
  The Llama.cpp backend can emit the phrase "failed to allocate" when it runs out of memory. Adding this check ensures such messages are correctly classified as out-of-memory errors, providing more accurate error handling on CPU backends.
* fix: pathname file install BE
* fix: set default memory mode and clean up unused import (#6463)
  Use fallback value 'high' for memory_util config and remove unused GgufMetadata import.
* fix: auto update should not block popup
* fix: remove log
* fix: improve edit message with attachment image
* fix: improve edit message with attachment image
* fix: type imageurl
* fix: immediate dropdown value update
* fix: linter
* fix/validate-mmproj-from-general-basename
* fix/revalidate-model-gguf
* fix: loader when importing
* fix/mcp-json-validation
* chore: update locale mcp json
* fix: new extension settings aren't populated properly (#6476)
* chore: embed webview2 bootstrapper in tauri windows
* fix: validate type mcp json
* chore: prevent click outside for edit dialog
* feat: add qa checklist
* chore: remove old checklist
* chore: correct typo in checklist
* fix: correct memory suitability checks in llamacpp extension (#6504)
  The previous implementation mixed model size and VRAM checks, leading to inaccurate status reporting (e.g., false RED results).
  - Simplified import statement for `readGgufMetadata`.
  - Fixed RAM/VRAM comparison by removing unnecessary parentheses.
  - Replaced ambiguous `modelSize > usableTotalMemory` check with a clear `totalRequired > usableTotalMemory` hard-limit condition.
  - Refactored the status logic to explicitly handle the CPU-GPU hybrid scenario, returning **YELLOW** when the total requirement fits combined memory but exceeds VRAM.
  - Updated comments for better readability and maintenance.
* fix: thread rerender issue
* chore: clean up console log
* chore: uncomment irrelevant fix
* fix: linter
* chore: remove duplicated block
* fix: tests
* Merge pull request #6469 from menloresearch/fix/deeplink-not-work-on-windows
  fix: deeplink issue on Windows
* fix: reduce unnecessary rerender due to current thread retrieval
* fix: reduce app layout rerender due to router state update
* fix: avoid the entire app layout re-render on route change
* clean: unused import
* Merge pull request #6514 from menloresearch/feat/web-gtag
  feat: Add GA Measurement and change keyboard bindings on web
* chore: update build tauri commands
* chore: remove unused task
* fix: should not rerender thread message components when typing
* fix re-render issue
* direct tokenspeed access
* chore: sync latest
* feat: Add Jan API server Swagger UI (#6502)
  * feat: Add Jan API server Swagger UI
    - Serve OpenAPI spec (`static/openapi.json`) directly from the proxy server.
    - Implement Swagger UI assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`) and a simple HTML wrapper under `/docs`.
    - Extend the proxy whitelist to include Swagger UI routes.
    - Add routing logic for `/openapi.json`, `/docs`, and Swagger UI static files.
    - Update whitelisted paths and integrate CORS handling for the new endpoints.
  * feat: serve Swagger UI at root path
    The Swagger UI endpoint previously lived under `/docs`. The route handling and exclusion list have been updated so the UI is now served directly at `/`. This simplifies access, aligns with the expected root URL in the Tauri frontend, and removes the now-unused `/docs` path handling.
  * feat: add model loading state and translations for local API server
    Implemented a loading indicator for model startup, updated the start/stop button to reflect model loading and server starting states, and disabled interactions while pending. Added new translation keys (`loadingModel`, `startingServer`) across all supported locales (en, de, id, pl, vn, zh-CN, zh-TW) and integrated them into the UI. Included a small delay after model start to ensure backend state consistency. This improves user feedback and prevents race conditions during server initialization.
  * fix: tests
  * fix: linter
  * fix: build
* docs: update changelog for v0.6.10
* fix(number-input): preserve '0.0x' format when typing (#6520)
* docs: update url for gifs and videos
* chore: update url for jan-v1 docs
* fix: Typo in openapi JSON (#6528)
* enhancement: toaster delete mcp server
* Update 2025-09-18-auto-optimize-vision-imports.mdx
* Merge pull request #6475 from menloresearch/feat/bump-tokenjs
  feat: fix remote provider vision capability
* fix: prevent consecutive messages with same role (#6544)
  * fix: prevent consecutive messages with same role
  * fix: tests
  * fix: first message should not be assistant
  * fix: tests
* feat: Prompt progress when streaming (#6503)
  * feat: Prompt progress when streaming
    - BE changes:
      - Add a `return_progress` flag to `chatCompletionRequest` and a corresponding `prompt_progress` payload in `chatCompletionChunk`. Introduce `chatCompletionPromptProgress` interface to capture cache, processed, time, and total token counts.
      - Update the Llamacpp extension to always request progress data when streaming, enabling UI components to display real-time generation progress and leverage llama.cpp's built-in progress reporting.
  * Make return_progress optional
  * chore: update ui prompt progress before streaming content
  * chore: remove log
  * chore: remove progress when percentage >= 100
  * chore: set timeout prompt progress
  * chore: move prompt progress outside streaming content
  * fix: tests
  ---------
  Co-authored-by: Faisal Amir <[email protected]>
  Co-authored-by: Louis <[email protected]>
* chore: add ci for web stag (#6550)
* feat: add getTokensCount method to compute token usage (#6467)
  * feat: add getTokensCount method to compute token usage
    Implemented a new async `getTokensCount` function in the LLaMA.cpp extension. The method validates the model session, checks process health, applies the request template, and tokenizes the resulting prompt to return the token count. Includes detailed error handling for crashed models and API failures, enabling callers to assess token usage before sending completions.
  * Fix: typos
  * chore: update ui token usage
  * chore: remove unused code
  * feat: add image token handling for multimodal LlamaCPP models
    Implemented support for counting image tokens when using vision-enabled models:
    - Extended `SessionInfo` with optional `mmprojPath` to store the multimodal project file.
    - Propagated `mmproj_path` from the Tauri plugin into the session info.
    - Added import of `chatCompletionRequestMessage` and enhanced token calculation logic in the LlamaCPP extension:
      - Detects image content in messages.
      - Reads GGUF metadata from `mmprojPath` to compute accurate image token counts.
      - Provides a fallback estimation if metadata reading fails.
      - Returns the sum of text and image tokens.
    - Introduced helper methods `calculateImageTokens` and `estimateImageTokensFallback`.
    - Minor clean-ups such as comment capitalization and debug logging.
  * chore: update FE send params message include content type image_url
  * fix mmproj path from session info and num tokens calculation
  * fix: Correct image token estimation calculation in llamacpp extension
    This commit addresses an inaccurate token count for images in the llama.cpp extension. The previous logic incorrectly calculated the token count based on image patch size and dimensions. This has been replaced with a more precise method that uses the clip.vision.projection_dim value from the model metadata. Additionally, unnecessary debug logging was removed, and a new log was added to show the mmproj metadata for improved visibility.
  * fix per image calc
  * fix: crash due to force unwrap
  ---------
  Co-authored-by: Faisal Amir <[email protected]>
  Co-authored-by: Louis <[email protected]>
* fix: custom fetch for all providers (#6538)
  * fix: custom fetch for all providers
  * fix: run in development should use built-in fetch
* add full-width model names (#6350)
* fix: prevent relocation to root directories (#6547)
  * fix: prevent relocation to root directories
  * Update web-app/src/locales/zh-TW/settings.json
    Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
  ---------
  Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
* feat: web remote conversation (#6554)
  * feat: implement conversation endpoint
  * use conversation aware endpoint
  * fetch message correctly
  * preserve first message
  * fix logout
  * fix broadcast issue locally + auth not refreshing profile on other tabs + clean up and sync messages
  * add is dev tag
---------
Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: Akarshan Biswas <[email protected]>
Co-authored-by: Minh141120 <[email protected]>
Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Co-authored-by: Louis <[email protected]>
Co-authored-by: Bui Quang Huy <[email protected]>
Co-authored-by: Roushan Singh <[email protected]>
Co-authored-by: hiento09 <[email protected]>
Co-authored-by: Alexey Haidamaka <[email protected]>
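As a footnote to the context-shift fix in the log above: the change reduces to flag-presence semantics. A tiny illustrative sketch, with the `cfg` shape assumed:

```ts
// llama.cpp enables context shifting when --context-shift is present, so the
// flag is pushed only when the setting is on. (The old code instead pushed
// --no-context-shift when the setting was off, which clashed with the CLI.)
function contextShiftArgs(cfg: { ctx_shift: boolean }): string[] {
  return cfg.ctx_shift ? ['--context-shift'] : [];
}
```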
Describe Your Changes
- Serve OpenAPI spec (`static/openapi.json`) directly from the proxy server.
- Implement Swagger UI assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`) and a simple HTML wrapper under `/docs`.
- Add routing logic for `/openapi.json`, `/docs`, and Swagger UI static files.

Fixes Issues
Self Checklist
Important
Add Swagger UI to Jan API server, serving OpenAPI specs and static assets, with updated routing and CORS handling.
- Serve `static/openapi.json` via the `/openapi.json` endpoint in `proxy.rs`.
- Serve Swagger UI at `/docs` with static assets (`swagger-ui.css`, `swagger-ui-bundle.js`, `favicon.ico`).
- Update whitelisted paths in `proxy.rs`.
- Integrate CORS handling for the new endpoints in `proxy.rs`.
- Update `local-api-server.tsx` to manage server start/stop with model loading state.

This description was created by Ellipsis for 2e1d0a1. You can customize this summary. It will automatically update as commits are pushed.