Add MCP server endpoints #1453
Conversation
Walkthrough
A new optional MCP server feature is introduced to the project. The changes add the optional `mcp-server` Cargo feature, a `--mcp-port` CLI argument, and a new `mcp_server` module in `mistralrs-server` that exposes a JSON-RPC `chat` tool mirroring the `/v1/chat/completions` endpoint.
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant User
    participant MainServer
    participant MCPServer
    participant HttpMcpHandler
    participant SharedState
    User->>MainServer: Start with --mcp_port argument
    MainServer->>MCPServer: Spawn MCP server on host:port
    MCPServer->>HttpMcpHandler: Receive JSON-RPC CallToolRequest (tool="chat")
    HttpMcpHandler->>SharedState: Send ChatCompletionRequest
    SharedState-->>HttpMcpHandler: Return response via channel
    HttpMcpHandler->>MCPServer: Return CallToolResult or error
    MCPServer->>User: Respond with JSON-RPC result
```
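To make the flow above concrete, the following is a hedged client-side sketch of one `chat` tool call. The endpoint path (`/mcp`), the exact argument names, and the use of `reqwest`/`serde_json` on the client are illustrative assumptions rather than details taken from this PR:

```rust
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // JSON-RPC 2.0 CallToolRequest for the `chat` tool (field names assumed).
    let body = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "chat",
            "arguments": {
                "messages": [{ "role": "user", "content": "Hello!" }],
                "maxTokens": 64
            }
        }
    });

    // Hypothetical endpoint: host/port come from the --mcp_port flag, the path is assumed.
    let resp = reqwest::blocking::Client::new()
        .post("http://0.0.0.0:9001/mcp")
        .json(&body)
        .send()?
        .text()?;

    println!("{resp}");
    Ok(())
}
```

The server is then expected to answer with a JSON-RPC result carrying a `CallToolResult`, mirroring what `/v1/chat/completions` would return for the same conversation.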
Code Metrics Report
```text
===============================================================================
 Language            Files        Lines         Code     Comments       Blanks
===============================================================================
 C Header                3           62           53            0            9
 CSS                     1          473          408           14           51
 Dockerfile              1           42           23           10            9
 HTML                    1           78           64            5            9
 JavaScript              7         1397         1068          180          149
 JSON                   14          123          122            0            1
 Makefile                1            6            5            0            1
 Python                 90         4250         3587          165          498
 Shell                   1           63           26           18           19
 Plain Text              3         3723            0         2413         1310
 TOML                   22          797          733           11           53
 YAML                    2           21           19            2            0
-------------------------------------------------------------------------------
 Jupyter Notebooks       3            0            0            0            0
 |- Markdown             2           77           32           31           14
 |- Python               2          205          178            1           26
 (Total)                            282          210           32           40
-------------------------------------------------------------------------------
 Markdown               61         5289            0         4037         1252
 |- BASH                11          123          117            2            4
 |- JSON                 3           59           59            0            0
 |- Python               7          156          140            0           16
 |- Rust                22          796          667            1          128
 |- TOML                 2           75           63            0           12
 (Total)                           6498         1046         4040         1412
-------------------------------------------------------------------------------
 Rust                  385       135707       120601         3045        12061
 |- Markdown           179         3010           25         2676          309
 (Total)                         138717       120626         5721        12370
===============================================================================
 Total                 595       152031       126709         9900        15422
===============================================================================
```
Actionable comments posted: 3
🧹 Nitpick comments (1)
mistralrs-server/src/main.rs (1)
208-219: Consider improving error handling for the MCP server.
While the current implementation logs errors to stderr, consider using the tracing framework for consistent logging across the application. Additionally, you might want to propagate critical MCP server failures to the main application.
```diff
-    if let Some(port) = args.mcp_port {
-        let host = args
-            .serve_ip
-            .clone()
-            .unwrap_or_else(|| "0.0.0.0".to_string());
-        let mcp_server = mcp_server::create_mcp_server(mistralrs.clone(), host, port);
-        tokio::spawn(async move {
-            if let Err(e) = mcp_server.start().await {
-                eprintln!("MCP server error: {e}");
-            }
-        });
-    }
+    if let Some(port) = args.mcp_port {
+        let host = args
+            .serve_ip
+            .clone()
+            .unwrap_or_else(|| "0.0.0.0".to_string());
+        let mcp_server = mcp_server::create_mcp_server(mistralrs.clone(), host.clone(), port);
+        tokio::spawn(async move {
+            info!("Starting MCP server on {host}:{port}");
+            if let Err(e) = mcp_server.start().await {
+                tracing::error!("MCP server error: {e}");
+            }
+        });
+    }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (4)
- `Cargo.toml` (1 hunks)
- `mistralrs-server/Cargo.toml` (2 hunks)
- `mistralrs-server/src/main.rs` (4 hunks)
- `mistralrs-server/src/mcp_server.rs` (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (9)
- GitHub Check: Check (ubuntu-latest, stable)
- GitHub Check: comment
- GitHub Check: Test Suite (ubuntu-latest, stable)
- GitHub Check: Docs
- GitHub Check: Test Suite (windows-latest, stable)
- GitHub Check: Check (windows-latest, stable)
- GitHub Check: Check (macOS-latest, stable)
- GitHub Check: Clippy
- GitHub Check: Test Suite (macOS-latest, stable)
🔇 Additional comments (1)
mistralrs-server/Cargo.toml (1)
32-33: LGTM! Dependencies and feature flag are properly configured.
The workspace dependencies and the `mcp-server` feature flag are correctly set up to enable optional MCP server functionality.
Also applies to: 48-48
```toml
include_dir = "0.7.4"
http = "1.3.1"
hyper = "1.6.0"
rust-mcp-sdk = { version = "0.4.2", default-features = false, features = ["server", "hyper-server", "2025_03_26"] }
```
💡 Verification agent
🧩 Analysis chain
Consider documenting the purpose of the date-based feature flag.
The feature flag "2025_03_26" appears to be date-based, which could make future maintenance challenging. Consider adding a comment explaining what this feature represents or check if there's a more semantic feature name available.
🌐 Web query:
What does the "2025_03_26" feature flag mean in rust-mcp-sdk version 0.4.2?
💡 Result:
The "2025_03_26" feature flag in the rust-mcp-sdk version 0.4.2 likely corresponds to a specific version of the Model Context Protocol (MCP) schema. In the Rust MCP ecosystem, feature flags are often used to select different versions of the MCP schema, allowing developers to work with various schema versions as needed.
For instance, the rust-mcp-schema crate provides multiple schema versions, including "2024_11_05" and "draft". To use a specific schema version, you can enable the corresponding feature in your Cargo.toml file. While the "2025_03_26" version isn't explicitly listed in the available versions, it's possible that this feature flag is intended to select that specific schema version.
To utilize the "2025_03_26" schema version, you would typically add the following to your Cargo.toml:
```toml
[dependencies]
rust-mcp-schema = { version = "0.1.0", features = ["2025_03_26"] }
```
This configuration instructs Cargo to compile the rust-mcp-schema crate with the "2025_03_26" feature enabled, thereby selecting that specific schema version.
It's important to note that the availability of the "2025_03_26" schema version depends on its inclusion in the rust-mcp-schema crate. If this version isn't listed among the available features, it may not be supported or released yet. In such cases, you might need to consult the official MCP documentation or the maintainers of the rust-mcp-schema crate for further information.
For more details on the rust-mcp-schema crate and its available schema versions, you can visit the GitHub repository: (github.com)
Document the MCP schema version feature flag
The feature flag "2025_03_26" in rust-mcp-sdk = { version = "0.4.2", … } selects the Model Context Protocol schema v2025-03-26. To improve maintainability:
- Add an inline comment in Cargo.toml explaining that this flag pins the MCP schema to version 2025-03-26.
- Confirm that the `rust-mcp-schema` crate exposes a `2025_03_26` feature; if it doesn't, coordinate with its maintainers or choose an available schema version.
🤖 Prompt for AI Agents
In Cargo.toml at line 163, the feature flag "2025_03_26" used in rust-mcp-sdk
dependency selects the MCP schema version 2025-03-26 but lacks documentation.
Add an inline comment next to this feature flag explaining that it pins the
Model Context Protocol schema to version 2025-03-26 for clarity and
maintainability. Also verify that the rust-mcp-schema crate supports this
feature flag; if not, coordinate with its maintainers or select a supported
schema version.
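As a sketch of the suggested fix (not a change made in this PR), the dependency line could carry a short comment so the date-based flag is self-explanatory:

```toml
# "2025_03_26" pins the Model Context Protocol schema to the 2025-03-26 revision;
# bump this feature together with rust-mcp-sdk when a newer schema version is adopted.
rust-mcp-sdk = { version = "0.4.2", default-features = false, features = ["server", "hyper-server", "2025_03_26"] }
```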
mistralrs-server/src/mcp_server.rs
Outdated
```rust
let req: rust_mcp_sdk::schema::CreateMessageRequest =
    serde_json::from_value(args).map_err(|e| CallToolError::new(io::Error::other(e)))?;
// Translate to ChatCompletionRequest
let chat_req: mistralrs_server_core::openai::ChatCompletionRequest =
    serde_json::from_value(serde_json::to_value(req).unwrap())
        .map_err(CallToolError::new)?;
```
🛠️ Refactor suggestion
Optimize the request conversion to avoid unnecessary serialization.
The current implementation serializes to JSON and then deserializes, which is inefficient. Consider implementing a direct conversion method or using a more efficient mapping approach.
```diff
-            let args = request.params.arguments.into();
-            let req: rust_mcp_sdk::schema::CreateMessageRequest =
-                serde_json::from_value(args).map_err(|e| CallToolError::new(io::Error::other(e)))?;
-            // Translate to ChatCompletionRequest
-            let chat_req: mistralrs_server_core::openai::ChatCompletionRequest =
-                serde_json::from_value(serde_json::to_value(req).unwrap())
-                    .map_err(CallToolError::new)?;
+            let args = request.params.arguments.into();
+            let req: rust_mcp_sdk::schema::CreateMessageRequest =
+                serde_json::from_value(args).map_err(|e| CallToolError::new(io::Error::other(e)))?;
+            // TODO: Implement direct conversion from CreateMessageRequest to ChatCompletionRequest
+            // to avoid the overhead of JSON serialization/deserialization
+            let chat_req: mistralrs_server_core::openai::ChatCompletionRequest =
+                serde_json::from_value(serde_json::to_value(req).unwrap())
+                    .map_err(CallToolError::new)?;
```

Would you like me to help implement a direct conversion method between these request types?
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let args = request.params.arguments.into();
let req: rust_mcp_sdk::schema::CreateMessageRequest =
    serde_json::from_value(args).map_err(|e| CallToolError::new(io::Error::other(e)))?;
// TODO: Implement direct conversion from CreateMessageRequest to ChatCompletionRequest
// to avoid the overhead of JSON serialization/deserialization
let chat_req: mistralrs_server_core::openai::ChatCompletionRequest =
    serde_json::from_value(serde_json::to_value(req).unwrap())
        .map_err(CallToolError::new)?;
```
🤖 Prompt for AI Agents
In mistralrs-server/src/mcp_server.rs around lines 32 to 37, the code
inefficiently converts CreateMessageRequest to ChatCompletionRequest by
serializing to JSON and then deserializing. To fix this, implement a direct
conversion method such as From or TryFrom trait between these types or manually
map fields from CreateMessageRequest to ChatCompletionRequest without
intermediate JSON serialization.
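For illustration only, a direct conversion along the lines the prompt describes could look like the following; the stand-in structs and their fields (`messages`, `max_tokens`, `temperature`) are assumptions and would need to be replaced by the real `CreateMessageRequest` and `ChatCompletionRequest` definitions:

```rust
// Hypothetical stand-in types: the real rust_mcp_sdk / mistralrs_server_core
// request types differ in detail and should be used instead.
struct McpChatArgs {
    messages: Vec<(String, String)>, // (role, content) pairs from the MCP request
    max_tokens: Option<usize>,
    temperature: Option<f64>,
}

struct ChatCompletionRequestLite {
    messages: Vec<(String, String)>,
    max_tokens: Option<usize>,
    temperature: Option<f64>,
}

impl TryFrom<McpChatArgs> for ChatCompletionRequestLite {
    type Error = String;

    fn try_from(req: McpChatArgs) -> Result<Self, Self::Error> {
        if req.messages.is_empty() {
            return Err("at least one message is required".to_string());
        }
        // Map each field directly instead of round-tripping through serde_json.
        Ok(Self {
            messages: req.messages,
            max_tokens: req.max_tokens,
            temperature: req.temperature,
        })
    }
}

fn main() {
    let args = McpChatArgs {
        messages: vec![("user".into(), "Hello".into())],
        max_tokens: Some(64),
        temperature: None,
    };
    assert!(ChatCompletionRequestLite::try_from(args).is_ok());
}
```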
```rust
match rx.recv().await {
    Some(mistralrs_core::Response::Done(resp)) => {
        let content = resp
            .choices
            .iter()
            .filter_map(|c| c.message.content.clone())
            .collect::<Vec<_>>()
            .join("\n");
        Ok(CallToolResult {
            content: vec![CallToolResultContentItem::TextContent(TextContent::new(
                content, None,
            ))],
            is_error: None,
            meta: None,
        })
    }
    Some(mistralrs_core::Response::ModelError(msg, _)) => {
        Err(CallToolError::new(io::Error::other(msg)))
    }
    Some(_) | None => Err(CallToolError::new(io::Error::other("no response"))),
}
```
💡 Verification agent
🛠️ Refactor suggestion
🧩 Analysis chain
Review the response handling for completeness and structure preservation.
The current implementation has potential issues:
- It only receives one response, which might miss subsequent streaming responses
- Joining multiple choice contents with newlines loses the original response structure
Consider preserving the response structure or documenting the expected behavior.
🏁 Script executed:
```bash
#!/bin/bash
# Check if streaming responses are used elsewhere in the codebase
rg -A 5 "Response::Chunk|is_streaming.*true" --type rust
```
Length of output: 10753
Ensure mcp_server handles streaming responses and preserves choice structure
The current match in mistralrs-server/src/mcp_server.rs (lines 46–66) only handles Response::Done and treats any other variant (including streaming chunks) as an error. Since the core and examples use Response::Chunk for streaming, this will break any tool calls configured for streaming:
- Streaming tool responses will hit the `Some(_)` arm and return a "no response" error.
- Joining all choice contents with `"\n"` flattens multiple messages and loses per-choice metadata.
Please update this handler to either:
- Accumulate `Response::Chunk` variants (e.g. buffer incoming `chunk.choices` until a terminal signal) and then emit a single `CallToolResult` (see the sketch below), or
- Forward each chunk as it arrives (e.g. streaming `CallToolResultContentItem::TextContent`), or
- Clearly document that mcp_server does not support streaming tool responses and ensure the engine never emits chunks for this endpoint.
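A minimal sketch of the first option, accumulating streamed chunks until a terminal signal, is shown below. It uses stand-in types because the exact shape of `mistralrs_core::Response::Chunk` and its choices is assumed here rather than copied from the crate:

```rust
// Stand-in types: the real mistralrs_core response/chunk types differ in detail.
enum Response {
    Chunk(ChunkResponse),
    Done(String),
    ModelError(String),
}

struct ChunkResponse {
    content: Option<String>,
    finished: bool,
}

// Drain a channel of responses, buffering streamed chunk text until a terminal
// signal, then return the accumulated text as a single result.
async fn collect_streamed_text(
    rx: &mut tokio::sync::mpsc::Receiver<Response>,
) -> Result<String, String> {
    let mut buffer = String::new();
    while let Some(resp) = rx.recv().await {
        match resp {
            Response::Chunk(chunk) => {
                if let Some(text) = chunk.content {
                    buffer.push_str(&text);
                }
                if chunk.finished {
                    return Ok(buffer);
                }
            }
            // Non-streaming requests still finish with a single Done response.
            Response::Done(text) => return Ok(text),
            Response::ModelError(msg) => return Err(msg),
        }
    }
    Err("channel closed before a terminal response".to_string())
}

#[tokio::main]
async fn main() {
    let (tx, mut rx) = tokio::sync::mpsc::channel(8);
    tx.send(Response::Chunk(ChunkResponse { content: Some("Hel".into()), finished: false })).await.unwrap();
    tx.send(Response::Chunk(ChunkResponse { content: Some("lo".into()), finished: true })).await.unwrap();
    drop(tx);
    assert_eq!(collect_streamed_text(&mut rx).await.unwrap(), "Hello");
}
```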
Actionable comments posted: 0
🧹 Nitpick comments (5)
docs/MCP.md (4)
4-5: Grammar: Add comma after introductory phrase. Change "Under the hood the server uses [rust-mcp-sdk](…)" to "Under the hood, the server uses [rust-mcp-sdk](…)".
🧰 Tools
🪛 LanguageTool
[uncategorized] ~5-~5: A comma might be missing here.
Context: ...d of free-form HTTP routes. Under the hood the server uses [rust-mcp-sdk](https:...(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
11-12: Grammar: Clarify build instruction. Add a comma after "workspace":
```diff
- When you only compile the `mistralrs-server` crate outside the workspace enable the `mcp-server` Cargo feature manually:
+ When you only compile the `mistralrs-server` crate outside the workspace, enable the `mcp-server` Cargo feature manually:
```
🧰 Tools
🪛 LanguageTool
[uncategorized] ~12-~12: A comma might be missing here.
Context: ...hemistralrs-servercrate outside the workspace enable themcp-serverCargo feature m...(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
31-35: Specify language for output snippet. The fenced block for the example server output (`MCP ‑ listening on http://0.0.0.0:9001`) should include a language, e.g. `text` or `console`.
🧰 Tools
🪛 LanguageTool
[uncategorized] ~31-~31: Possible missing comma found.
Context: ... need MCP.* The server prints an extra line such as ``` MCP ‑ listening on http://...
(AI_HYDRA_LEO_MISSING_COMMA)
🪛 markdownlint-cli2 (0.17.2)
33-33: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
39-40: Grammar: Add comma after introductory clause. Change "At start-up the MCP handler advertises…" to "At start-up, the MCP handler advertises…"
🧰 Tools
🪛 LanguageTool
[uncategorized] ~39-~39: Possible missing comma found.
Context: ...bilities announced to clients At start-up the MCP handler advertises the followin...
(AI_HYDRA_LEO_MISSING_COMMA)
docs/HTTP.md (1)
7-8: Grammar: Missing comma after introductory phrase. Change
```diff
- > ℹ️ Besides the HTTP endpoints described below `mistralrs-server` can also expose the same functionality via the **MCP protocol**.
+ > ℹ️ Besides the HTTP endpoints described below, `mistralrs-server` can also expose the same functionality via the **MCP protocol**.
```
🧰 Tools
🪛 LanguageTool
[uncategorized] ~7-~7: Possible missing comma found.
Context: ...️ Besides the HTTP endpoints described belowmistralrs-servercan also expose the ...(AI_HYDRA_LEO_MISSING_COMMA)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- `README.md` (2 hunks)
- `docs/HTTP.md` (1 hunks)
- `docs/MCP.md` (1 hunks)
- `docs/README.md` (1 hunks)
✅ Files skipped from review due to trivial changes (2)
- docs/README.md
- README.md
🧰 Additional context used
🪛 LanguageTool
docs/HTTP.md
[uncategorized] ~7-~7: Possible missing comma found.
Context: ...️ Besides the HTTP endpoints described below mistralrs-server can also expose the ...
(AI_HYDRA_LEO_MISSING_COMMA)
docs/MCP.md
[uncategorized] ~5-~5: A comma might be missing here.
Context: ...d of free-form HTTP routes. Under the hood the server uses [rust-mcp-sdk](https:...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~12-~12: A comma might be missing here.
Context: ...he mistralrs-server crate outside the workspace enable the mcp-server Cargo feature m...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~31-~31: Possible missing comma found.
Context: ... need MCP.* The server prints an extra line such as ``` MCP ‑ listening on http://...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~39-~39: Possible missing comma found.
Context: ...bilities announced to clients At start-up the MCP handler advertises the followin...
(AI_HYDRA_LEO_MISSING_COMMA)
🪛 markdownlint-cli2 (0.17.2)
docs/MCP.md
33-33: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms (8)
- GitHub Check: Docs
- GitHub Check: Clippy
- GitHub Check: Test Suite (windows-latest, stable)
- GitHub Check: Check (ubuntu-latest, stable)
- GitHub Check: Test Suite (macOS-latest, stable)
- GitHub Check: Test Suite (ubuntu-latest, stable)
- GitHub Check: Check (windows-latest, stable)
- GitHub Check: Check (macOS-latest, stable)
🔇 Additional comments (5)
docs/MCP.md (5)
1-6: Intro is clear and comprehensive. The MCP protocol support doc provides a solid overview of the feature and defines the `chat` tool explicitly.
🧰 Tools
🪛 LanguageTool
[uncategorized] ~5-~5: A comma might be missing here.
Context: ...d of free-form HTTP routes. Under the hood the server uses [rust-mcp-sdk](https:...(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
14-16: Building instructions look good. Clear guidance on feature flags and cargo invocation.
22-27: Running instructions are well explained. Port flags and example command are precise.
37-49: Capabilities section is accurate. The table concisely lists the single `chat` tool and its purpose.
🧰 Tools
🪛 LanguageTool
[uncategorized] ~39-~39: Possible missing comma found.
Context: ...bilities announced to clients At start-up the MCP handler advertises the followin...(AI_HYDRA_LEO_MISSING_COMMA)
58-65: Calling the `chat` tool is documented clearly. The JSON examples match the protocol schema.
Actionable comments posted: 0
♻️ Duplicate comments (2)
mistralrs-server/src/mcp_server.rs (2)
143-145: 🛠️ Refactor suggestion
Optimize the request conversion to avoid unnecessary serialization.
The implementation still serializes to JSON and then deserializes, which is inefficient.
```diff
-        // Translate to the internal ChatCompletionRequest.
-        let chat_req: mistralrs_server_core::openai::ChatCompletionRequest =
-            serde_json::from_value(serde_json::to_value(req).unwrap())
-                .map_err(CallToolError::new)?;
+        // Translate to the internal ChatCompletionRequest.
+        // TODO: Implement direct conversion from CreateMessageRequest to ChatCompletionRequest
+        // to avoid the overhead of JSON serialization/deserialization
+        let chat_req: mistralrs_server_core::openai::ChatCompletionRequest =
+            serde_json::from_value(serde_json::to_value(req).map_err(CallToolError::new)?)
+                .map_err(CallToolError::new)?;
```
Also note the `unwrap()` could panic - use proper error handling instead.
157-178: ⚠️ Potential issue
Handle streaming responses properly or document the limitation.
The implementation only handles `Response::Done` and treats streaming chunks as errors. Since the MCP documentation mentions streaming is not yet implemented, consider:
- Adding a comment explaining this limitation
- Providing a more descriptive error message
```diff
 match rx.recv().await {
     Some(mistralrs_core::Response::Done(resp)) => {
         let content = resp
             .choices
             .iter()
             .filter_map(|c| c.message.content.clone())
             .collect::<Vec<_>>()
             .join("\n");
         Ok(CallToolResult {
             content: vec![CallToolResultContentItem::TextContent(TextContent::new(
                 content, None,
             ))],
             is_error: None,
             meta: None,
         })
     }
     Some(mistralrs_core::Response::ModelError(msg, _)) => {
         Err(CallToolError::new(io::Error::other(msg)))
     }
-    Some(_) | None => Err(CallToolError::new(io::Error::other("no response"))),
+    Some(mistralrs_core::Response::Chunk(_)) => {
+        // TODO: Implement streaming response handling
+        Err(CallToolError::new(io::Error::other("Streaming responses are not yet supported in MCP")))
+    }
+    Some(_) => Err(CallToolError::new(io::Error::other("Unexpected response type"))),
+    None => Err(CallToolError::new(io::Error::other("No response received from model"))),
 }
```
🧹 Nitpick comments (5)
docs/MCP.md (2)
7-7: Add commas for better readability.
The static analysis correctly identified places where commas would improve readability.
```diff
-Under the hood the server uses [`rust-mcp-sdk`](https://crates.io/crates/rust-mcp-sdk) and exposes a single tool called **`chat`** that mirrors the behaviour of the `/v1/chat/completions` endpoint.
+Under the hood, the server uses [`rust-mcp-sdk`](https://crates.io/crates/rust-mcp-sdk) and exposes a single tool called **`chat`** that mirrors the behaviour of the `/v1/chat/completions` endpoint.

-When you only compile the `mistralrs-server` crate outside the workspace enable the `mcp-server` Cargo feature manually:
+When you only compile the `mistralrs-server` crate outside the workspace, enable the `mcp-server` Cargo feature manually:

-At start-up the MCP handler advertises the following `InitializeResult` (abridged):
+At start-up, the MCP handler advertises the following `InitializeResult` (abridged):
```
Also applies to: 14-14, 35-35
🧰 Tools
🪛 LanguageTool
[uncategorized] ~7-~7: A comma might be missing here.
Context: ... of free-form HTTP routes. Under the hood the server uses [rust-mcp-sdk](https:...(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
31-31: Fix the formatting of the note.
The asterisk at the beginning appears to be a formatting error.
```diff
-* `--mcp-port` takes precedence over `--port` – you can run the HTTP and MCP servers on totally independent ports or omit `--port` when you only need MCP.*
+> **Note:** `--mcp-port` takes precedence over `--port` – you can run the HTTP and MCP servers on totally independent ports or omit `--port` when you only need MCP.
```
mistralrs-server/src/mcp_server.rs (3)
58-117: Simplify schema construction using json! macro directly.
The current implementation is verbose and uses multiple `unwrap()` calls that could panic.
```diff
 pub fn new() -> Self {
-    // Reuse the argument schema we previously constructed for the `chat` tool.
-    let required = vec!["messages".to_string()];
-
-    let mut properties: HashMap<String, Map<String, Value>> = HashMap::new();
-    properties.insert(
-        "messages".to_string(),
-        json!({
-            "type": "array",
-            "description": "Conversation messages so far",
-            "items": {
-                "type": "object",
-                "required": ["role", "content"],
-                "properties": {
-                    "role": { "type": "string", "enum": ["user", "assistant", "system"] },
-                    "content": { "type": "string" }
-                }
-            }
-        })
-        .as_object()
-        .unwrap()
-        .clone(),
-    );
-    properties.insert(
-        "maxTokens".to_string(),
-        json!({
-            "type": "integer",
-            "description": "Maximum number of tokens to generate"
-        })
-        .as_object()
-        .unwrap()
-        .clone(),
-    );
-    properties.insert(
-        "temperature".to_string(),
-        json!({
-            "type": "number",
-            "description": "Sampling temperature between 0 and 1",
-            "minimum": 0.0,
-            "maximum": 1.0
-        })
-        .as_object()
-        .unwrap()
-        .clone(),
-    );
-    properties.insert(
-        "systemPrompt".to_string(),
-        json!({
-            "type": "string",
-            "description": "Optional system prompt to prepend to the conversation"
-        })
-        .as_object()
-        .unwrap()
-        .clone(),
-    );
-
-    let input_schema = ToolInputSchema::new(required, Some(properties));
+    let schema_json = json!({
+        "required": ["messages"],
+        "properties": {
+            "messages": {
+                "type": "array",
+                "description": "Conversation messages so far",
+                "items": {
+                    "type": "object",
+                    "required": ["role", "content"],
+                    "properties": {
+                        "role": { "type": "string", "enum": ["user", "assistant", "system"] },
+                        "content": { "type": "string" }
+                    }
+                }
+            },
+            "maxTokens": {
+                "type": "integer",
+                "description": "Maximum number of tokens to generate"
+            },
+            "temperature": {
+                "type": "number",
+                "description": "Sampling temperature between 0 and 1",
+                "minimum": 0.0,
+                "maximum": 1.0
+            },
+            "systemPrompt": {
+                "type": "string",
+                "description": "Optional system prompt to prepend to the conversation"
+            }
+        }
+    });
+
+    let required = schema_json["required"]
+        .as_array()
+        .expect("required should be an array")
+        .iter()
+        .map(|v| v.as_str().expect("required items should be strings").to_string())
+        .collect();
+
+    let properties = schema_json["properties"]
+        .as_object()
+        .expect("properties should be an object")
+        .iter()
+        .map(|(k, v)| (k.clone(), v.as_object().expect("property values should be objects").clone()))
+        .collect();
+
+    let input_schema = ToolInputSchema::new(required, Some(properties));

     Self { input_schema }
 }
```
159-164: Consider preserving the response structure instead of flattening choices.
Joining multiple choices with newlines loses the structure and metadata of individual responses.
If multiple choices are expected, consider returning them as separate content items:
```diff
-            let content = resp
-                .choices
-                .iter()
-                .filter_map(|c| c.message.content.clone())
-                .collect::<Vec<_>>()
-                .join("\n");
-
-            Ok(CallToolResult {
-                content: vec![CallToolResultContentItem::TextContent(TextContent::new(
-                    content, None,
-                ))],
-                is_error: None,
-                meta: None,
-            })
+            let content_items: Vec<CallToolResultContentItem> = resp
+                .choices
+                .iter()
+                .filter_map(|c| c.message.content.clone())
+                .map(|content| CallToolResultContentItem::TextContent(TextContent::new(content, None)))
+                .collect();
+
+            if content_items.is_empty() {
+                return Err(CallToolError::new(io::Error::other("No content in response")));
+            }
+
+            Ok(CallToolResult {
+                content: content_items,
+                is_error: None,
+                meta: None,
+            })
```
182-185: Clean up the instruction string formatting.
The string has unnecessary leading and trailing newlines.
```diff
-const MCP_INSTRUCTIONS: &str = r#"
-This server provides LLM text and multimodal model inference. You can use the following tools:
-- `chat` for sending a chat completion request with a model message history
-"#;
+const MCP_INSTRUCTIONS: &str = "This server provides LLM text and multimodal model inference. You can use the following tools:\n- `chat` for sending a chat completion request with a model message history";
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- `docs/MCP.md` (1 hunks)
- `mistralrs-server/src/main.rs` (5 hunks)
- `mistralrs-server/src/mcp_server.rs` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- mistralrs-server/src/main.rs
🧰 Additional context used
🪛 LanguageTool
docs/MCP.md
[uncategorized] ~7-~7: A comma might be missing here.
Context: ... of free-form HTTP routes. Under the hood the server uses [rust-mcp-sdk](https:...
(AI_EN_LECTOR_MISSING_PUNCTUATION_COMMA)
[uncategorized] ~14-~14: Possible missing comma found.
Context: ...he mistralrs-server crate outside the workspace enable the mcp-server Cargo feature m...
(AI_HYDRA_LEO_MISSING_COMMA)
[uncategorized] ~35-~35: Possible missing comma found.
Context: ...bilities announced to clients At start-up the MCP handler advertises the followin...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms (8)
- GitHub Check: Check (windows-latest, stable)
- GitHub Check: Docs
- GitHub Check: Check (ubuntu-latest, stable)
- GitHub Check: Test Suite (windows-latest, stable)
- GitHub Check: Test Suite (macOS-latest, stable)
- GitHub Check: Clippy
- GitHub Check: Check (macOS-latest, stable)
- GitHub Check: Test Suite (ubuntu-latest, stable)
@EricLBuehler I ran into some hiccups but this is a really awesome idea! When following along with the docs (let me know if I missed anything obvious):
mistralrs-mcp2.mp4 (video attachment)
```rust
use anyhow::Result;
use clap::Parser;
use mistralrs_core::{initialize_logging, ModelSelected, TokenSource};
use rust_mcp_sdk::schema::LATEST_PROTOCOL_VERSION;
```
@EricLBuehler for my own curiosity, since there is a feature flag for version "2025_03_26" on the crate dep, will this LATEST_PROTOCOL_VERSION constant be pinned to that or will it be whatever the latest schema / proto version is (and could change in the future)?
@matthewhaynesonline we will need to keep the version of rust_mcp_sdk in sync with the latest schema/proto version.
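One way to make that coupling explicit, sketched here under the assumption that the constant is a plain `&str`, is a startup assertion that fails loudly when a future `rust-mcp-sdk` upgrade changes the reported protocol version:

```rust
// Sketch: fail fast at startup if the SDK's protocol version drifts from the
// version this server was written and tested against (an assumption, not PR code).
const EXPECTED_PROTOCOL_VERSION: &str = "2025-03-26";

fn check_protocol_version(latest: &str) {
    assert_eq!(
        latest, EXPECTED_PROTOCOL_VERSION,
        "rust-mcp-sdk now reports protocol {latest}; review the MCP handler for breaking changes"
    );
}

fn main() {
    // In the real server this would pass rust_mcp_sdk::schema::LATEST_PROTOCOL_VERSION.
    check_protocol_version("2025-03-26");
}
```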
Actionable comments posted: 2
♻️ Duplicate comments (1)
mistralrs-server/src/mcp_server.rs (1)
180-201: Streaming response support is still missing.
The response handling only processes `Response::Done` and treats any other variant (including `Response::Chunk` for streaming) as an error. This will cause streaming requests to fail with "no response" errors.
Consider implementing proper streaming support by accumulating chunks or documenting that streaming is not supported for MCP tools.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `mistralrs-server/src/main.rs` (5 hunks)
- `mistralrs-server/src/mcp_server.rs` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- mistralrs-server/src/main.rs
⏰ Context from checks skipped due to timeout of 90000ms (9)
- GitHub Check: Clippy
- GitHub Check: Docs
- GitHub Check: Check (macOS-latest, stable)
- GitHub Check: Check (windows-latest, stable)
- GitHub Check: Test Suite (ubuntu-latest, stable)
- GitHub Check: Check (ubuntu-latest, stable)
- GitHub Check: Test Suite (macOS-latest, stable)
- GitHub Check: Test Suite (windows-latest, stable)
- GitHub Check: comment
🔇 Additional comments (6)
mistralrs-server/src/mcp_server.rs (6)
1-30: LGTM! Clean imports and standard JSON-RPC error codes.
The import structure is well-organized and the error codes follow the JSON-RPC 2.0 specification properly.
32-57: Well-structured JSON-RPC types.
The request/response structures correctly implement the JSON-RPC 2.0 specification with proper optional field handling.
59-80: Excellent trait design for extensible MCP tools.
The `McpTool` trait provides a clean, async interface that enables easy addition of new tools while maintaining type safety.
167-168: Request conversion efficiency looks improved.
The direct conversion from `serde_json::Value` to `ChatCompletionRequest` is more efficient than the previous double serialization approach mentioned in past comments.
217-241: Well-designed handler initialization.
The handler setup correctly initializes tools and server capabilities following the MCP protocol specification.
330-357: Clean and well-structured server setup.
The Axum-based server implementation follows best practices with proper error handling and a clean single-endpoint design appropriate for JSON-RPC.
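For readers unfamiliar with that pattern, here is a minimal, hedged sketch of a single-endpoint JSON-RPC router in Axum 0.7 style; the route path, port, and dispatch logic are illustrative assumptions, not the handler from this PR:

```rust
use axum::{routing::post, Json, Router};
use serde_json::{json, Value};

// Every MCP request is POSTed to one route and dispatched on the `method` field.
async fn handle(Json(req): Json<Value>) -> Json<Value> {
    let id = req.get("id").cloned().unwrap_or(Value::Null);
    match req.get("method").and_then(Value::as_str) {
        Some("tools/list") => Json(json!({ "jsonrpc": "2.0", "id": id, "result": { "tools": [] } })),
        _ => Json(json!({
            "jsonrpc": "2.0",
            "id": id,
            "error": { "code": -32601, "message": "Method not found" }
        })),
    }
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/", post(handle));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:9001").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```

Dispatching on the JSON-RPC `method` field keeps the routing logic in one place, which is why a single POST route is usually enough for an MCP-style server.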
Actionable comments posted: 1
🧹 Nitpick comments (2)
docs/MCP.md (2)
7-7: Add missing comma after introductory phrase.
The sentence reads better and conforms to style guidelines with a comma after "Under the hood".
```diff
- Under the hood the server uses [`rust-mcp-sdk`](https://crates.io/crates/rust-mcp-sdk) and exposes tools based on the supported modalities of the loaded model.
+ Under the hood, the server uses [`rust-mcp-sdk`](https://crates.io/crates/rust-mcp-sdk) and exposes tools based on the supported modalities of the loaded model.
```
🧰 Tools
🪛 LanguageTool
[uncategorized] ~7-~7: Possible missing comma found.
Context: ... of free-form HTTP routes. Under the hood the server uses [rust-mcp-sdk](https:...(AI_HYDRA_LEO_MISSING_COMMA)
46-46: Specify language for fenced code block.
Markdown linting flags this block for missing language identifier; adding `bash` to the opening fence improves readability and tooling support.
🧰 Tools
🪛 markdownlint-cli2 (0.17.2)
46-46: Fenced code blocks should have a language specified
null(MD040, fenced-code-language)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- `docs/MCP.md` (1 hunks)
🧰 Additional context used
🪛 LanguageTool
docs/MCP.md
[uncategorized] ~7-~7: Possible missing comma found.
Context: ... of free-form HTTP routes. Under the hood the server uses [rust-mcp-sdk](https:...
(AI_HYDRA_LEO_MISSING_COMMA)
🪛 markdownlint-cli2 (0.17.2)
docs/MCP.md
13-13: Table column count
Expected: 3; Actual: 4; Too many cells, extra data will be missing
(MD056, table-column-count)
46-46: Fenced code blocks should have a language specified
null
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms (8)
- GitHub Check: Docs
- GitHub Check: Test Suite (windows-latest, stable)
- GitHub Check: Check (macOS-latest, stable)
- GitHub Check: Clippy
- GitHub Check: Test Suite (ubuntu-latest, stable)
- GitHub Check: Test Suite (macOS-latest, stable)
- GitHub Check: Check (ubuntu-latest, stable)
- GitHub Check: Check (windows-latest, stable)
@matthewhaynesonline I pushed some changes and these should all be fixed. We are using the new
* Fix handling of Metal fused attn head dims (EricLBuehler#1234) * Fix handling of metal attn head dims * Fix handling of gemma3 1b when images * Tweak default for paged attn builder * Support paged attn for vision model rust api (EricLBuehler#1235) * [Breaking] Support setting HF cache path (EricLBuehler#1237) * Add it internally * Add the apis * Support tool calling for DeepSeek models (EricLBuehler#1239) * Support tool calling for deepseek models * Format * Fix deepseek * Server image processing refactor and fixes (EricLBuehler#1244) * Fix strict gemma3 case * Accept multiple images in the content array * Fix multiple images in one array ct * Add it to the python api * Typos * Optimized CUDA RoPE kernels (EricLBuehler#1247) * Add the kernels * It works * Works * Buulds * Typo fix (add_speial_tokens to add_special_tokens) (EricLBuehler#1246) * Fix typo * Update mistralrs.pyi * Fixes for UQFF + distributed layers (EricLBuehler#1250) * Fixes for uqff + distributed layers * Typo * Automatic agentic search integration (`web_search_options`) (EricLBuehler#1243) * Add the tool * Actually search * Clippy * Sort of works * Remove some debuggers * tweak * Add some rules * Works great * Tweak 'system' prompt * Update mistralrs-core/src/search/mod.rs Co-authored-by: Copilot <[email protected]> * Typo * Add it to all the apis * Add bert model for similarity reranking * Typos * Early detection of tools * Alias max_tokens -> max_completion_tokens too * Customizable bert model * Flip the enabler around * Add docs * Update readme * Typo --------- Co-authored-by: Copilot <[email protected]> * Format kernels (EricLBuehler#1251) * Update readme * Update readme * Remove test * Add quantize guards for uqff deserialize (EricLBuehler#1252) * Refactor cuBLASlt-related code (EricLBuehler#1253) * Centralize cublaslt into mistralrs-quant * Use cublaslt in unquant layer * Use beautiful trait constants for simpler code * Move tests * Dispatch to unquant for cublaslt * Dispatch to unquant for cublaslt * Fix feature * Add convert_to_gptq script * Update deps, bump pyo3 version (EricLBuehler#1259) * Faster cuda FP8 performance (EricLBuehler#1257) * Avoid fp8 sync * Fix dtype * Rust 1.86 clippy (EricLBuehler#1260) * Rust 1.86 clippy * Clippy * Refactor engine arch (EricLBuehler#1262) * Refactor engine add_request * Don't recompile regex * Clippy * Revamped LoRA support - removing the Ordering system! (EricLBuehler#1263) * Play with varbuilder lifetimes * Merge lora weights * Clippy * Lora works * Support multiple loras * Cleanup, remove adapter activation * Complete merge * Fast Metal-specific quantization method: AFQ (EricLBuehler#1264) * Add mlx quantized kernels * Add mlx quantized kernels * Kernel launcher * Add AFQ isq quant and dequant * Some quantmethod things * Begin to implement the qmm caller * Clippy * Much faster * Cache kernels * Docs * Clippy * Add it to uqff * Support prequantized models from MLX (EricLBuehler#1265) * Refactor quantizedconfig * Support AFQ prequantized * Update docs * Update docs * Automatic ISQ to select fastest & most accurate method (EricLBuehler#1266) * Automatic isq * typo * Doc * Improved usage metrics (EricLBuehler#1267) * Fix cuda * Bump tokio from 1.44.1 to 1.44.2 (EricLBuehler#1270) Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.44.1 to 1.44.2. 
- [Release notes](https://github.com/tokio-rs/tokio/releases) - [Commits](tokio-rs/tokio@tokio-1.44.1...tokio-1.44.2) --- updated-dependencies: - dependency-name: tokio dependency-version: 1.44.2 dependency-type: direct:production ... Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * Gather MM ops in mistralrs-quant (EricLBuehler#1272) * Update the caller * Wire things up * Broadcase for afq gathermm * Broadcase for afq gathermm * Clippy * Improve performance of deepseek models * Typo fix * BincountOp not used * Implement Llama 4! (EricLBuehler#1268) * Implement Llama 4 * Implement the main changes for the text model * Make chunked mask * Wire things up * Add some EP * Initial sketch of inputs processor * Runs * Progress * all reduce moes * It works! * Some cleanup * Faster moe block * Add device map * Make chunked matrix * Fully working now! * Reactivate cublaslt * Fix shared mlp cublaslt * Refactor to packed experts * Complete merge * It is a normal model now * Fixes * Set device for moe * ISQ fixes * Much faster sort kernel * Faster loading! * Faster loading! * Fp8 cpu copy ops in candle backend * Add the vision model * Add mmproj layer * Actually merge the inputs * Sketch most of the image processor * Add the rest of the image processor * Implement the whole processor * Add the loader * Some fixes * A batch of fixes * Some fixes * tmp * Actually support isq * Ok it works a bit * Fix norm device * It works * A bit cleaner * Support residul tensors * Remove text loader * Implement the device mapping system * Fix auto device map * Add examples * Add model card * Typo * Remove superflous logging * Fixes for Llama 4 UQFF loading (EricLBuehler#1275) * Support sharding for UQFF (EricLBuehler#1276) * Serialize sharded uqff files * Loading * Fix base64 * Fix bug for group-topk (group_limited_greedy) in deepseek models (EricLBuehler#1278) * Support the DeepCoder model (EricLBuehler#1279) * Add faq for metal not found * Improved PagedAttn scheduling accuracy (EricLBuehler#1282) * Scheduler ops by reference * Ensure scheduler gets correct prompts * Fix cuda build for copy_blocks * Fixes for scheduling image seqs with pagedattn (EricLBuehler#1283) * update to llguidance 0.7.16 (EricLBuehler#1284) * update llguidance to 0.7.16 from crates.io; use ParserFactory * add lark_llg.py example * use new llguidance::Matcher APIs * rework spec-decoding with llg * more work on spec sampling * check for parser stop * fix clippy * remove unneeded rollback * update build_llg_factory to return Result * Update dependencies (EricLBuehler#1286) * Much faster image inputs processing (EricLBuehler#1289) * Add more SDPA head dims for much faster SigLIP (EricLBuehler#1290) * More sdpa head dims, faster vision models * Move nonzero to above for faster metal synch * Doc * Update valid head dims * Show throughput in interactive mode (EricLBuehler#1291) * Update interactive mode throughput stats * Accurate prompt t/s * Accurate prompt t/s for usage * Unify bitwise operations (EricLBuehler#1288) * Unify bitwise ops * Tests pass * Fix cuda build * Clippy * Multimodal prefix caching support! (EricLBuehler#1298) * Initial progress * Support vision prefix caching * Update docs * Add multimodal data abstraction * Interactive mode improvements (EricLBuehler#1299) * More ergonomic image url parsing * Add option to clear * Add the Qwen 3 and Qwen 3 MoE models! 
(EricLBuehler#1285) * Add qwen3 model * Add enable_thinking * Add initial qwen3 moe * Add the moe model * Format * Fix order of norm * Fix expert shapes * Fix reverse * Fix norm device for isq * Fix nonzero when no nonzero * Moe model runs * Working qwen3 moe * Add metal fp8 blockwise dequant * Clean * Typo * Enable tool calling * Streamlined ux * Add some examples * Add docs * Fix dead link * Remove interactive mode max_len * Update QWEN3.md * Hotfix for vision mode clear * Revamped and streaming web search support (EricLBuehler#1301) * Streaming web search * Refactor a bit * More refactoring * Add some logging, parallelize some things * Allow url * Suppress warning, allow multi-turn searching * Batch compute_similarities * Cap content len * Typos * Doc * Handle vision messages or different tool call prefixes (EricLBuehler#1302) * Fix cuda * Tune web search budget * Simplify prefix cacher (EricLBuehler#1305) * Use rustyline to handle non-ascii in interactive mode (EricLBuehler#1306) The io::stdin().read_line() cannot handle non-ascii input, which caused crash when use backspace to delete non-ascii characters. Introduce rustyline to the interactive mode to solve the problem. Plus it can bring more editing features in the future. Close EricLBuehler#1140 * Add more tools for automatic search (EricLBuehler#1307) * Add interactive mode history * Add a website extraction tool * Pass toks by reference * Optimize prompt chunking * Fix CPU hogging in interactive mode (EricLBuehler#1309) The log enabler should be checked after the sleep instead of a busy loop checking. Since the interactive mode always disables the token speed logger, 100% CPU was taken by this loop always. * Add Metal precompilation support (EricLBuehler#1311) * Add metal precompilation for paged attn * Add for mistralrs-quant * Better constructor * Dont always build * Fix name for paged attn rebuild * Reduce thrashing of Metal autorelease (EricLBuehler#1313) * Reduce calls to autorelease * Optimize clone_in_cache * Refactor float8 * make `AdapterPaths` and `LoraAdapterPaths` public (EricLBuehler#1314) Make `AdapterPaths` and `LoraAdapterPaths` public so `LocalModelPaths` can be constructed outside of `mistralrs-core`. * Refactor KV cache manager (EricLBuehler#1315) * Refactor kv cache * Refactor caches * Fix some overflows * Add `Audio` and `Speech` model categories (EricLBuehler#1317) * add `Audio` to `ModelCategory` * add `Speech` to `ModelCategory` * fix to go back to PartialEq having an exhaustiveness check * Remove has_conv2d from vision model API (EricLBuehler#1318) * Unified/automatic flash attention enabler (EricLBuehler#1319) * Remove from sdpa params * Fix errors * No warnings * Log * Clippy * Fix cublaslt 4d mask (EricLBuehler#1320) * Fix cublaslt 4d mask * Clippy * Keep caches on gpu * Qwen VL models fixes (EricLBuehler#1322) * Add some defaults * Fix * Fix one thing * 2.5 vl works * Use caching again * Fix v2 * Move index inside loop * Offset in ropeidx * Default support for vision prefix caching is false * Fixes for all vision models (EricLBuehler#1323) * Fix phi input processor? 
* Fix phi input processor * Handle no_prefix_cache from pipeline * Phi models confirmed 👍 * Fixed for phi inputs processors * Fixed for phi4 * Llama 3 confirmed 😀 * Mistral 3 confirmed 😃 * Idefics 2/3 fixes * Some fixes * Remove unsafety * Improved+faster LRU prefix cacher (EricLBuehler#1321) * Show TTFT * Use LRU prefix cacher * Faster prefix cacher * Inplace ISQ support and default to mmap (EricLBuehler#1277) * Initial impl of immediate isq * Immediate isq -> !loading_isq * Varbuiler utils always using mmap! * Log * Add for packed experts * Afq without copy * Clarify * Clippy * Apple immediate isq * Better logic for loading_isq * Support showing ttft * Rename * Shared quantize guard * Parallel progress bar * Parallel loading for progress bars * Actual ISQ support * Conditional parallelism for NiceProgressBar * Use conditional iterator * Warn once * Predicate for applying immediate isq * Allow parallel * Remove debug print * Remove debug print * Remove debug print * Fix typos (EricLBuehler#1329) * Fix Idefics 3 arch chat templating (EricLBuehler#1330) * Update inputs merger * Fix * Better warning * Better warning * Better warning * Nonzero ahead of time * No f32 * Clippy * Optimize get_logprobs * Fix packed experts * Update masking * Use Sdpa in idefics3 * QuantMethod in idefics3 vision * Remove a .contiguous * Remove two space from PR comment (EricLBuehler#1331) * Add automatic vision loader type (EricLBuehler#1332) * Add automatic vision loader * Remove references to --arch * Update examples * Add the Dia 1.6b TTS model! (EricLBuehler#1304) * Add loading * Add rope, mlp, most of attn * Add encoder + encoder layer, decoder layer forwards * Add decoder forwards * Add prepare_audio_prompt * prepare_generation mostly done * Add a proper dia kvcache * Add most of decoder_step * Add the sampler * Add the generation loop * Wire things up * Add speech pipeline * Fixes * Loads * Some fixes * f32 * Some progress * Ok it runs upto dac decoding * Add dac part loading * Loads and runs at least * Remove encodec * Debugging * Debugging * Huh * Complete merge * Interactive * Confirmed dac works at least * Looks like encoder works * Much progress * Hmm * Sampling * Almost there * Sampler * Sampler * Bf16 support * Response * Use it in interactive mode * Fix oneshot * Add openai api * Add openai api * Refactor loading * Use naive sdpa for inplace * Factor out * Clippy * Clippy * Config * Refactor config * Metal clippy * Fix t/s * ISQ support * Some fixes, nits * Fix cuda * Clippy * Inhibit cublaslt for cuda * Add server example * Add python example * Add rust api * Add docs * Update config.toml * Fix .pyi * Update readme * config.toml tweak * config.toml tweak * config.toml tweak * config.toml tweak * config.toml tweak * config.toml tweak * config.toml tweak * config.toml tweak * config.toml tweak * update `llguidance` to `0.7.20` (EricLBuehler#1334) Update `llguidance` from `0.7.16` to `0.7.20` so that it has guidance-ai/llguidance#172 which is a fix for building on GCC 15. 
* Add model category <> messages check (EricLBuehler#1335) * Verify model category matches the messages * Add vision chat * Fixes * Add element-wise normalization check (EricLBuehler#1340) * Fix streaming example print statement (EricLBuehler#1339) * Fix normalization formula in comment (EricLBuehler#1338) * Fix image_to_pixels to handle non-RGB images (EricLBuehler#1337) * Fix typo in expect messages (EricLBuehler#1342) * Don't use mmap on cuda (EricLBuehler#1336) * No mmap on cuda * Simplify streaming tool call logic * Remove debug * Support AWQ format models (EricLBuehler#1350) * Support AWQ format models * Clippy fix * Fix uqff dummy layer ISQ application (EricLBuehler#1351) * Disable immediate isq if write_uqff (EricLBuehler#1352) * Fixes for UQFF loading on CUDA, ISQ pack factor (EricLBuehler#1354) * Fix logic for uqff on cuda * Updated pack_factor * Refactor Option references for model paths (EricLBuehler#1347) * refactor: use Option refs in model path helpers * Format * Add a script for server benchmarking (EricLBuehler#1355) * Serde alias * Fix * Update for tie_word_embeddings * Print running/waiting * 30 users * Update num_users * Update dummy paged attn * Optimized Metal qmv_fast path (EricLBuehler#1356) * Compile with lto * Tweak profiles * New, fast sampler for Metal! (EricLBuehler#1327) * Show TTFT * Use LRU prefix cacher * Faster prefix cacher * A bit of gpu sampling * Minp but cpu for now * Metal fast cumsum impl * Sampling with fast topp kernel * Hmm not perfect * Add metal sort kernels * Tmp * Add single block sort * Add most of multi block sort, just need copy op * Add copy kernels * Expose kernels * Add a test * Ok it works * Structure things * Add caching * Rename * Cpu is default * CUDA case * Topk * Refactor Option references for model paths (EricLBuehler#1347) * refactor: use Option refs in model path helpers * Format * Add a script for server benchmarking (EricLBuehler#1355) * Serde alias * Fix * Update for tie_word_embeddings * Print running/waiting * 30 users * Update num_users * Update dummy paged attn * Optimized Metal qmv_fast path (EricLBuehler#1356) * Compile with lto * Tweak profiles * Fix topk * Penalties * Add logits processor, clippy fixes * Fix chat port * Remove warning * Fix chat port * Fix metal parallel sampling (EricLBuehler#1357) * Cpu if parallel for now * Tweak bench script * Add immediate isq predicates for qwen3 (EricLBuehler#1358) * Add immediate isq predicates for qwen3 * Fix parsing of "parse_isq_value" depedent of device * Typo * Fix gemma3 logging * Regressions fixes (EricLBuehler#1359) * Fix regression for mmap * Revert EricLBuehler#1321 * Refactored matching_cache impl * Clippy * Revamped and smaller readme (EricLBuehler#1360) * Expandable detail sections * Refactor using derivative model * Tweak quick examples * Update llama * Update llama * Supported accelerators is a table * Update installation guides * Tweak apis * Remove --port in quick examples * Add demo gif * Add gif in readme * Update demo gif * Update demo gif * Update demo gif * Add gif in readme * Add gif in readme * Add a web chat app! (EricLBuehler#1362) * Initial * Markdown * Copy code * Add model loading sidebar * Support vision models * Tweak isq * Links go to another page * Clear when switch model * Fix html tags * Add image support! 
* More then one images * Fix * Improved textarea * Tab for switching between vision and text * No paged attn for now * Prettier format * Multiple models at once * Better switching, clearing ability * Mobile support * Inline markdown parser * Update examples * Typos * Support specifying isq * Fix mobile * Fixes * Fix button on mobile * Image height is capped * Thumbnail * Fix rotating kv cache edge case * Add drag and drop for images * Small things * Sidebar is frozen now * Better listner * Add readme * Tweak readme * Add chat history support to web chat app (EricLBuehler#1363) * Add chat history * Support renaming * Start immediately with new chat * Add timestamp * Prettier chat list * Style * Delete chat * Fix copy button * Fix markdown rendering * Store things in cache * Store things in cache * Refactor web chat, fix multichat image restore (EricLBuehler#1364) * Fix multichat image restoration. * Clippy * Refactor * Refactor frontent * Fix repeated immediate isq init (EricLBuehler#1365) * Add images_ref * Add debug impl * Fix the bug * Tweak style of buttons * Add a spinner * Move spinner * Tweak emoji * Add gif * Tweak initial gif * Include vision tower tensors in Mistral3 UQFF (EricLBuehler#1366) * Fix mistral 3 uqff resitdual tensors for vision * Rolling shard creation for uqff files (EricLBuehler#1367) * Fix occasional unstability during isq of afq (EricLBuehler#1368) * Fix unstability during isq of afq * Clippy * Fix web chat installation * Support web chat file uploading (EricLBuehler#1370) * Web chat fixes * Fix thumbnail in message, reuse blank chat * Add file uploading support * Fix scroll * Allowed extensions * Preserve files as literals * Support multiple clients * Add a stop button * New cache dir * New cache dir * Fix * Refactor * Update readme * Tweak drag-and-drop css * Add speech generation support to the web chat! (EricLBuehler#1373) * Initial speech gen support for web chat * Tweak ui * Update docs * Prefix caching for PagedAttention! (EricLBuehler#1369) * Exposing some things for logical token blocks * Prefix cache manager has the scheduler * Refactor * Get logical and physical blocks into the prefix cacher * Hash and cache * Pass physical block prefill * Allocation of prefilled block tables * Temp * Dont always use 2 * Hmm * Hmm * It mostly works * Increment refcount * Support images! * Add to dummy paged attn * Fix some clippy * Clippy * More checks * Include EricLBuehler#1371, closes EricLBuehler#1371 * Typos * Update docs * Metal PagedAttention accuracy improvements (EricLBuehler#1374) * Fix subtle bug * Fix half sum bug * Format metal paged attention * Handle images in paged attn scheduler (EricLBuehler#1375) * Include schemas needed for chatcompletions endpoint (EricLBuehler#1353) * EricLBuehler#1326: WIP include schemas needed for chat completions endpoint Conflicts: Cargo.lock mistralrs-server/src/main.rs * EricLBuehler#1326: WIP define utoipa as a workspace dep since core and server both need it * EricLBuehler#1326: first draft of handling schemas that use Either * EricLBuehler#1326: first draft of handling schema for Grammar * EricLBuehler#1326: Add in other endpoints to API docs. 
* EricLBuehler#1326: Adjust code comments * EricLBuehler#1326: Implement coderabbitai suggestions - EricLBuehler#1353 (review) - EricLBuehler#1353 (comment) * Fix constraints with metal sampler * Revert EricLBuehler#1375 * Fix case where prefix cacher returns no toks (EricLBuehler#1377) * Fix AFQ UQFF serialization * Faster UQFF serialization (EricLBuehler#1379) * Faster UQFF serialization * Fix uqff gemma3 * Improve gemma3 auto loader names * UQFF creation for AFQ on CPU support (EricLBuehler#1380) * Add afq cpu quantize/dequantize * Clippy * Improved device for afq quantize * Improved dtype handling for cpu afq (de)quantize * Improved generate_uqff_card * Add fused CPU attention kernel! (EricLBuehler#1382) * Working * Fix warnings * Allow mask * Support bf16, f16 * Handle striding * Parallelized * Add initial vector flash attn * Avoid repeated allocations * Tiled kv * Apply some clippy * Some small fixes * Chunked vec_dot * Clipy * Use T::zero * Refactor attention backends (EricLBuehler#1384) * Refactor attention code * Refactor attention code * Move into backends * Set macOS thread affinity for CPU attn (EricLBuehler#1385) * Use lazylock * Format * Fix metal warn build * Faster Qwen 3 MoE support on Metal (EricLBuehler#1387) * Fix load * Use afq gather qmm * Well it runs * It works * Polish * Fast and slow options * Remove quantized.rs * Polish some more * Refactor * Add isq * Update load in parallel * Support fp8 * Refactor for FusedExperts * Clippy * Handle pack factor when loading prequantized models * Use f32 only in moe * Avoid using f32 so much * Avoid using f32 so much * Fix PagedAttention block leaks (EricLBuehler#1388) * Warn and ignore if ignored * Fix a block allocation leak * Update bench.py * Fix double free in block engine * Do not apply ISQ if loading a prequantized model * Fix cuda build again (EricLBuehler#1389) * Fix cuda build * Fix * Format * Fixes for cuda docker * Update dockerfiles * Bump version to 0.6.0 (EricLBuehler#1390) * Bump version to 0.6.0 * Remove lower_level api * Make a static dir * Update deps * Fix routing for static handler in web chat * Fewer .contiguous calls for qwen3 moe (EricLBuehler#1391) * Allow speech models to accept batched inputs (EricLBuehler#1393) * Allow speech models to accept batched inputs * Clippy * Ring distributed backend for heterogeneous TP (EricLBuehler#1238) * Begin work on ring distributed backend for Metal * Add the actual ring functionality * It loads and kind of runs * It works * Optimize buffer allocation * Avoid copy * It works * Add allgather * Fix load * Ping-pong * Small things * Add config json * Allow different ip address * Read config once * Read config when appropriate * Replicate requests * Small fix * Fix small compat with openai * Clippy * Update docs * Add deepseek tool calling chat template * Add auto loader for vision/text detection! 
(EricLBuehler#1402) * Add auto loader for vision/text detection * Build fixes * Add model loader * Update docs * Format * Create Mistral.rs Server Core Lib: `mistralrs-server-core` (EricLBuehler#1346) * First draft of exposing mistral server routes as lib * make arg struct fields pub * Take base path so utoipa swagger route can properly redirect * Expose swagger routes and make it configurable * Add base path option for swagger docs * More work on modularizing mistralrs server * Sync fork (+1 squashed commit) Squashed commits: [169ae9e] Sync fork * Adjust fn params to use refs / individual params instead of args * Start breaking down controller actions into smaller pieces * Continue refactoring * Make mods pub so they can be used outside crate * Allow chat completion streamer to take a callback so that you can get the complete response when finished WIP (+3 squashed commits) Squashed commits: [0061d87] WIP [c484d56] WIP [16f8a60] WIP * Sync fork * Adjust callback type * Remove throughput_log arg that was removed in 26afcc3 * Implement defaults for Args (and use for Clap) * Small code formatting tweaks * Rename callback to match SSE event and code clean up * Sync fork * WIP: first very rough draft of server core builder. Doesn't meet parity with old functional approach yet (slower / unstable?). * Clean up (+4 squashed commits) Squashed commits: [e1cff387] Sync fork [d8301025] WIP debugging [1ea9f8c8] Sync fork [4fe28cf5] WIP: debug function * WIP server core builders * Code clean up * Add on_chunk callback * Code clean up * First draft of creating version of mistral-server that uses server-core Code clean up (+1 squashed commit) Squashed commits: [adea1693] * Sync fork * Add helper methods to builder to make optional args more ergonomic (since .build validates params) * Start adding docs * Start cleaning up crates deps * Example commit of mistral-server with implementing server-core * Start addressing CodeRabbit feedback * Fix comment typo * Tweak doc blocks * - Update type alias naming for clarity (MistralRs instead of Mistral) - CodeRabbit, don't use eprintln for lib (use trace) - Allow buffer size to be passed in and default to Constant - Allow router body limit to be passed in and default to Constant - Update doc examples * Typo * Address CoderRabbitAI feedback * Support linear rope for llama3 (EricLBuehler#1408) * Hotfix for loading * Fix vllama4 uqff loading (EricLBuehler#1409) * Fix vllama4 uqff loading * Fix regex * Fix regex * Maybe a fix * Gracefully handle receiver disconnects (EricLBuehler#1410) * Handle receiver disconnects * Format * Fix Qwen3 MoE device mapping irregularities (EricLBuehler#1411) * Fix bias * Fix lm_head packing case * Account for gate * Fix head dim * Fix interactive mode URL parsing (EricLBuehler#1412) * fix url regex in vision interactive mode * Fix regex * Clippy * Refactor auto device map (EricLBuehler#1413) * Refactor auto device map * Refactor a bit more * Clippy * Enable runtime sampling tweaks in interactive mode (EricLBuehler#1414) * Document runtime sampling commands * Fix readme * Tweak * Bounds checking * Tweak temp bounds * Send streaming tokens every time * Gumbel sampling for fast sampler (EricLBuehler#1416) * Improved handling for initialize_logging * Improved CPU flash attention accuracy & performance (EricLBuehler#1417) * Downcast correctly * Operate internally in f32 * Avoid some casts and striding * Prefetch * Provide chat_templates to container users (EricLBuehler#1419) Models often come without chat templates requiring mapping them from 
the source repository into a container for access by the mistralrs-server. Copy the templates from the build tree into the root of the image to permit use via `--chat-template /chat_templates/something.json` TODO: With the increase in quantized models and support for other formats, the initial benchmark run during model load can be used to qualify/select existing chat templates embedded into the binary for models which do not come with any (to include output of the functional failures in each test allowing users to modify the ones already provided correctly to suit the model being loaded). Co-authored-by: RageLtMan <rageltman [at] sempervictus> * Faster cpu flash attn (EricLBuehler#1418) * Faster cpu flash attn * Prefetch * Clippy * Add some tests * Add softcap tests * Fix test_parse_image_url test * Update tests * Update tests * Web search improvements (bm25, web chat) (EricLBuehler#1420) * Fix web search blocking case * Web search support in web chat * Tweak ui * Support fallback to bm25 * Clippy * Reinject descriptions * Propely handle consecutive searches (EricLBuehler#1421) * Update extraction tool reinjection * Looped * Update docs (EricLBuehler#1422) - lib.rs: clean up example var names and match logging change from EricLBuehler@201d6be - server_builder: fix typo - READMEs: link to crate docs * Better tool call detection logic (EricLBuehler#1424) * Add web search hook callbacks (EricLBuehler#1426) * feat: add customizable search hook * Move to builder * Update docs * Fix CUDA context switching, bind thread on CudaStorage drop (EricLBuehler#1428) * Add CUDA context helper and use in Llama forward * No flashparams? * working * Tweak * Update to use dep * conditionally build flash attention inputs (EricLBuehler#1429) * Add AGENTS.md (EricLBuehler#1430) * Support Qwen3 GGUF model (EricLBuehler#1432) * Support QWen3 GGUF model * Clippy fix * cargo fmt * Improved paged attn prefix caching (EricLBuehler#1434) * Improved paged attn prefix caching * Disable * Clippy * Temporary fix for qwen3 gguf tokenizer (EricLBuehler#1433) * Temporary fix for qwen3 gguf tokenizer * Typo fix * Add tool callback support (EricLBuehler#1427) * Add tool callback support * Fixes * Support named tool callbacks * Update examples * Update docs * Clippy * Centralize crate dependencies (EricLBuehler#1438) * chore: centralize dependencies * Format * Fix bug in tokenizer created with gguf metadata (EricLBuehler#1440) * Fix bug in tokenizer created with gguf metadata * Clippy fix * Update deps (EricLBuehler#1441) * Small things * Update deps * Update deps * Update breaking changes * Doc fixes (EricLBuehler#1442) * Mention uqff_maker * Downgrade rustyline 16.0.0 -> 15.0.0 (EricLBuehler#1444) * Add max_completion_tokens alias for server (EricLBuehler#1451) * Audio input support (Phi 4 multimodal) (EricLBuehler#1448) * Deps * Add conformer * Nemo loading * Position embeds * Load t5 attn bias * Attn and feed forward * Add conv module and glu pointwise * Implement relative attn bias * Add the forward methods * Add encoder embedding * Fix oproj * Some loading * Conformer loads! * Fully loading speech stack * Merger * Dont need that * First pass at audio processing * Read samples * Optional * Small loading fix * Runs but not correct yet * Improved audio processing? * Works with this * Fix t5 attn bias * It works! 
* Comment * Use some other crates * Clippy * Allow bf16 on metal * Add prefix_audio * Remove unused * Typo * User specified * Add audio url parsing * AudioProjectionMode -> InputMode * Audio prefix caching * Fix bug in audio prefix caching * Support both at the same time! * Tweak logging * Support stereo * Add mistralrs-audio * Support batching * Add server and rust api example * Add python api * Fix add_multimodal_message * Fix unfold for conformer * Streaming example * Add web chat support * Add modalities registry * Fix offline cache issue for gguf models (EricLBuehler#1452) * Add MCP server endpoints (EricLBuehler#1453) * feat(server): add MCP server support * Add mcp docs * Add handle_list_tools_request * Better launch, tool handling * Tmp state * Ok works * Handle modalities * Update docs * Add ping * Tweak temperature bounds, args * MCP documentation pass (EricLBuehler#1455) * Fix table * Update mcp docs * Improve readme header * Improve readme header * Integrate an MCP client (EricLBuehler#1456) * Add builtin mcp client * Use async loader * Add headers * Handle sse * More flexible search request * Add tool callbacks with tools, for mcp * Add bearer token support * Add websocket support * Update docs * Add python api * Clippy * Add http api, docs * Tests pass * Make these configs actually work * Add docs * Make mistralrs-mcp * Refactor examples * Update examples * Add defaults * Add defaults * Add defaults * Update docs * Improved docs * Add -y to npx usages * Even better examples * Update generate_wheels * Update generate_wheels * Update generate_wheels * Fix Dockerfile.cuda-all * Improve automatic tool call (EricLBuehler#1460) * Improved auto tool call * Add logging * chore: `Dockerfile.cuda-all` configurable threads (EricLBuehler#1458) * chore: `Dockerfile.cuda-all` - Merge `RUN` for `apt-get install` (EricLBuehler#1459) * Add fallback definition for isnan (EricLBuehler#1463) * chore: `Dockerfile` - Drop runtime rayon thread ENV (EricLBuehler#1465) * chore: Dockerfile - Remove rayon threads env * chore: Dockerfile - Improve formatting for `apt-get` * Remove duplicate calls for api_dir_list (EricLBuehler#1474) * Remove duplicate calls for api_dir_list * Support local cache for api_dir_list * Fix home folder for metal * Capitalized * Fix transient pyo3 dep (EricLBuehler#1478) Co-authored-by: Eric Buehler <[email protected]> * Fix objc dep with non macos (EricLBuehler#1480) * Fix phi 3/4 + nccl issue (EricLBuehler#1481) * Fix log * Fix n kv heads * Fix phi3.5 moe (EricLBuehler#1482) * Fix phi3.5 moe accum device * Fix again * Fix again * Support GLM4 model! 
(EricLBuehler#1437) * Support GLM4 model * Mention GLM4 model in ReadMe * glm4 type hint * Typo fix * Fix unsupported chat_template function * Clippy fix * Refactor distributed backend (EricLBuehler#1484) * Refactor distributed backend, check power of 2 * Fix compilation * Cap metal paged attn kv allocation (EricLBuehler#1485) * Better paged attn metal cap (EricLBuehler#1486) * Better paged attn metal cap * Small fix * Comment * Small fix * Refactor * Server core: consolidate and unify route handlers and API surface (EricLBuehler#1423) * Start working on consolidating completion and chat_completion underlying implementations * Move response channel to util mod for now (since it's used with streaming and non streaming) * More work on consolidating completions and chat completions * More WIP consolidation of server core handlers * More WIP consolidation of server core handlers * More WIP consolidation of server core handlers * Update docs and restrict completion core visibility * CodeRabbit feedback: remove logprobs warn from route handler since parse request also checks this * Use consistent var name for completions mod * Make route handler modules public API consistent (same fn names, etc.) and provide proxy fn that wrap core fns so core mod doesn't have to be pub Make lib.rs example compile checked and update example * Code formatting * Typo * Sync fork * Sync fork * Docs example fix * Support qwen3 gguf (EricLBuehler#1488) * Add qwen3 gguf * Template fixup * Make bos/eos token IDs optional (EricLBuehler#1493) * Remove python deps from CUDA dockerfiles (EricLBuehler#1487) * Handle noncontiguous v in naive_sdpa (EricLBuehler#1499) Co-authored-by: Eric Buehler <[email protected]> * Server Core: refactor Paged Attention configuration (EricLBuehler#1500) * Use StorageModePrivate for Metal PA kv cache (EricLBuehler#1506) * Fix OpenAI stream: emit field in tool-call deltas for schema compliance (EricLBuehler#1507) * FP8 KV-cache quantization for PagedAttention (EricLBuehler#1400) * Add most of paged attn kv quant * It builds a bit * All the functionality at least * Small fix * Add a scale * Fix bf16 usage * Make k_v_scale optional * Collector * Tweak collection * Refactor * Add to apis * Add cuda impl * Fix compilation * Fixes * Handle ENABLE_FP8 * Format * Tweak * Fix scaled_convert usage * Fix cache_t size * Fixed scale collection * Actual fix * Fix fp8 for CC<8 * Fix the usual String != &str bit (EricLBuehler#1483) Co-authored-by: RageLtMan <rageltman [at] sempervictus> * chore: `Dockerfile` - Drop runtime rayon thread ENV (EricLBuehler#1465) * chore: Dockerfile - Remove rayon threads env * chore: Dockerfile - Improve formatting for `apt-get` * Remove duplicate calls for api_dir_list (EricLBuehler#1474) * Remove duplicate calls for api_dir_list * Support local cache for api_dir_list * Fix home folder for metal * Capitalized * Fix transient pyo3 dep (EricLBuehler#1478) Co-authored-by: Eric Buehler <[email protected]> * Fix objc dep with non macos (EricLBuehler#1480) * Fix phi 3/4 + nccl issue (EricLBuehler#1481) * Fix log * Fix n kv heads * Fix phi3.5 moe (EricLBuehler#1482) * Fix phi3.5 moe accum device * Fix again * Fix again * Support GLM4 model! 
(EricLBuehler#1437) * Support GLM4 model * Mention GLM4 model in ReadMe * glm4 type hint * Typo fix * Fix unsupported chat_template function * Clippy fix * Refactor distributed backend (EricLBuehler#1484) * Refactor distributed backend, check power of 2 * Fix compilation * Cap metal paged attn kv allocation (EricLBuehler#1485) * Better paged attn metal cap (EricLBuehler#1486) * Better paged attn metal cap * Small fix * Comment * Small fix * Refactor * Server core: consolidate and unify route handlers and API surface (EricLBuehler#1423) * Start working on consolidating completion and chat_completion underlying implementations * Move response channel to util mod for now (since it's used with streaming and non streaming) * More work on consolidating completions and chat completions * More WIP consolidation of server core handlers * More WIP consolidation of server core handlers * More WIP consolidation of server core handlers * Update docs and restrict completion core visibility * CodeRabbit feedback: remove logprobs warn from route handler since parse request also checks this * Use consistent var name for completions mod * Make route handler modules public API consistent (same fn names, etc.) and provide proxy fn that wrap core fns so core mod doesn't have to be pub Make lib.rs example compile checked and update example * Code formatting * Typo * Sync fork * Sync fork * Docs example fix * Support qwen3 gguf (EricLBuehler#1488) * Add qwen3 gguf * Template fixup * Make bos/eos token IDs optional (EricLBuehler#1493) * Remove python deps from CUDA dockerfiles (EricLBuehler#1487) * Handle USE_FP8 for cuda * Fix cuda warn * Add readme * Saturating sub in sequence state --------- Co-authored-by: Eric Buehler <[email protected]> Co-authored-by: RageLtMan <[email protected]> Co-authored-by: Brennan Kinney <[email protected]> Co-authored-by: Guoqing Bao <[email protected]> Co-authored-by: Matthew Haynes <[email protected]> * Validate model name in OpenAI API (EricLBuehler#1509) * Validate model name in openai api * Add docs, allow 'ignore' * Updated examples for EricLBuehler#1509 * Fix mcp import in doc string (EricLBuehler#1510) * Add multi-model support! (EricLBuehler#1512) * Refactor MistralRs * Working multi-model! * Add mutli-model docs initially * Update mistralrs-pyo3, mistralrs-bench, mistralrs * Update apis for consistency * API tweaks * Logging tweaks * Add examples, tweak cli * Clearer pipeline id * Fix config key semantics * Format and clippy * Tweak logging, fix example * Clippy refactor * Update examples * Remove unused multi model docs * Replace 'ignore' with 'default' * Update docs * Add stars label to readme (EricLBuehler#1513) * Add CLAUDE.md * Handle base_model.model case in lora (EricLBuehler#1514) * Add thread_local! for engine-specific const/static (EricLBuehler#1517) * Fix MCP doc test (EricLBuehler#1511) * Allow disabling metal precompilation (EricLBuehler#1518) * Allow disabling metal precompilation * Simple preprocessor * Simple docs --------- Co-authored-by: Eric Buehler <[email protected]> * Rust 1.88 clippy (EricLBuehler#1522) * Rust 1.88 clippy * Format * Fix cuda warnings (EricLBuehler#1526) * Avoid panic decoding tokens on error (EricLBuehler#1527) * Split Marlin and Paged Attention kernels for faster build (EricLBuehler#1525) * Split Marlin and Paged Attention kernels for faster build * Typo fix * chore: update llguidance (EricLBuehler#1535) * chore: update llguidance * chore: remove unused import * Add the SmolLM3 model! 
(EricLBuehler#1501) * Add model * Update loader * Fix llama config usage * Docs * Fix config no_rope_layers * Fix tie_word_embeddings default * Add chat template * Embed the chat templates * Fix embedding template * enable_thinking default true * Update examples * XML tools for smollm3 * Add smollm3 docs * Fix openai examples * Clippy --------- Co-authored-by: Eric Buehler <[email protected]> * Add full Gemma 3n support! (EricLBuehler#1519) * Add initial * Loading for text model * Add ple embeddings * Add altup, laurel block * Update rmsnorm * Add mlp * Update attn norm application * Currently no kv shared * Wire it up * It runs * Fix bf16 * Fix scaled embd * Fixes for mean * tmp * Attn confirmed * Fix target_magnitude * Add shared kv * Ok it works * Remove npy * Fix streaming * Remove warnings * Remove paged attn * Refactor rope * Add immediate isq * Add vision & mproj * Update image processor * Vision merge runs, not correct * Remove * Add mobilenet v5 * Add multimodal vision embedding * Fix load * runs * Fix gamma * Works but just not vision tower * It works!! * Tweak * Fix warnings * Move vision tower * Fix warn * Update cache manager things * Refactor * Add audio model, it loads * Add audio processing * It runs at least * tmp * A bit better * Audio works!!!! * Fused attn in vision * Clippy * Update audio runner * Optimized audio model * Remove unused things * Fix inputs processor bug * Remove comments * Clippy * Small optimizations * Format * Correctly register modalities * Add docs * Update readme * Runs there * Fixed padding from Blaizzy/mlx-vlm#410 * Add better checks * Fix sdpa n_kv_groups * Vision encoder works! * Rotate image * Clippy * Fix cuda loading * Updated device mapper * Fix overflow * Fix dtype errors * Refactor image/audio embeddings * Fix metal * Fix dtype mismatch * Audio processing fixes * Audio processing fixes * Works * Audio is good * Fix boi/eoi too * Embed the chat templates * Better embedding accuracy in non f32 * More f32 * Support bf16 on metal * Add more ISQ * Fixed device map * Clippy * Gemma3n no paged attn * Fix saturating sub * Faster rmsnorm * Use sdpa for vision model * Fix ple bug * Fix name * Fix multiaudio * Add matformer config loading * Add docs * Add support for matformer in auto device mapper * Update docs * Typos * Tweak * Tweak * Fix multidevice * Fix gemma3n text model auto device map * Fix dims3 * Fix auto devic emap vision * Non-metal keeps PLE on cpu * Complete merge * Vision dtype f16 -> f32 * Fix metal nm device * Fix uqff * Typos * Reference uqff * Fix tests * Fix sequence length check (EricLBuehler#1546) * update candle version (EricLBuehler#1545) Co-authored-by: AlpineVibrations <[email protected]> * add ios target to metal deps (EricLBuehler#1548) --------- Signed-off-by: dependabot[bot] <[email protected]> Co-authored-by: Eric Buehler <[email protected]> Co-authored-by: Eric Buehler <[email protected]> Co-authored-by: edwko <[email protected]> Co-authored-by: Copilot <[email protected]> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Guoqing Bao <[email protected]> Co-authored-by: Michał Moskal <[email protected]> Co-authored-by: Chen Mulong <[email protected]> Co-authored-by: Steph Wolski <[email protected]> Co-authored-by: omahs <[email protected]> Co-authored-by: Viktor Szépe <[email protected]> Co-authored-by: Matthew Haynes <[email protected]> Co-authored-by: RageLtMan <[email protected]> Co-authored-by: Brennan Kinney <[email protected]> Co-authored-by: Eric Buehler <[email 
protected]> Co-authored-by: Sbargaoui <[email protected]> Co-authored-by: Gaétan Lepage <[email protected]> Co-authored-by: Ammar Elsabe <[email protected]> Co-authored-by: luke <[email protected]> Co-authored-by: AlpineVibrations <[email protected]> Co-authored-by: Michael Tissen <[email protected]>
Summary
- `rust-mcp-sdk` dependency
- `--mcp-port` option in the server CLI (see the usage sketch below)
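For anyone trying this locally, a rough usage sketch follows. It assumes the MCP server speaks JSON-RPC 2.0 over plain HTTP at the root path of the `--mcp-port` port, that the exposed tool is named `chat`, and that the ports, flag placement, model id, and argument schema shown are placeholders rather than values confirmed by this PR.

```bash
# Start the OpenAI-compatible HTTP server plus the MCP server on a second port.
# Flag placement and the model id are assumptions; adjust to the actual CLI.
./target/release/mistralrs-server --port 1234 --mcp-port 4321 \
  plain -m meta-llama/Llama-3.2-3B-Instruct

# List advertised tools (standard MCP "tools/list" method).
# A conforming MCP client would normally send "initialize" first; this sketch
# assumes the HTTP handler accepts direct calls.
curl -s http://localhost:4321/ \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'

# Invoke the chat tool (standard MCP "tools/call"); the "messages" argument
# schema is an assumption.
curl -s http://localhost:4321/ \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"tools/call","params":{"name":"chat","arguments":{"messages":[{"role":"user","content":"Hello!"}]}}}'
```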
Testing

- `cargo fmt --all`
- `cargo clippy --workspace --tests --examples -- -D warnings`
- `cargo test --workspace --quiet` (fails: command cancelled)

https://chatgpt.com/codex/tasks/task_e_6847025b7ed4832291abb07ddd9d65cb
Summary by CodeRabbit
New Features
Chores
Bug Fixes