
Conversation

EricLBuehler (Owner) commented May 7, 2025

Now we are enabling flash attention via the feature flags (--features flash-attn and --features flash-attn-v3). Documentation stays the same.
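
For reference, these are plain Cargo features, so opting in is a build-time choice. A minimal sketch, assuming the standard cfg! feature check (the real mistralrs-core implementation may differ):

// Build with one of:
//   cargo build --release --features flash-attn
//   cargo build --release --features flash-attn-v3

/// Sketch: a crate-level check backed by the Cargo features.
pub fn using_flash_attn() -> bool {
    cfg!(feature = "flash-attn") || cfg!(feature = "flash-attn-v3")
}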

@Slowki does this cause any major conflicts with code on your end?

Summary by CodeRabbit

  • Refactor
    • Removed the flash attention configuration option from all model, loader, and builder interfaces.
    • Simplified model and vision loader APIs by eliminating the use_flash_attn flag and related logic.
    • Updated configuration structures to no longer include flash attention toggles.
    • Streamlined model initialization and configuration loading for text, vision, and diffusion models.
    • Adjusted logging and removed flash attention-related messages in application entry points.


coderabbitai bot commented May 7, 2025

Walkthrough

This change removes the use_flash_attn boolean flag and all related configuration, propagation, and logic from the codebase. The field is deleted from model, loader, and pipeline configs, as well as from macro, builder, and loader interfaces. All method signatures, struct definitions, and trait implementations are updated to eliminate this parameter.
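
As one concrete example, a hedged before/after of a loader load signature (argument names follow normal_loaders.rs; types elided for brevity):

// Before: the flag was threaded through every loader call.
// fn load(&self, config: &str, use_flash_attn: bool, vb, normal_loading_metadata, attention_mechanism) -> Result<...>;
//
// After: the parameter is gone; attention code consults the global check instead.
// fn load(&self, config: &str, vb, normal_loading_metadata, attention_mechanism) -> Result<...>;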

Changes

• mistralrs-bench/src/main.rs, mistralrs-server/src/main.rs, mistralrs-pyo3/src/lib.rs: Removed all references to use_flash_attn in main functions and loader configurations; deleted related logging and imports.
• mistralrs-core/src/attention.rs: Removed use_flash_attn from SdpaParams; usage is now centralized via crate::using_flash_attn().
• mistralrs-core/src/model_loader.rs, mistralrs-core/src/pipeline/diffusion.rs, mistralrs-core/src/pipeline/normal.rs, mistralrs-core/src/pipeline/vision.rs, mistralrs-core/src/toml_selector.rs: Removed use_flash_attn from loader, builder, and config structs; updated builder and loader method signatures and instantiations accordingly.
• mistralrs-core/src/pipeline/loaders/normal_loaders.rs, mistralrs-core/src/pipeline/loaders/vision_loaders.rs, mistralrs-core/src/pipeline/loaders/diffusion_loaders.rs: Removed use_flash_attn from all loader trait method signatures and implementations; simplified config deserialization.
• mistralrs-core/src/pipeline/macros.rs: Removed the $use_flash_attn parameter from all model loader macros and macro invocations.
• mistralrs-core/src/pipeline/mod.rs: Removed the re-export of DiffusionSpecificConfig.
• mistralrs-core/src/lib.rs: Reordered re-exported pipeline items.
• mistralrs-core/src/models/*, mistralrs-core/src/vision_models/*, mistralrs-core/src/xlora_models/*: Removed the use_flash_attn field from all model config structs and related serde default functions; eliminated all propagation into attention parameters and struct initializations.
• mistralrs/src/text_model.rs, mistralrs/src/vision_model.rs, mistralrs/src/diffusion_model.rs, mistralrs/src/anymoe.rs, mistralrs/src/lora_model.rs, mistralrs/src/speculative.rs, mistralrs/src/xlora_model.rs: Removed use_flash_attn from all builder structs, constructors, and build methods; updated config construction accordingly.
• mistralrs-core/src/models/quantized_*.rs, mistralrs-core/src/vision_models/*/text.rs, mistralrs-core/src/vision_models/*/vision.rs, mistralrs-core/src/vision_models/siglip.rs, mistralrs-core/src/xlora_models/quantized_*.rs: Removed explicit setting of use_flash_attn in SdpaParams struct initializations.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant LoaderBuilder
    participant ModelConfig
    participant Attention

    User->>LoaderBuilder: Create builder (no use_flash_attn)
    LoaderBuilder->>ModelConfig: Build config (no use_flash_attn)
    ModelConfig->>Attention: Construct attention (no use_flash_attn)
    Attention->>Attention: Use global flash attention logic if needed
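
To make the last diagram step concrete, here is a self-contained sketch (names assumed; the real dispatch lives in mistralrs-core/src/attention.rs) of attention consulting the global check instead of a per-call flag:

fn using_flash_attn() -> bool {
    // Stand-in for crate::using_flash_attn(); backed by the Cargo features here.
    cfg!(feature = "flash-attn") || cfg!(feature = "flash-attn-v3")
}

fn attention_backend(on_cuda: bool) -> &'static str {
    // Flash attention is only dispatched on CUDA devices; everything else
    // falls back to the standard SDPA path.
    if using_flash_attn() && on_cuda {
        "flash-attn kernel (V2 or V3, per feature flag)"
    } else {
        "standard SDPA path"
    }
}

fn main() {
    println!("{}", attention_backend(true));
}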

Poem

A bunny hopped and cleared the way,
No more toggles for flash attn today!
Models, configs, and loaders, all in sync,
Simpler code—faster than you think.
With every field that's swept aside,
The codebase gleams with bunny pride!
🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 99a035c and a5af951.

📒 Files selected for processing (3)
  • mistralrs-core/src/pipeline/diffusion.rs (3 hunks)
  • mistralrs-core/src/pipeline/normal.rs (2 hunks)
  • mistralrs-core/src/pipeline/vision.rs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • mistralrs-core/src/pipeline/vision.rs
  • mistralrs-core/src/pipeline/normal.rs
  • mistralrs-core/src/pipeline/diffusion.rs
⏰ Context from checks skipped due to timeout of 90000ms (8)
  • GitHub Check: Check (macOS-latest, stable)
  • GitHub Check: Clippy
  • GitHub Check: Docs
  • GitHub Check: Check (ubuntu-latest, stable)
  • GitHub Check: Check (windows-latest, stable)
  • GitHub Check: Test Suite (macOS-latest, stable)
  • GitHub Check: Test Suite (windows-latest, stable)
  • GitHub Check: Test Suite (ubuntu-latest, stable)



github-actions bot commented May 7, 2025

Code Metrics Report
  ===============================================================================
 Language            Files        Lines         Code     Comments       Blanks
===============================================================================
 C Header                3           62           53            0            9
 Dockerfile              1           41           22           10            9
 JSON                   11          104          103            0            1
 Makefile                1            6            5            0            1
 Python                 82         3665         3128          137          400
 Shell                   1           63           26           18           19
 Plain Text              3         3723            0         2413         1310
 TOML                   19          554          509            6           39
 YAML                    2           21           19            2            0
-------------------------------------------------------------------------------
 Jupyter Notebooks       3            0            0            0            0
 |- Markdown             2           77           32           31           14
 |- Python               2          205          178            1           26
 (Total)                            282          210           32           40
-------------------------------------------------------------------------------
 Markdown               54         4898            0         3733         1165
 |- BASH                 8          105          102            0            3
 |- JSON                 1           12           12            0            0
 |- Python               7          121          109            0           12
 |- Rust                21          729          616            0          113
 |- TOML                 2           75           63            0           12
 (Total)                           5940          902         3733         1305
-------------------------------------------------------------------------------
 Rust                  365       122557       109710         2394        10453
 |- Markdown           165         1829           25         1660          144
 (Total)                         124386       109735         4054        10597
===============================================================================
 Total                 545       135694       113575         8713        13406
===============================================================================
  


coderabbitai bot left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (2)
mistralrs-core/src/pipeline/loaders/vision_loaders.rs (1)

82-91: 💡 Verification agent

🧩 Analysis chain

Verify flash attention feature flag setup for backwards compatibility.

The VisionModelLoader trait has removed the use_flash_attn parameter, which is part of the shift to using feature flags for flash attention control. Make sure that the feature flag mechanism is properly set up elsewhere in the codebase to maintain backward compatibility.


🏁 Script executed:

#!/bin/bash
# Check for feature flag declarations and usage of flash attention features

# Look for feature flag definitions in Cargo.toml files
echo "Checking for flash-attn feature flags in Cargo.toml files:"
fd -t f "Cargo.toml" | xargs grep -l "flash-attn"

# Look for runtime feature detection for flash attention
echo -e "\nChecking for flash attention runtime detection:"
rg "using_flash_attn|crate::using_flash_attn"

# Look for any remaining use_flash_attn references that might need updating
echo -e "\nChecking for any remaining use_flash_attn references:"
rg "use_flash_attn" --max-count=20

Length of output: 2808


Ensure examples and docs are updated for feature-flag-based flash attention

We’ve moved to controlling flash attention purely via the flash-attn Cargo feature (with the using_flash_attn() runtime check) and removed the use_flash_attn parameter from VisionModelLoader::load(). Any code that still passes use_flash_attn: … will no longer compile.

Please update the following:

  • Enable the flash-attn feature in your Cargo.toml (already declared).
  • Remove or replace all use_flash_attn: false/true in example code under
    mistralrs/examples/lower_level/**/*
  • Update references in docs/UQFF.md (and any other markdown) to point users to the new feature flag.
  • (Optional) Add a short migration note in CHANGELOG or RELEASE_NOTES explaining how to opt into flash attention.
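
A hedged sketch of that migration (builder and method names as they existed before this PR, e.g. the with_use_flash_attn call noted in the per-file comments below; args is a placeholder for the existing builder arguments):

// Before (no longer compiles): the flag was set on the builder.
// let loader = LoaderBuilder::new(args)
//     .with_use_flash_attn(false)
//     .build()?;
//
// After: drop the call entirely and opt in at build time instead:
//   cargo build --features flash-attn
// let loader = LoaderBuilder::new(args).build()?;
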
mistralrs-core/src/pipeline/loaders/normal_loaders.rs (1)

210-215: ⚠️ Potential issue

Incorrect enum value returned for "qwen3" – will mis-route loaders at runtime

FromStr::from_str maps the string "qwen3" to Self::DeepSeekV3 instead of the expected Self::Qwen3.
This breaks automatic loader resolution (both CLI & API), silently falling back to the wrong model family and very likely panicking later when the DeepSeekV3 loader cannot deserialize a Qwen3 config.

-            "qwen3" => Ok(Self::DeepSeekV3),
+            "qwen3" => Ok(Self::Qwen3),

Please add a unit-test covering every branch of this match to avoid future regressions (e.g. table-driven test over the Display output).
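
A table-driven sketch of such a test (assuming the enum is NormalLoaderType, that its Display output matches the accepted arch strings, and that its FromStr error implements Debug; extend the table to every branch):

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn loader_type_from_str_round_trips() {
        // Each accepted arch string should parse and print back unchanged.
        for arch in ["llama", "mistral", "qwen3"] {
            let parsed: NormalLoaderType = arch.parse().expect(arch);
            assert_eq!(parsed.to_string(), arch);
        }
    }
}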

🧹 Nitpick comments (3)
mistralrs-core/src/pipeline/loaders/normal_loaders.rs (3)

291-301: Repeated serde_json::from_str incurs avoidable cost & code duplication

Each call to AutoLoader::load first parses config inside Self::get_loader(config)?, and the chosen concrete loader then re-parses the same string in its own load.
For long-running services this extra parse is negligible, but in tight loops (e.g. batch benchmarking) it shows up in profiles and bloats binary size.

A tiny refactor keeps behaviour identical and makes the two-step dispatch explicit:

-        Self::get_loader(config)?.load(config, vb, normal_loading_metadata, attention_mechanism)
+        let loader = Self::get_loader(config)?;
+        loader.load(config, vb, normal_loading_metadata, attention_mechanism)

To actually parse config exactly once, the parsed struct would have to be handed to the concrete loader instead of the raw string.
Consider applying the same pattern to load_xlora, get_config_repr, etc.


480-485: Parsing the same config over and over – extract & reuse

Inside several *_size_in_bytes helpers you perform:

let cfg: crate::models::<model>::Config = serde_json::from_str(config)?;

This happens multiple times per loader (sometimes four or five).
Because these helpers are often called back-to-back during device-mapping, the config is re-parsed repeatedly.

If you keep the current trait shape, you can at least move the parsed struct into
NormalLoadingMetadata or cache it inside a small OnceCell in each loader struct.

The change is not urgent but will:

  • Shave milliseconds off model start-up.
  • Reduce temporary allocations.
  • Shorten the source file considerably.

Let me know if you’d like a PR snippet demonstrating a lightweight cache.

Also applies to: 667-673, 858-864
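
A minimal sketch of such a cache, assuming the once_cell crate (Config is represented by the generic C below, and the raw string is assumed identical across calls):

use once_cell::sync::OnceCell;

/// Parses the JSON config at most once and reuses the result afterwards.
struct CachedConfig<C> {
    parsed: OnceCell<C>,
}

impl<C: serde::de::DeserializeOwned> CachedConfig<C> {
    fn new() -> Self {
        Self { parsed: OnceCell::new() }
    }

    fn get(&self, raw: &str) -> anyhow::Result<&C> {
        // Note: the first raw string wins; callers must pass the same config.
        self.parsed
            .get_or_try_init(|| Ok(serde_json::from_str(raw)?))
    }
}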


321-327: supports_paged_attention defaulting to true may hide future omissions

For new architectures added later, forgetting to override this method will silently report support, potentially causing hard-to-trace runtime errors.

Consider changing the default to anyhow::bail!("not implemented") or Ok(false) and explicitly enabling it per loader.
This follows the “fail-fast” principle and surfaces integration gaps during development rather than in production.
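
A sketch of the fail-fast variant (trait and method shape assumed from the surrounding code):

trait NormalModelLoader {
    fn supports_paged_attention(&self) -> anyhow::Result<bool> {
        // The default surfaces the gap instead of silently claiming support;
        // each loader must override this explicitly.
        anyhow::bail!("supports_paged_attention not implemented for this loader")
    }
}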

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7a794da and 99a035c.

📒 Files selected for processing (69)
  • mistralrs-bench/src/main.rs (1 hunks)
  • mistralrs-core/src/attention.rs (2 hunks)
  • mistralrs-core/src/lib.rs (1 hunks)
  • mistralrs-core/src/model_loader.rs (2 hunks)
  • mistralrs-core/src/models/deepseek2.rs (0 hunks)
  • mistralrs-core/src/models/deepseek3.rs (0 hunks)
  • mistralrs-core/src/models/gemma.rs (0 hunks)
  • mistralrs-core/src/models/gemma2.rs (1 hunks)
  • mistralrs-core/src/models/llama.rs (0 hunks)
  • mistralrs-core/src/models/mistral.rs (0 hunks)
  • mistralrs-core/src/models/mixtral.rs (0 hunks)
  • mistralrs-core/src/models/phi2.rs (0 hunks)
  • mistralrs-core/src/models/phi3.rs (0 hunks)
  • mistralrs-core/src/models/phi3_5_moe.rs (0 hunks)
  • mistralrs-core/src/models/quantized_llama.rs (0 hunks)
  • mistralrs-core/src/models/quantized_phi2.rs (0 hunks)
  • mistralrs-core/src/models/quantized_phi3.rs (0 hunks)
  • mistralrs-core/src/models/quantized_qwen2.rs (0 hunks)
  • mistralrs-core/src/models/quantized_starcoder2.rs (0 hunks)
  • mistralrs-core/src/models/qwen2.rs (0 hunks)
  • mistralrs-core/src/models/qwen3.rs (0 hunks)
  • mistralrs-core/src/models/qwen3_moe.rs (0 hunks)
  • mistralrs-core/src/models/starcoder2.rs (0 hunks)
  • mistralrs-core/src/pipeline/diffusion.rs (1 hunks)
  • mistralrs-core/src/pipeline/loaders/diffusion_loaders.rs (0 hunks)
  • mistralrs-core/src/pipeline/loaders/normal_loaders.rs (81 hunks)
  • mistralrs-core/src/pipeline/loaders/vision_loaders.rs (24 hunks)
  • mistralrs-core/src/pipeline/macros.rs (0 hunks)
  • mistralrs-core/src/pipeline/mod.rs (1 hunks)
  • mistralrs-core/src/pipeline/normal.rs (1 hunks)
  • mistralrs-core/src/pipeline/vision.rs (1 hunks)
  • mistralrs-core/src/toml_selector.rs (0 hunks)
  • mistralrs-core/src/vision_models/gemma3/config.rs (0 hunks)
  • mistralrs-core/src/vision_models/gemma3/text.rs (0 hunks)
  • mistralrs-core/src/vision_models/idefics2/mod.rs (0 hunks)
  • mistralrs-core/src/vision_models/llama4/config.rs (0 hunks)
  • mistralrs-core/src/vision_models/llama4/text.rs (0 hunks)
  • mistralrs-core/src/vision_models/llama4/vision.rs (0 hunks)
  • mistralrs-core/src/vision_models/llava/config.rs (0 hunks)
  • mistralrs-core/src/vision_models/llava/llava_llm/llama.rs (0 hunks)
  • mistralrs-core/src/vision_models/llava/llava_llm/mistral.rs (0 hunks)
  • mistralrs-core/src/vision_models/mllama/config.rs (0 hunks)
  • mistralrs-core/src/vision_models/mllama/text.rs (0 hunks)
  • mistralrs-core/src/vision_models/mllama/vision.rs (0 hunks)
  • mistralrs-core/src/vision_models/phi3/mod.rs (0 hunks)
  • mistralrs-core/src/vision_models/phi4/config.rs (1 hunks)
  • mistralrs-core/src/vision_models/phi4/mod.rs (0 hunks)
  • mistralrs-core/src/vision_models/qwen2_5_vl/text.rs (0 hunks)
  • mistralrs-core/src/vision_models/qwen2vl/text.rs (0 hunks)
  • mistralrs-core/src/vision_models/siglip.rs (0 hunks)
  • mistralrs-core/src/xlora_models/gemma.rs (0 hunks)
  • mistralrs-core/src/xlora_models/gemma2.rs (0 hunks)
  • mistralrs-core/src/xlora_models/llama.rs (0 hunks)
  • mistralrs-core/src/xlora_models/mistral.rs (0 hunks)
  • mistralrs-core/src/xlora_models/mixtral.rs (0 hunks)
  • mistralrs-core/src/xlora_models/phi2.rs (0 hunks)
  • mistralrs-core/src/xlora_models/phi3.rs (0 hunks)
  • mistralrs-core/src/xlora_models/quantized_llama.rs (0 hunks)
  • mistralrs-core/src/xlora_models/quantized_phi3.rs (0 hunks)
  • mistralrs-core/src/xlora_models/starcoder2.rs (0 hunks)
  • mistralrs-pyo3/src/lib.rs (2 hunks)
  • mistralrs-server/src/main.rs (1 hunks)
  • mistralrs/src/anymoe.rs (0 hunks)
  • mistralrs/src/diffusion_model.rs (1 hunks)
  • mistralrs/src/lora_model.rs (0 hunks)
  • mistralrs/src/speculative.rs (0 hunks)
  • mistralrs/src/text_model.rs (0 hunks)
  • mistralrs/src/vision_model.rs (0 hunks)
  • mistralrs/src/xlora_model.rs (0 hunks)
💤 Files with no reviewable changes (54)
  • mistralrs-core/src/models/quantized_starcoder2.rs
  • mistralrs/src/speculative.rs
  • mistralrs-core/src/vision_models/qwen2vl/text.rs
  • mistralrs-core/src/xlora_models/phi3.rs
  • mistralrs-core/src/vision_models/mllama/text.rs
  • mistralrs-core/src/vision_models/phi4/mod.rs
  • mistralrs/src/lora_model.rs
  • mistralrs-core/src/xlora_models/quantized_phi3.rs
  • mistralrs-core/src/xlora_models/mixtral.rs
  • mistralrs-core/src/vision_models/llava/llava_llm/llama.rs
  • mistralrs-core/src/models/mixtral.rs
  • mistralrs-core/src/models/quantized_phi3.rs
  • mistralrs-core/src/models/phi2.rs
  • mistralrs/src/anymoe.rs
  • mistralrs-core/src/xlora_models/phi2.rs
  • mistralrs-core/src/xlora_models/quantized_llama.rs
  • mistralrs-core/src/models/quantized_qwen2.rs
  • mistralrs-core/src/vision_models/mllama/vision.rs
  • mistralrs-core/src/vision_models/llama4/text.rs
  • mistralrs-core/src/xlora_models/llama.rs
  • mistralrs-core/src/models/quantized_phi2.rs
  • mistralrs-core/src/xlora_models/gemma2.rs
  • mistralrs-core/src/xlora_models/gemma.rs
  • mistralrs-core/src/models/qwen3_moe.rs
  • mistralrs-core/src/vision_models/llama4/vision.rs
  • mistralrs-core/src/models/starcoder2.rs
  • mistralrs-core/src/models/deepseek2.rs
  • mistralrs-core/src/models/gemma.rs
  • mistralrs/src/xlora_model.rs
  • mistralrs-core/src/vision_models/llama4/config.rs
  • mistralrs-core/src/models/qwen2.rs
  • mistralrs-core/src/models/llama.rs
  • mistralrs-core/src/xlora_models/mistral.rs
  • mistralrs-core/src/vision_models/gemma3/config.rs
  • mistralrs/src/vision_model.rs
  • mistralrs-core/src/xlora_models/starcoder2.rs
  • mistralrs-core/src/vision_models/mllama/config.rs
  • mistralrs-core/src/vision_models/qwen2_5_vl/text.rs
  • mistralrs-core/src/models/mistral.rs
  • mistralrs/src/text_model.rs
  • mistralrs-core/src/models/deepseek3.rs
  • mistralrs-core/src/vision_models/siglip.rs
  • mistralrs-core/src/models/qwen3.rs
  • mistralrs-core/src/models/phi3_5_moe.rs
  • mistralrs-core/src/vision_models/gemma3/text.rs
  • mistralrs-core/src/models/phi3.rs
  • mistralrs-core/src/toml_selector.rs
  • mistralrs-core/src/vision_models/phi3/mod.rs
  • mistralrs-core/src/models/quantized_llama.rs
  • mistralrs-core/src/vision_models/llava/config.rs
  • mistralrs-core/src/vision_models/idefics2/mod.rs
  • mistralrs-core/src/pipeline/macros.rs
  • mistralrs-core/src/pipeline/loaders/diffusion_loaders.rs
  • mistralrs-core/src/vision_models/llava/llava_llm/mistral.rs
🧰 Additional context used
🧬 Code Graph Analysis (3)
mistralrs-core/src/pipeline/normal.rs (4)
mistralrs/src/model.rs (1)
  • inner (285-287)
mistralrs-core/src/pipeline/loaders/normal_loaders.rs (17)
  • get_config_repr (107-107)
  • get_config_repr (321-323)
  • get_config_repr (424-427)
  • get_config_repr (611-614)
  • get_config_repr (802-805)
  • get_config_repr (986-990)
  • get_config_repr (1179-1183)
  • get_config_repr (1362-1366)
  • get_config_repr (1533-1537)
  • get_config_repr (1722-1726)
  • get_config_repr (1914-1918)
  • get_config_repr (2100-2104)
  • get_config_repr (2293-2296)
  • get_config_repr (2611-2614)
  • get_config_repr (2930-2934)
  • get_config_repr (3108-3112)
  • config (69-69)
mistralrs-core/src/lib.rs (1)
  • config (619-621)
mistralrs-core/src/pipeline/loaders/vision_loaders.rs (1)
  • config (77-77)
mistralrs-core/src/pipeline/loaders/vision_loaders.rs (16)
mistralrs-core/src/pipeline/loaders/normal_loaders.rs (18)
  • get_config_repr (107-107)
  • get_config_repr (321-323)
  • get_config_repr (424-427)
  • get_config_repr (611-614)
  • get_config_repr (802-805)
  • get_config_repr (986-990)
  • get_config_repr (1179-1183)
  • get_config_repr (1362-1366)
  • get_config_repr (1533-1537)
  • get_config_repr (1722-1726)
  • get_config_repr (1914-1918)
  • get_config_repr (2100-2104)
  • get_config_repr (2293-2296)
  • get_config_repr (2611-2614)
  • get_config_repr (2930-2934)
  • get_config_repr (3108-3112)
  • config (69-69)
  • from_str (199-217)
mistralrs-core/src/models/llama.rs (2)
  • config (659-661)
  • new (338-355)
mistralrs-core/src/models/qwen2.rs (1)
  • config (651-653)
mistralrs-core/src/vision_models/phi3/mod.rs (1)
  • config (1224-1226)
mistralrs-core/src/models/gemma2.rs (1)
  • config (696-698)
mistralrs-core/src/models/mixtral.rs (1)
  • config (768-770)
mistralrs-core/src/models/phi3.rs (1)
  • config (707-709)
mistralrs-core/src/models/phi3_5_moe.rs (1)
  • config (882-884)
mistralrs-core/src/models/starcoder2.rs (1)
  • config (691-693)
mistralrs-core/src/vision_models/idefics2/mod.rs (1)
  • config (1296-1298)
mistralrs-core/src/vision_models/gemma3/text.rs (2)
  • config (722-724)
  • cfg (508-518)
mistralrs-core/src/vision_models/idefics3/mod.rs (1)
  • config (333-335)
mistralrs-core/src/vision_models/mllama/mod.rs (2)
  • config (184-186)
  • new (83-113)
mistralrs-core/src/vision_models/llava/llava_inputs_processor.rs (1)
  • serde_json (50-50)
mistralrs-core/src/vision_models/llava/llava_next_inputs_processor.rs (1)
  • serde_json (53-53)
mistralrs-core/src/vision_models/mllama/text.rs (3)
  • new (33-65)
  • new (99-151)
  • new (231-268)
mistralrs-core/src/pipeline/loaders/normal_loaders.rs (15)
mistralrs-core/src/pipeline/loaders/vision_loaders.rs (29)
  • get_config_repr (91-91)
  • get_config_repr (279-282)
  • get_config_repr (539-542)
  • get_config_repr (864-867)
  • get_config_repr (1109-1112)
  • get_config_repr (1346-1349)
  • get_config_repr (1726-1729)
  • get_config_repr (2003-2006)
  • get_config_repr (2282-2285)
  • get_config_repr (2555-2558)
  • get_config_repr (2871-2874)
  • get_config_repr (3146-3149)
  • get_config_repr (3460-3463)
  • get_config_repr (3761-3764)
  • config (77-77)
  • from_str (164-181)
  • model_config (481-496)
  • model_config (806-822)
  • model_config (1051-1067)
  • model_config (1288-1304)
  • model_config (1659-1675)
  • model_config (1945-1960)
  • model_config (2221-2237)
  • model_config (2493-2509)
  • model_config (2805-2820)
  • model_config (3089-3104)
  • model_config (3398-3418)
  • model_config (3700-3716)
  • model_config (4051-4067)
mistralrs-core/src/models/deepseek2.rs (1)
  • config (1167-1169)
mistralrs-core/src/models/llama.rs (1)
  • config (659-661)
mistralrs-core/src/models/mistral.rs (1)
  • config (678-680)
mistralrs-core/src/models/deepseek3.rs (1)
  • config (1220-1222)
mistralrs-core/src/models/qwen2.rs (4)
  • config (651-653)
  • new (61-132)
  • new (251-292)
  • new (337-460)
mistralrs-core/src/models/gemma.rs (1)
  • config (637-639)
mistralrs-core/src/models/mixtral.rs (1)
  • config (768-770)
mistralrs-core/src/models/phi2.rs (1)
  • config (703-705)
mistralrs-core/src/models/phi3.rs (1)
  • config (707-709)
mistralrs-core/src/models/phi3_5_moe.rs (1)
  • config (882-884)
mistralrs-core/src/models/starcoder2.rs (1)
  • config (691-693)
mistralrs-core/src/models/qwen3.rs (1)
  • cfg (505-513)
mistralrs-core/src/models/qwen3_moe.rs (1)
  • cfg (706-714)
mistralrs-core/src/pipeline/loaders/mod.rs (3)
  • from_str (170-191)
  • new (102-124)
  • model_config (471-471)
⏰ Context from checks skipped due to timeout of 90000ms (8)
  • GitHub Check: Clippy
  • GitHub Check: Docs
  • GitHub Check: Test Suite (windows-latest, stable)
  • GitHub Check: Test Suite (macOS-latest, stable)
  • GitHub Check: Test Suite (ubuntu-latest, stable)
  • GitHub Check: Check (macOS-latest, stable)
  • GitHub Check: Check (windows-latest, stable)
  • GitHub Check: Check (ubuntu-latest, stable)
🔇 Additional comments (42)
mistralrs-bench/src/main.rs (2)

14-14: Import clean-up update.

The import has been simplified from use tracing::{info, warn}; to only import info since the warn import is no longer needed after removing flash attention configuration logging.


362-365: Simplified loader builder API.

The LoaderBuilder no longer requires the with_use_flash_attn call, aligning with the codebase's shift toward enabling flash attention through feature flags instead of runtime configuration.

mistralrs-core/src/pipeline/mod.rs (1)

26-26: Removed DiffusionSpecificConfig re-export.

The export list has been updated to only include DiffusionLoader and DiffusionLoaderBuilder, removing DiffusionSpecificConfig which contained the now-removed use_flash_attn flag. This aligns with the broader refactoring that eliminates flash attention configuration from model configs.

mistralrs-core/src/vision_models/phi4/config.rs (1)

6-6: Simplified imports after removing flash attention configuration.

The import list has been cleaned up by removing serde-related imports that were only needed for the now-removed use_flash_attn field, streamlining the dependencies.

mistralrs-server/src/main.rs (2)

40-40: Import clean-up update.

The import has been simplified from use tracing::{info, warn}; to only import info since the warn import is no longer needed after removing flash attention configuration logging.


292-297: Simplified loader builder API.

The LoaderBuilder no longer requires the with_use_flash_attn call, aligning with the codebase's shift toward enabling flash attention through feature flags instead of runtime configuration.

mistralrs-core/src/pipeline/vision.rs (1)

398-398: Removed flash attention configuration from model config logging.

The info logging no longer includes flash attention configuration information since the use_flash_attn field has been removed from the config representation. This streamlines the logging output to focus on the remaining configuration parameters.

mistralrs-core/src/attention.rs (2)

285-286: Documentation improvement correctly reflects new flash attention handling.

The updated documentation accurately reflects that flash attention now uses either V2 or V3 kernels on CUDA devices, aligning with the feature flag approach rather than per-parameter configuration.


301-301: Centralized flash attention detection properly implemented.

The code now uses crate::using_flash_attn() to determine flash attention availability instead of checking a per-instance parameter. This centralizes the flash attention decision logic, making it consistent across the codebase and ensuring feature flag control.

mistralrs-core/src/lib.rs (1)

89-92: Re-export list updated to align with removed config types.

The change in re-export order and removal of DiffusionSpecificConfig aligns with the broader refactoring that removes the use_flash_attn flag from pipeline and model loading infrastructure.

mistralrs-core/src/models/gemma2.rs (1)

27-28: Config struct's derive macro updated appropriately.

Changed from serde::Serialize to serde::Deserialize, which better reflects how this configuration is used in the codebase. Note that this means the config can now be deserialized but not automatically serialized.

mistralrs-core/src/pipeline/normal.rs (1)

463-463: Simplified logging reflects config structure changes.

The logging statement has been updated to reflect the removal of the use_flash_attn field from configuration structures. The call to get_config_repr now expects a configuration without the flash attention flag.

mistralrs-pyo3/src/lib.rs (2)

26-27: Import list cleaned up after removing flash attention boolean flag

The import list has been adjusted to align with the removal of the use_flash_attn boolean flag from the codebase.


416-417: Simplified DiffusionLoaderBuilder initialization

The DiffusionLoaderBuilder::new call has been simplified to only pass the model ID, removing the DiffusionSpecificConfig that previously contained the use_flash_attn boolean flag. This aligns with the PR objective of enabling flash attention through feature flags instead of a boolean parameter.

mistralrs-core/src/model_loader.rs (2)

14-15: Import list updated after removing flash attention flag

The import list has been adjusted to reflect the removal of types related to the use_flash_attn boolean flag.


542-543: Simplified DiffusionLoaderBuilder initialization

The DiffusionLoaderBuilder::new call has been simplified to only take the model ID parameter, removing the DiffusionSpecificConfig that previously contained the use_flash_attn boolean flag. This is consistent with the PR objective of migrating from a boolean flag to feature flags for flash attention.

mistralrs/src/diffusion_model.rs (1)

80-81: Simplified DiffusionLoaderBuilder initialization

The DiffusionLoaderBuilder::new call has been simplified to only pass the model ID, removing the DiffusionSpecificConfig that previously contained the use_flash_attn boolean flag. This is part of the PR objective to enable flash attention through feature flags instead of a boolean parameter.

I also notice that the DiffusionModelBuilder struct itself has been simplified, with the use_flash_attn field being removed, which is consistent with the broader refactoring effort.

mistralrs-core/src/pipeline/diffusion.rs (1)

53-58: Simplified DiffusionLoaderBuilder constructor

The DiffusionLoaderBuilder::new method has been simplified to only accept an optional model ID parameter, removing the previously required config parameter that contained the use_flash_attn boolean flag. This is consistent with the PR objective of enabling flash attention through feature flags rather than a boolean parameter.

I also notice that the DiffusionLoaderBuilder struct itself has been simplified, no longer containing a config field, which aligns with the broader refactoring effort to remove the use_flash_attn flag from the codebase.
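
A sketch of the call-site change (the removed config type was DiffusionSpecificConfig; field names and argument order illustrative):

// Before: DiffusionLoaderBuilder::new(DiffusionSpecificConfig { use_flash_attn: false }, Some(model_id))
// After:  DiffusionLoaderBuilder::new(Some(model_id))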

mistralrs-core/src/pipeline/loaders/vision_loaders.rs (24)

91-91: Simplified trait method signature by removing flash attention parameter.

The get_config_repr method signature has been modified to remove the use_flash_attn: bool parameter, which is part of the broader refactoring to control flash attention through feature flags instead of a boolean parameter.


267-270: Config parameter handling improved for Phi3VLoader load method.

The load method now deserializes the config directly without modifying a use_flash_attn field, which aligns with using feature flags for flash attention control instead of a method parameter.


279-282: Simplified Phi3VLoader get_config_repr implementation.

The implementation now directly deserializes and returns the config without handling a use_flash_attn parameter.


527-530: Config parameter handling improved for Idefics2Loader load method.

The load method now directly deserializes the config without modifying a use_flash_attn field.


539-542: Simplified Idefics2Loader get_config_repr implementation.

Implementation now directly deserializes and returns the config without manipulating a use_flash_attn parameter.


852-855: Config parameter handling improved for LLaVANextLoader load method.

The load method now directly deserializes the config without modifying a use_flash_attn field.


864-867: Simplified LLaVANextLoader get_config_repr implementation.

Implementation now directly deserializes and returns the config without manipulating a use_flash_attn parameter.


1097-1100: Config parameter handling improved for LLaVALoader load method.

The load method now directly deserializes the config without modifying a use_flash_attn field.


1109-1112: Simplified LLaVALoader get_config_repr implementation.

Implementation now directly deserializes and returns the config without manipulating a use_flash_attn parameter.


1334-1337: Config parameter handling improved for VLlamaLoader load method.

The load method now directly deserializes the config without modifying a use_flash_attn field.


1346-1349: Simplified VLlamaLoader get_config_repr implementation.

Implementation now directly deserializes and returns the config without manipulating a use_flash_attn parameter.


1726-1729: Simplified Qwen2VLLoader get_config_repr implementation.

Implementation now directly deserializes and returns the config without manipulating a use_flash_attn parameter.


1991-1994: Config parameter handling improved for Idefics3Loader load method.

The load method now directly deserializes the config without modifying a use_flash_attn field.


2003-2006: Simplified Idefics3Loader get_config_repr implementation.

Implementation now directly deserializes and returns the config without manipulating a use_flash_attn parameter.


2270-2273: Config parameter handling improved for MiniCpmOLoader load method.

The load method now directly deserializes the config without modifying a use_flash_attn field.


2282-2285: Simplified MiniCpmOLoader get_config_repr implementation.

Implementation now directly deserializes and returns the config without manipulating a use_flash_attn parameter.


2543-2546: Config parameter handling improved for Phi4MMLoader load method.

The load method now directly deserializes the config without modifying a use_flash_attn field.


2555-2558: Simplified Phi4MMLoader get_config_repr implementation.

Implementation now directly deserializes and returns the config without manipulating a use_flash_attn parameter.


2871-2874: Simplified Qwen2_5VLLoader get_config_repr implementation.

Implementation now directly deserializes and returns the config without manipulating a use_flash_attn parameter.


3146-3149: Simplified Gemma3Loader get_config_repr implementation.

Implementation now directly deserializes and returns the config without manipulating a use_flash_attn parameter.


3448-3451: Config parameter handling improved for Mistral3Loader load method.

The load method now directly deserializes the config without modifying a use_flash_attn field.


3460-3463: Simplified Mistral3Loader get_config_repr implementation.

Implementation now directly deserializes and returns the config without manipulating a use_flash_attn parameter.


3749-3752: Config parameter handling improved for VLlama4Loader load method.

The load method now directly deserializes the config without modifying a use_flash_attn field.


3761-3764: Simplified VLlama4Loader get_config_repr implementation.

Implementation now directly deserializes and returns the config without manipulating a use_flash_attn parameter.

Slowki (Collaborator) commented May 7, 2025

Nothing major, I already have to do some rebasing anyway so it's no big deal!

EricLBuehler (Owner, Author) commented

Sounds great, wanted to make sure!

EricLBuehler merged commit 99ea36c into master on May 7, 2025.
13 checks passed.
EricLBuehler deleted the use_flash_attn_auto branch on May 7, 2025 at 23:27.
Jeadie added a commit to spiceai/mistral.rs that referenced this pull request Jul 14, 2025
* Fix handling of Metal fused attn head dims (EricLBuehler#1234)

* Fix handling of metal attn head dims

* Fix handling of gemma3 1b when images

* Tweak default for paged attn builder

* Support paged attn for vision model rust api (EricLBuehler#1235)

* [Breaking] Support setting HF cache path (EricLBuehler#1237)

* Add it internally

* Add the apis

* Support tool calling for DeepSeek models (EricLBuehler#1239)

* Support tool calling for deepseek models

* Format

* Fix deepseek

* Server image processing refactor and fixes (EricLBuehler#1244)

* Fix strict gemma3 case

* Accept multiple images in the content array

* Fix multiple images in one array ct

* Add it to the python api

* Typos

* Optimized CUDA RoPE kernels (EricLBuehler#1247)

* Add the kernels

* It works

* Works

* Buulds

* Typo fix (add_speial_tokens to add_special_tokens) (EricLBuehler#1246)

* Fix typo

* Update mistralrs.pyi

* Fixes for UQFF + distributed layers (EricLBuehler#1250)

* Fixes for uqff + distributed layers

* Typo

* Automatic agentic search integration (`web_search_options`) (EricLBuehler#1243)

* Add the tool

* Actually search

* Clippy

* Sort of works

* Remove some debuggers

* tweak

* Add some rules

* Works great

* Tweak 'system' prompt

* Update mistralrs-core/src/search/mod.rs

Co-authored-by: Copilot <[email protected]>

* Typo

* Add it to all the apis

* Add bert model for similarity reranking

* Typos

* Early detection of tools

* Alias max_tokens -> max_completion_tokens too

* Customizable bert model

* Flip the enabler around

* Add docs

* Update readme

* Typo

---------

Co-authored-by: Copilot <[email protected]>

* Format kernels (EricLBuehler#1251)

* Update readme

* Update readme

* Remove test

* Add quantize guards for uqff deserialize (EricLBuehler#1252)

* Refactor cuBLASlt-related code (EricLBuehler#1253)

* Centralize cublaslt into mistralrs-quant

* Use cublaslt in unquant layer

* Use beautiful trait constants for simpler code

* Move tests

* Dispatch to unquant for cublaslt

* Dispatch to unquant for cublaslt

* Fix feature

* Add convert_to_gptq script

* Update deps, bump pyo3 version (EricLBuehler#1259)

* Faster cuda FP8 performance (EricLBuehler#1257)

* Avoid fp8 sync

* Fix dtype

* Rust 1.86 clippy (EricLBuehler#1260)

* Rust 1.86 clippy

* Clippy

* Refactor engine arch (EricLBuehler#1262)

* Refactor engine add_request

* Don't recompile regex

* Clippy

* Revamped LoRA support - removing the Ordering system! (EricLBuehler#1263)

* Play with varbuilder lifetimes

* Merge lora weights

* Clippy

* Lora works

* Support multiple loras

* Cleanup, remove adapter activation

* Complete merge

* Fast Metal-specific quantization method: AFQ (EricLBuehler#1264)

* Add mlx quantized kernels

* Add mlx quantized kernels

* Kernel launcher

* Add AFQ isq quant and dequant

* Some quantmethod things

* Begin to implement the qmm caller

* Clippy

* Much faster

* Cache kernels

* Docs

* Clippy

* Add it to uqff

* Support prequantized models from MLX (EricLBuehler#1265)

* Refactor quantizedconfig

* Support AFQ prequantized

* Update docs

* Update docs

* Automatic ISQ to select fastest & most accurate method (EricLBuehler#1266)

* Automatic isq

* typo

* Doc

* Improved usage metrics (EricLBuehler#1267)

* Fix cuda

* Bump tokio from 1.44.1 to 1.44.2 (EricLBuehler#1270)

Bumps [tokio](https://github.com/tokio-rs/tokio) from 1.44.1 to 1.44.2.
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](tokio-rs/tokio@tokio-1.44.1...tokio-1.44.2)

---
updated-dependencies:
- dependency-name: tokio
  dependency-version: 1.44.2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Gather MM ops in mistralrs-quant (EricLBuehler#1272)

* Update the caller

* Wire things up

* Broadcase for afq gathermm

* Broadcase for afq gathermm

* Clippy

* Improve performance of deepseek models

* Typo fix

* BincountOp not used

* Implement Llama 4! (EricLBuehler#1268)

* Implement Llama 4

* Implement the main changes for the text model

* Make chunked mask

* Wire things up

* Add some EP

* Initial sketch of inputs processor

* Runs

* Progress

* all reduce moes

* It works!

* Some cleanup

* Faster moe block

* Add device map

* Make chunked matrix

* Fully working now!

* Reactivate cublaslt

* Fix shared mlp cublaslt

* Refactor to packed experts

* Complete merge

* It is a normal model now

* Fixes

* Set device for moe

* ISQ fixes

* Much faster sort kernel

* Faster loading!

* Faster loading!

* Fp8 cpu copy ops in candle backend

* Add the vision model

* Add mmproj layer

* Actually merge the inputs

* Sketch most of the image processor

* Add the rest of the image processor

* Implement the whole processor

* Add the loader

* Some fixes

* A batch of fixes

* Some fixes

* tmp

* Actually support isq

* Ok it works a bit

* Fix norm device

* It works

* A bit cleaner

* Support residul tensors

* Remove text loader

* Implement the device mapping system

* Fix auto device map

* Add examples

* Add model card

* Typo

* Remove superflous logging

* Fixes for Llama 4 UQFF loading (EricLBuehler#1275)

* Support sharding for UQFF (EricLBuehler#1276)

* Serialize sharded uqff files

* Loading

* Fix base64

* Fix bug for group-topk (group_limited_greedy) in deepseek models (EricLBuehler#1278)

* Support the DeepCoder model (EricLBuehler#1279)

* Add faq for metal not found

* Improved PagedAttn scheduling accuracy (EricLBuehler#1282)

* Scheduler ops by reference

* Ensure scheduler gets correct prompts

* Fix cuda build for copy_blocks

* Fixes for scheduling image seqs with pagedattn (EricLBuehler#1283)

* update to llguidance 0.7.16 (EricLBuehler#1284)

* update llguidance to 0.7.16 from crates.io; use ParserFactory

* add lark_llg.py example

* use new llguidance::Matcher APIs

* rework spec-decoding with llg

* more work on spec sampling

* check for parser stop

* fix clippy

* remove unneeded rollback

* update build_llg_factory to return Result

* Update dependencies (EricLBuehler#1286)

* Much faster image inputs processing (EricLBuehler#1289)

* Add more SDPA head dims for much faster SigLIP (EricLBuehler#1290)

* More sdpa head dims, faster vision models

* Move nonzero to above for faster metal synch

* Doc

* Update valid head dims

* Show throughput in interactive mode (EricLBuehler#1291)

* Update interactive mode throughput stats

* Accurate prompt t/s

* Accurate prompt t/s for usage

* Unify bitwise operations (EricLBuehler#1288)

* Unify bitwise ops

* Tests pass

* Fix cuda build

* Clippy

* Multimodal prefix caching support! (EricLBuehler#1298)

* Initial progress

* Support vision prefix caching

* Update docs

* Add multimodal data abstraction

* Interactive mode improvements (EricLBuehler#1299)

* More ergonomic image url parsing

* Add option to clear

* Add the Qwen 3 and Qwen 3 MoE models! (EricLBuehler#1285)

* Add qwen3 model

* Add enable_thinking

* Add initial qwen3 moe

* Add the moe model

* Format

* Fix order of norm

* Fix expert shapes

* Fix reverse

* Fix norm device for isq

* Fix nonzero when no nonzero

* Moe model runs

* Working qwen3 moe

* Add metal fp8 blockwise dequant

* Clean

* Typo

* Enable tool calling

* Streamlined ux

* Add some examples

* Add docs

* Fix dead link

* Remove interactive mode max_len

* Update QWEN3.md

* Hotfix for vision mode clear

* Revamped and streaming web search support (EricLBuehler#1301)

* Streaming web search

* Refactor a bit

* More refactoring

* Add some logging, parallelize some things

* Allow url

* Suppress warning, allow multi-turn searching

* Batch compute_similarities

* Cap content len

* Typos

* Doc

* Handle vision messages or different tool call prefixes (EricLBuehler#1302)

* Fix cuda

* Tune web search budget

* Simplify prefix cacher (EricLBuehler#1305)

* Use rustyline to handle non-ascii in interactive mode (EricLBuehler#1306)

The io::stdin().read_line() cannot handle non-ascii input, which caused
crash when use backspace to delete non-ascii characters.

Introduce rustyline to the interactive mode to solve the problem. Plus
it can bring more editing features in the future.

Close EricLBuehler#1140

* Add more tools for automatic search (EricLBuehler#1307)

* Add interactive mode history

* Add a website extraction tool

* Pass toks by reference

* Optimize prompt chunking

* Fix CPU hogging in interactive mode (EricLBuehler#1309)

The log enabler should be checked after the sleep instead of a busy
loop checking.

Since the interactive mode always disables the token speed logger, 100%
CPU was taken by this loop always.

* Add Metal precompilation support  (EricLBuehler#1311)

* Add metal precompilation for paged attn

* Add for mistralrs-quant

* Better constructor

* Dont always build

* Fix name for paged attn rebuild

* Reduce thrashing of Metal autorelease (EricLBuehler#1313)

* Reduce calls to autorelease

* Optimize clone_in_cache

* Refactor float8

* make `AdapterPaths` and `LoraAdapterPaths` public (EricLBuehler#1314)

Make `AdapterPaths` and `LoraAdapterPaths` public so `LocalModelPaths`
can be constructed outside of `mistralrs-core`.

* Refactor KV cache manager (EricLBuehler#1315)

* Refactor kv cache

* Refactor caches

* Fix some overflows

* Add `Audio` and `Speech` model categories (EricLBuehler#1317)

* add `Audio` to `ModelCategory`

* add `Speech` to `ModelCategory`

* fix to go back to PartialEq having an exhaustiveness check

* Remove has_conv2d from vision model API (EricLBuehler#1318)

* Unified/automatic flash attention enabler (EricLBuehler#1319)

* Remove from sdpa params

* Fix errors

* No warnings

* Log

* Clippy

* Fix cublaslt 4d mask (EricLBuehler#1320)

* Fix cublaslt 4d mask

* Clippy

* Keep caches on gpu

* Qwen VL models fixes (EricLBuehler#1322)

* Add some defaults

* Fix

* Fix one thing

* 2.5 vl works

* Use caching again

* Fix v2

* Move index inside loop

* Offset in ropeidx

* Default support for vision prefix caching is false

* Fixes for all vision models (EricLBuehler#1323)

* Fix phi input processor?

* Fix phi input processor

* Handle no_prefix_cache from pipeline

* Phi models confirmed 👍

* Fixed for phi inputs processors

* Fixed for phi4

* Llama 3 confirmed 😀

* Mistral 3 confirmed 😃

* Idefics 2/3 fixes

* Some fixes

* Remove unsafety

* Improved+faster LRU prefix cacher (EricLBuehler#1321)

* Show TTFT

* Use LRU prefix cacher

* Faster prefix cacher

* Inplace ISQ support and default to mmap (EricLBuehler#1277)

* Initial impl of immediate isq

* Immediate isq -> !loading_isq

* Varbuiler utils always using mmap!

* Log

* Add for packed experts

* Afq without copy

* Clarify

* Clippy

* Apple immediate isq

* Better logic for loading_isq

* Support showing ttft

* Rename

* Shared quantize guard

* Parallel progress bar

* Parallel loading for progress bars

* Actual ISQ support

* Conditional parallelism for NiceProgressBar

* Use conditional iterator

* Warn once

* Predicate for applying immediate isq

* Allow parallel

* Remove debug print

* Remove debug print

* Remove debug print

* Fix typos (EricLBuehler#1329)

* Fix Idefics 3 arch chat templating (EricLBuehler#1330)

* Update inputs merger

* Fix

* Better warning

* Better warning

* Better warning

* Nonzero ahead of time

* No f32

* Clippy

* Optimize get_logprobs

* Fix packed experts

* Update masking

* Use Sdpa in idefics3

* QuantMethod in idefics3 vision

* Remove a .contiguous

* Remove two space from PR comment (EricLBuehler#1331)

* Add automatic vision loader type (EricLBuehler#1332)

* Add automatic vision loader

* Remove references to --arch

* Update examples

* Add the Dia 1.6b TTS model! (EricLBuehler#1304)

* Add loading

* Add rope, mlp, most of attn

* Add encoder + encoder layer, decoder layer forwards

* Add decoder forwards

* Add prepare_audio_prompt

* prepare_generation mostly done

* Add a proper dia kvcache

* Add most of decoder_step

* Add the sampler

* Add the generation loop

* Wire things up

* Add speech pipeline

* Fixes

* Loads

* Some fixes

* f32

* Some progress

* Ok it runs upto dac decoding

* Add dac part loading

* Loads and runs at least

* Remove encodec

* Debugging

* Debugging

* Huh

* Complete merge

* Interactive

* Confirmed dac works at least

* Looks like encoder works

* Much progress

* Hmm

* Sampling

* Almost there

* Sampler

* Sampler

* Bf16 support

* Response

* Use it in interactive mode

* Fix oneshot

* Add openai api

* Add openai api

* Refactor loading

* Use naive sdpa for inplace

* Factor out

* Clippy

* Clippy

* Config

* Refactor config

* Metal clippy

* Fix t/s

* ISQ support

* Some fixes, nits

* Fix cuda

* Clippy

* Inhibit cublaslt for cuda

* Add server example

* Add python example

* Add rust api

* Add docs

* Update config.toml

* Fix .pyi

* Update readme

* config.toml tweak

* config.toml tweak

* config.toml tweak

* config.toml tweak

* config.toml tweak

* config.toml tweak

* config.toml tweak

* config.toml tweak

* config.toml tweak

* update `llguidance` to `0.7.20` (EricLBuehler#1334)

Update `llguidance` from `0.7.16` to `0.7.20` so that it has guidance-ai/llguidance#172 which is a fix for building on GCC 15.

* Add model category <> messages check (EricLBuehler#1335)

* Verify model category matches the messages

* Add vision chat

* Fixes

* Add element-wise normalization check (EricLBuehler#1340)

* Fix streaming example print statement (EricLBuehler#1339)

* Fix normalization formula in comment (EricLBuehler#1338)

* Fix image_to_pixels to handle non-RGB images (EricLBuehler#1337)

* Fix typo in expect messages (EricLBuehler#1342)

* Don't use mmap on cuda (EricLBuehler#1336)

* No mmap on cuda

* Simplify streaming tool call logic

* Remove debug

* Support AWQ format models (EricLBuehler#1350)

* Support AWQ format models

* Clippy fix

* Fix uqff dummy layer ISQ application (EricLBuehler#1351)

* Disable immediate isq if write_uqff (EricLBuehler#1352)

* Fixes for UQFF loading on CUDA, ISQ pack factor (EricLBuehler#1354)

* Fix logic for uqff on cuda

* Updated pack_factor

* Refactor Option references for model paths (EricLBuehler#1347)

* refactor: use Option refs in model path helpers

* Format

* Add a script for server benchmarking (EricLBuehler#1355)

* Serde alias

* Fix

* Update for tie_word_embeddings

* Print running/waiting

* 30 users

* Update num_users

* Update dummy paged attn

* Optimized Metal qmv_fast path (EricLBuehler#1356)

* Compile with lto

* Tweak profiles

* New, fast sampler for Metal! (EricLBuehler#1327)

* Show TTFT

* Use LRU prefix cacher

* Faster prefix cacher

* A bit of gpu sampling

* Minp but cpu for now

* Metal fast cumsum impl

* Sampling with fast topp kernel

* Hmm not perfect

* Add metal sort kernels

* Tmp

* Add single block sort

* Add most of multi block sort, just need copy op

* Add copy kernels

* Expose kernels

* Add a test

* Ok it works

* Structure things

* Add caching

* Rename

* Cpu is default

* CUDA case

* Topk

* Refactor Option references for model paths (EricLBuehler#1347)

* refactor: use Option refs in model path helpers

* Format

* Add a script for server benchmarking (EricLBuehler#1355)

* Serde alias

* Fix

* Update for tie_word_embeddings

* Print running/waiting

* 30 users

* Update num_users

* Update dummy paged attn

* Optimized Metal qmv_fast path (EricLBuehler#1356)

* Compile with lto

* Tweak profiles

* Fix topk

* Penalties

* Add logits processor, clippy fixes

* Fix chat port

* Remove warning

* Fix chat port

* Fix metal parallel sampling (EricLBuehler#1357)

* Cpu if parallel for now

* Tweak bench script

* Add immediate isq predicates for qwen3 (EricLBuehler#1358)

* Add immediate isq predicates for qwen3

* Fix parsing of "parse_isq_value" depedent of device

* Typo

* Fix gemma3 logging

* Regressions fixes (EricLBuehler#1359)

* Fix regression for mmap

* Revert EricLBuehler#1321

* Refactored matching_cache impl

* Clippy

* Revamped and smaller readme (EricLBuehler#1360)

* Expandable detail sections

* Refactor using derivative model

* Tweak quick examples

* Update llama

* Update llama

* Supported accelerators is a table

* Update installation guides

* Tweak apis

* Remove --port in quick examples

* Add demo gif

* Add gif in readme

* Update demo gif

* Update demo gif

* Update demo gif

* Add gif in readme

* Add gif in readme

* Add a web chat app! (EricLBuehler#1362)

* Initial

* Markdown

* Copy code

* Add model loading sidebar

* Support vision models

* Tweak isq

* Links go to another page

* Clear when switch model

* Fix html tags

* Add image support!

* More then one images

* Fix

* Improved textarea

* Tab for switching between vision and text

* No paged attn for now

* Prettier format

* Multiple models at once

* Better switching, clearing ability

* Mobile support

* Inline markdown parser

* Update examples

* Typos

* Support specifying isq

* Fix mobile

* Fixes

* Fix button on mobile

* Image height is capped

* Thumbnail

* Fix rotating kv cache edge case

* Add drag and drop for images

* Small things

* Sidebar is frozen now

* Better listner

* Add readme

* Tweak readme

* Add chat history support to web chat app (EricLBuehler#1363)

* Add chat history

* Support renaming

* Start immediately with new chat

* Add timestamp

* Prettier chat list

* Style

* Delete chat

* Fix copy button

* Fix markdown rendering

* Store things in cache

* Store things in cache

* Refactor web chat, fix multichat image restore (EricLBuehler#1364)

* Fix multichat image restoration.

* Clippy

* Refactor

* Refactor frontent

* Fix repeated immediate isq init (EricLBuehler#1365)

* Add images_ref

* Add debug impl

* Fix the bug

* Tweak style of buttons

* Add a spinner

* Move spinner

* Tweak emoji

* Add gif

* Tweak initial gif

* Include vision tower tensors in Mistral3 UQFF (EricLBuehler#1366)

* Fix mistral 3 uqff residual tensors for vision

* Rolling shard creation for uqff files (EricLBuehler#1367)

* Fix occasional instability during isq of afq (EricLBuehler#1368)

* Fix instability during isq of afq

* Clippy

* Fix web chat installation

* Support web chat file uploading (EricLBuehler#1370)

* Web chat fixes

* Fix thumbnail in message, reuse blank chat

* Add file uploading support

* Fix scroll

* Allowed extensions

* Preserve files as literals

* Support multiple clients

* Add a stop button

* New cache dir

* New cache dir

* Fix

* Refactor

* Update readme

* Tweak drag-and-drop css

* Add speech generation support to the web chat! (EricLBuehler#1373)

* Initial speech gen support for web chat

* Tweak ui

* Update docs

* Prefix caching for PagedAttention! (EricLBuehler#1369)

* Exposing some things for logical token blocks

* Prefix cache manager has the scheduler

* Refactor

* Get logical and physical blocks into the prefix cacher

* Hash and cache

* Pass physical block prefill

* Allocation of prefilled block tables

* Temp

* Don't always use 2

* Hmm

* Hmm

* It mostly works

* Increment refcount

* Support images!

* Add to dummy paged attn

* Fix some clippy

* Clippy

* More checks

* Include EricLBuehler#1371, closes EricLBuehler#1371

* Typos

* Update docs
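
For context on the prefix-caching commits above ("Hash and cache", "Increment refcount"), here is a minimal sketch of keying physical KV blocks by token-prefix hash; the types and names are illustrative, not the actual mistral.rs internals:

```rust
use std::collections::{hash_map::DefaultHasher, HashMap};
use std::hash::{Hash, Hasher};

// Key a filled physical block by the hash of every token up to and
// including its logical block, so identical prompt prefixes hit the
// same cached block.
fn block_hash(prefix_tokens: &[u32]) -> u64 {
    let mut h = DefaultHasher::new();
    prefix_tokens.hash(&mut h);
    h.finish()
}

struct PrefixCache {
    // prefix hash -> (physical block index, reference count)
    blocks: HashMap<u64, (usize, usize)>,
}

impl PrefixCache {
    // Reuse an already-filled block for this prefix if present, bumping
    // its refcount so the allocator won't free it while it is shared.
    fn lookup(&mut self, prefix_tokens: &[u32]) -> Option<usize> {
        self.blocks
            .get_mut(&block_hash(prefix_tokens))
            .map(|(idx, rc)| {
                *rc += 1;
                *idx
            })
    }
}
```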

* Metal PagedAttention accuracy improvements (EricLBuehler#1374)

* Fix subtle bug

* Fix half sum bug

* Format metal paged attention

* Handle images in paged attn scheduler (EricLBuehler#1375)

* Include schemas needed for chatcompletions endpoint (EricLBuehler#1353)

* EricLBuehler#1326: WIP include schemas needed for chat completions endpoint

 Conflicts:
	Cargo.lock
	mistralrs-server/src/main.rs

* EricLBuehler#1326: WIP define utoipa as a workspace dep since core and server both need it

* EricLBuehler#1326: first draft of handling schemas that use Either

* EricLBuehler#1326: first draft of handling schema for Grammar

* EricLBuehler#1326: Add in other endpoints to API docs.

* EricLBuehler#1326: Adjust code comments

* EricLBuehler#1326: Implement coderabbitai suggestions

- EricLBuehler#1353 (review)
- EricLBuehler#1353 (comment)

* Fix constraints with metal sampler

* Revert EricLBuehler#1375

* Fix case where prefix cacher returns no toks (EricLBuehler#1377)

* Fix AFQ UQFF serialization

* Faster UQFF serialization (EricLBuehler#1379)

* Faster UQFF serialization

* Fix uqff gemma3

* Improve gemma3 auto loader names

* UQFF creation for AFQ on CPU support (EricLBuehler#1380)

* Add afq cpu quantize/dequantize

* Clippy

* Improved device for afq quantize

* Improved dtype handling for cpu afq (de)quantize

* Improved generate_uqff_card

* Add fused CPU attention kernel! (EricLBuehler#1382)

* Working

* Fix warnings

* Allow mask

* Support bf16, f16

* Handle striding

* Parallelized

* Add initial vector flash attn

* Avoid repeated allocations

* Tiled kv

* Apply some clippy

* Some small fixes

* Chunked vec_dot

* Clipy

* Use T::zero
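
The "Chunked vec_dot" step above is the hot loop of the CPU kernel. A minimal sketch of the idea follows: fixed-width chunks so the autovectorizer can keep lanes busy, with a scalar tail for the remainder. The chunk width is an assumption, not the kernel's actual tile size:

```rust
// Dot product in fixed-size chunks with a scalar tail. Accumulating into
// a small array of partial sums lets the compiler auto-vectorize the
// inner loop.
fn vec_dot(a: &[f32], b: &[f32]) -> f32 {
    const CHUNK: usize = 8;
    debug_assert_eq!(a.len(), b.len());
    let mut acc = [0.0_f32; CHUNK];
    let chunks = a.len() / CHUNK;
    for c in 0..chunks {
        for j in 0..CHUNK {
            acc[j] += a[c * CHUNK + j] * b[c * CHUNK + j];
        }
    }
    let mut sum: f32 = acc.iter().sum();
    for i in chunks * CHUNK..a.len() {
        sum += a[i] * b[i];
    }
    sum
}
```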

* Refactor attention backends (EricLBuehler#1384)

* Refactor attention code

* Refactor attention code

* Move into backends

* Set macOS thread affinity for CPU attn (EricLBuehler#1385)

* Use lazylock

* Format

* Fix metal warn build

* Faster Qwen 3 MoE support on Metal (EricLBuehler#1387)

* Fix load

* Use afq gather qmm

* Well it runs

* It works

* Polish

* Fast and slow options

* Remove quantized.rs

* Polish some more

* Refactor

* Add isq

* Update load in parallel

* Support fp8

* Refactor for FusedExperts

* Clippy

* Handle pack factor when loading prequantized models

* Use f32 only in moe

* Avoid using f32 so much

* Avoid using f32 so much

* Fix PagedAttention block leaks (EricLBuehler#1388)

* Warn and ignore if ignored

* Fix a block allocation leak

* Update bench.py

* Fix double free in block engine

* Do not apply ISQ if loading a prequantized model

* Fix cuda build again (EricLBuehler#1389)

* Fix cuda build

* Fix

* Format

* Fixes for cuda docker

* Update dockerfiles

* Bump version to 0.6.0 (EricLBuehler#1390)

* Bump version to 0.6.0

* Remove lower_level api

* Make a static dir

* Update deps

* Fix routing for static handler in web chat

* Fewer .contiguous calls for qwen3 moe (EricLBuehler#1391)

* Allow speech models to accept batched inputs (EricLBuehler#1393)

* Allow speech models to accept batched inputs

* Clippy

* Ring distributed backend for heterogeneous TP (EricLBuehler#1238)

* Begin work on ring distributed backend for Metal

* Add the actual ring functionality

* It loads and kind of runs

* It works

* Optimize buffer allocation

* Avoid copy

* It works

* Add allgather

* Fix load

* Ping-pong

* Small things

* Add config json

* Allow different ip address

* Read config once

* Read config when appropriate

* Replicate requests

* Small fix

* Fix small compat with openai

* Clippy

* Update docs
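
For the all-gather mentioned above, a hedged sketch of the classic ring schedule: in each of the N-1 rounds a node sends its newest chunk right and receives one from the left. The transport is abstracted away and all names are illustrative:

```rust
// One ring all-gather over `world` nodes: after world - 1 rounds every
// node holds every chunk. `send`/`recv` stand in for the TCP transport.
fn ring_allgather(
    rank: usize,
    world: usize,
    chunks: &mut [Option<Vec<f32>>],
    send: impl Fn(usize, &[f32]),
    recv: impl Fn(usize) -> Vec<f32>,
) {
    let right = (rank + 1) % world;
    let left = (rank + world - 1) % world;
    for step in 0..world - 1 {
        let send_idx = (rank + world - step) % world;
        let recv_idx = (rank + world - step - 1) % world;
        send(right, chunks[send_idx].as_ref().unwrap());
        chunks[recv_idx] = Some(recv(left));
    }
}
```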

* Add deepseek tool calling chat template

* Add auto loader for vision/text detection! (EricLBuehler#1402)

* Add auto loader for vision/text detection

* Build fixes

* Add model loader

* Update docs

* Format

* Create Mistral.rs Server Core Lib: `mistralrs-server-core` (EricLBuehler#1346)

* First draft of exposing mistral server routes as lib

* make arg struct fields pub

* Take base path so utoipa swagger route can properly redirect

* Expose swagger routes and make it configurable

* Add base path option for swagger docs

* More work on modularizing mistralrs server

* Sync fork (+1 squashed commit)
Squashed commits:
[169ae9e] Sync fork

* Adjust fn params to use refs / individual params instead of args

* Start breaking down controller actions into smaller pieces

* Continue refactoring

* Make mods pub so they can be used outside crate

* Allow chat completion streamer to take a callback so that you can get the complete response when finished

WIP (+3 squashed commits)
Squashed commits:
[0061d87] WIP
[c484d56] WIP
[16f8a60] WIP

* Sync fork

* Adjust callback type

* Remove throughput_log arg that was removed in 26afcc3

* Implement defaults for Args (and use for Clap)

* Small code formatting tweaks

* Rename callback to match SSE event and code clean up

* Sync fork

* WIP: first very rough draft of server core builder. Doesn't meet parity with old functional approach yet (slower / unstable?).

* Clean up (+4 squashed commits)
Squashed commits:
[e1cff387] Sync fork
[d8301025] WIP debugging
[1ea9f8c8] Sync fork
[4fe28cf5] WIP: debug function

* WIP server core builders

* Code clean up

* Add on_chunk callback

* Code clean up

* First draft of creating version of mistral-server that uses server-core

Code clean up (+1 squashed commit)
Squashed commits:
[adea1693]

* Sync fork

* Add helper methods to builder to make optional args more ergonomic (since .build validates params)

* Start adding docs

* Start cleaning up crates deps

* Example commit of mistral-server with implementing server-core

* Start addressing CodeRabbit feedback

* Fix comment typo

* Tweak doc blocks

* - Update type alias naming for clarity (MistralRs instead of Mistral)
- CodeRabbit, don't use eprintln for lib (use trace)
- Allow buffer size to be passed in and default to Constant
- Allow router body limit to be passed in and default to Constant
- Update doc examples

* Typo

* Address CodeRabbitAI feedback

* Support linear rope for llama3 (EricLBuehler#1408)
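
"Linear rope" here refers to linear RoPE position scaling. A minimal sketch of the angle computation, assuming the usual `theta`/`factor` parameters (illustrative, not the llama3 config field names):

```rust
// Linear RoPE scaling: divide the position by `factor` before computing
// the rotary angles, stretching the usable context window.
fn rope_angles_linear(pos: usize, head_dim: usize, theta: f32, factor: f32) -> Vec<f32> {
    let p = pos as f32 / factor;
    (0..head_dim / 2)
        .map(|i| {
            let inv_freq = theta.powf(-2.0 * i as f32 / head_dim as f32);
            p * inv_freq
        })
        .collect()
}
```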

* Hotfix for loading

* Fix vllama4 uqff loading (EricLBuehler#1409)

* Fix vllama4 uqff loading

* Fix regex

* Fix regex

* Maybe a fix

* Gracefully handle receiver disconnects (EricLBuehler#1410)

* Handle receiver disconnects

* Format

* Fix Qwen3 MoE device mapping irregularities (EricLBuehler#1411)

* Fix bias

* Fix lm_head packing case

* Account for gate

* Fix head dim

* Fix interactive mode URL parsing (EricLBuehler#1412)

* fix url regex in vision interactive mode

* Fix regex

* Clippy

* Refactor auto device map (EricLBuehler#1413)

* Refactor auto device map

* Refactor a bit more

* Clippy

* Enable runtime sampling tweaks in interactive mode (EricLBuehler#1414)

* Document runtime sampling commands

* Fix readme

* Tweak

* Bounds checking

* Tweak temp bounds

* Send streaming tokens every time
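
The "Bounds checking" commits above guard the runtime overrides. A small sketch of validating a temperature command; the 0.0..=2.0 range is an assumption, not the actual accepted bounds:

```rust
// Parse and range-check a runtime temperature override from interactive
// mode input, rejecting values outside the (assumed) accepted range.
fn parse_temperature(arg: &str) -> Result<f32, String> {
    let t: f32 = arg
        .parse()
        .map_err(|e| format!("invalid temperature: {e}"))?;
    if !(0.0..=2.0).contains(&t) {
        return Err(format!("temperature {t} out of range (0.0..=2.0)"));
    }
    Ok(t)
}
```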

* Gumbel sampling for fast sampler (EricLBuehler#1416)
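
Gumbel sampling refers to the Gumbel-max trick: adding Gumbel(0, 1) noise to the logits and taking the argmax yields an exact sample from the softmax distribution, turning sampling into a reduction that parallelizes well on GPU. A minimal CPU-side sketch (the Metal kernel itself is not shown):

```rust
// Gumbel-max trick: argmax(logits + g), g ~ Gumbel(0, 1), samples from
// softmax(logits). `uniforms` holds one i.i.d. U(0, 1) draw per logit;
// -ln(-ln(u)) transforms it into Gumbel noise.
fn gumbel_argmax(logits: &[f32], uniforms: &[f32]) -> usize {
    logits
        .iter()
        .zip(uniforms)
        .map(|(&l, &u)| l - (-u.ln()).ln())
        .enumerate()
        .max_by(|a, b| a.1.total_cmp(&b.1))
        .map(|(i, _)| i)
        .unwrap_or(0)
}
```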

* Improved handling for initialize_logging

* Improved CPU flash attention accuracy & performance (EricLBuehler#1417)

* Downcast correctly

* Operate internally in f32

* Avoid some casts and striding

* Prefetch

* Provide chat_templates to container users (EricLBuehler#1419)

Models often ship without chat templates, which then have to be mapped
from the source repository into the container for use by the
mistralrs-server.

Copy the templates from the build tree into the root of the image
to permit use via `--chat-template /chat_templates/something.json`

TODO:
  With the increase in quantized models and support for other formats,
the initial benchmark run during model load could be used to qualify and
select among chat templates embedded in the binary for models that ship
without one, reporting the functional failures from each test so users
can adapt the provided templates to the model being loaded.

Co-authored-by: RageLtMan <rageltman [at] sempervictus>

* Faster cpu flash attn (EricLBuehler#1418)

* Faster cpu flash attn

* Prefetch

* Clippy

* Add some tests

* Add softcap tests

* Fix test_parse_image_url test

* Update tests

* Update tests

* Web search improvements (bm25, web chat) (EricLBuehler#1420)

* Fix web search blocking case

* Web search support in web chat

* Tweak ui

* Support fallback to bm25

* Clippy

* Reinject descriptions
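
BM25 here is the classic lexical ranking function used as the fallback when no embedding reranker is available. A self-contained sketch with the conventional k1 = 1.2, b = 0.75 defaults (the signature is illustrative):

```rust
// Score one document against a query with BM25. `doc_freq` reports how
// many documents in the corpus contain a term.
fn bm25_score(
    query_terms: &[&str],
    doc_terms: &[&str],
    avg_doc_len: f32,
    n_docs: usize,
    doc_freq: impl Fn(&str) -> usize,
) -> f32 {
    const K1: f32 = 1.2;
    const B: f32 = 0.75;
    let dl = doc_terms.len() as f32;
    query_terms
        .iter()
        .map(|&t| {
            let tf = doc_terms.iter().filter(|&&d| d == t).count() as f32;
            let df = doc_freq(t) as f32;
            // Smoothed IDF, kept positive even for very common terms.
            let idf = ((n_docs as f32 - df + 0.5) / (df + 0.5) + 1.0).ln();
            idf * (tf * (K1 + 1.0)) / (tf + K1 * (1.0 - B + B * dl / avg_doc_len))
        })
        .sum()
}
```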

* Propely handle consecutive searches (EricLBuehler#1421)

* Update extraction tool reinjection

* Looped

* Update docs (EricLBuehler#1422)

- lib.rs: clean up example var names and match logging change from EricLBuehler@201d6be
- server_builder: fix typo
- READMEs: link to crate docs

* Better tool call detection logic (EricLBuehler#1424)

* Add web search hook callbacks (EricLBuehler#1426)

* feat: add customizable search hook

* Move to builder

* Update docs

* Fix CUDA context switching, bind thread on CudaStorage drop (EricLBuehler#1428)

* Add CUDA context helper and use in Llama forward

* No flashparams?

* working

* Tweak

* Update to use dep

* conditionally build flash attention inputs (EricLBuehler#1429)

* Add AGENTS.md (EricLBuehler#1430)

* Support Qwen3 GGUF model (EricLBuehler#1432)

* Support Qwen3 GGUF model

* Clippy fix

* cargo fmt

* Improved paged attn prefix caching (EricLBuehler#1434)

* Improved paged attn prefix caching

* Disable

* Clippy

* Temporary fix for qwen3 gguf tokenizer (EricLBuehler#1433)

* Temporary fix for qwen3 gguf tokenizer

* Typo fix

* Add tool callback support (EricLBuehler#1427)

* Add tool callback support

* Fixes

* Support named tool callbacks

* Update examples

* Update docs

* Clippy

* Centralize crate dependencies (EricLBuehler#1438)

* chore: centralize dependencies

* Format

* Fix bug in tokenizer created with gguf metadata (EricLBuehler#1440)

* Fix bug in tokenizer created with gguf metadata

* Clippy fix

* Update deps (EricLBuehler#1441)

* Small things

* Update deps

* Update deps

* Update breaking changes

* Doc fixes (EricLBuehler#1442)

* Mention uqff_maker

* Downgrade rustyline 16.0.0 -> 15.0.0 (EricLBuehler#1444)

* Add max_completion_tokens alias for server (EricLBuehler#1451)
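
Aliasing is a one-attribute change with serde. A hedged sketch of the shape of the fix (the struct and field names are illustrative, not the actual mistralrs-server types):

```rust
use serde::Deserialize;

// Accept OpenAI's newer `max_completion_tokens` field as an alias for the
// existing `max_tokens` parameter when deserializing requests.
#[derive(Deserialize)]
struct CompletionRequest {
    #[serde(alias = "max_completion_tokens")]
    max_tokens: Option<usize>,
}
```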

* Audio input support (Phi 4 multimodal) (EricLBuehler#1448)

* Deps

* Add conformer

* Nemo loading

* Position embeds

* Load t5 attn bias

* Attn and feed forward

* Add conv module and glu pointwise

* Implement relative attn bias

* Add the forward methods

* Add encoder embedding

* Fix oproj

* Some loading

* Conformer loads!

* Fully loading speech stack

* Merger

* Don't need that

* First pass at audio processing

* Read samples

* Optional

* Small loading fix

* Runs but not correct yet

* Improved audio processing?

* Works with this

* Fix t5 attn bias

* It works!

* Comment

* Use some other crates

* Clippy

* Allow bf16 on metal

* Add prefix_audio

* Remove unused

* Typo

* User specified

* Add audio url parsing

* AudioProjectionMode -> InputMode

* Audio prefix caching

* Fix bug in audio prefix caching

* Support both at the same time!

* Tweak logging

* Support stereo

* Add mistralrs-audio

* Support batching

* Add server and rust api example

* Add python api

* Fix add_multimodal_message

* Fix unfold for conformer

* Streaming example

* Add web chat support

* Add modalities registry

* Fix offline cache issue for gguf models (EricLBuehler#1452)

* Add MCP server endpoints (EricLBuehler#1453)

* feat(server): add MCP server support

* Add mcp docs

* Add handle_list_tools_request

* Better launch, tool handling

* Tmp state

* Ok works

* Handle modalities

* Update docs

* Add ping

* Tweak temperature bounds, args

* MCP documentation pass (EricLBuehler#1455)

* Fix table

* Update mcp docs

* Improve readme header

* Improve readme header

* Integrate an MCP client (EricLBuehler#1456)

* Add builtin mcp client

* Use async loader

* Add headers

* Handle sse

* More flexible search request

* Add tool callbacks with tools, for mcp

* Add bearer token support

* Add websocket support

* Update docs

* Add python api

* Clippy

* Add http api, docs

* Tests pass

* Make these configs actually work

* Add docs

* Make mistralrs-mcp

* Refactor examples

* Update examples

* Add defaults

* Add defaults

* Add defaults

* Update docs

* Improved docs

* Add -y to npx usages

* Even better examples

* Update generate_wheels

* Update generate_wheels

* Update generate_wheels

* Fix Dockerfile.cuda-all

* Improve automatic tool call (EricLBuehler#1460)

* Improved auto tool call

* Add logging

* chore: `Dockerfile.cuda-all` configurable threads (EricLBuehler#1458)

* chore: `Dockerfile.cuda-all` - Merge `RUN` for `apt-get install` (EricLBuehler#1459)

* Add fallback definition for isnan (EricLBuehler#1463)

* chore: `Dockerfile` - Drop runtime rayon thread ENV (EricLBuehler#1465)

* chore: Dockerfile - Remove rayon threads env

* chore: Dockerfile - Improve formatting for `apt-get`

* Remove duplicate calls for api_dir_list (EricLBuehler#1474)

* Remove duplicate calls for api_dir_list

* Support local cache for api_dir_list

* Fix home folder for metal

* Capitalized

* Fix transient pyo3 dep (EricLBuehler#1478)

Co-authored-by: Eric Buehler <[email protected]>

* Fix objc dep with non macos (EricLBuehler#1480)

* Fix phi 3/4 + nccl issue (EricLBuehler#1481)

* Fix log

* Fix n kv heads

* Fix phi3.5 moe (EricLBuehler#1482)

* Fix phi3.5 moe accum device

* Fix again

* Fix again

* Support GLM4 model! (EricLBuehler#1437)

* Support GLM4 model

* Mention GLM4 model in ReadMe

* glm4 type hint

* Typo fix

* Fix unsupported chat_template function

* Clippy fix

* Refactor distributed backend (EricLBuehler#1484)

* Refactor distributed backend, check power of 2

* Fix compilation

* Cap metal paged attn kv allocation (EricLBuehler#1485)

* Better paged attn metal cap (EricLBuehler#1486)

* Better paged attn metal cap

* Small fix

* Comment

* Small fix

* Refactor

* Server core: consolidate and unify route handlers and API surface (EricLBuehler#1423)

* Start working on consolidating completion and chat_completion underlying implementations

* Move response channel to util mod for now (since it's used with streaming and non streaming)

* More work on consolidating completions and chat completions

* More WIP consolidation of server core handlers

* More WIP consolidation of server core handlers

* More WIP consolidation of server core handlers

* Update docs and restrict completion core visibility

* CodeRabbit feedback: remove logprobs warn from route handler since parse request also checks this

* Use consistent var name for completions mod

* Make route handler modules' public API consistent (same fn names, etc.) and provide proxy fns that wrap core fns so the core mod doesn't have to be pub
Make lib.rs example compile checked and update example

* Code formatting

* Typo

* Sync fork

* Sync fork

* Docs example fix

* Support qwen3 gguf (EricLBuehler#1488)

* Add qwen3 gguf

* Template fixup

* Make bos/eos token IDs optional (EricLBuehler#1493)

* Remove python deps from CUDA dockerfiles (EricLBuehler#1487)

* Handle noncontiguous v in naive_sdpa (EricLBuehler#1499)

Co-authored-by: Eric Buehler <[email protected]>

* Server Core: refactor Paged Attention configuration (EricLBuehler#1500)

* Use StorageModePrivate for Metal PA kv cache (EricLBuehler#1506)

* Fix OpenAI stream: emit field in tool-call deltas for schema compliance (EricLBuehler#1507)

* FP8 KV-cache quantization for PagedAttention (EricLBuehler#1400)

* Add most of paged attn kv quant

* It builds a bit

* All the functionality at least

* Small fix

* Add a scale

* Fix bf16 usage

* Make k_v_scale optional

* Collector

* Tweak collection

* Refactor

* Add to apis

* Add cuda impl

* Fix compilation

* Fixes

* Handle ENABLE_FP8

* Format

* Tweak

* Fix scaled_convert usage

* Fix cache_t size

* Fixed scale collection

* Actual fix

* Fix fp8 for CC<8
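
On the "Add a scale" / "Fixed scale collection" commits: FP8 e4m3 has a tiny dynamic range (max finite value 448), so the cache stores a per-tensor scale derived from the observed max magnitude. A hedged sketch of the scale computation; the names are illustrative and the actual bit-level e4m3 encoding is elided:

```rust
const FP8_E4M3_MAX: f32 = 448.0;

// Derive a per-tensor scale so the largest observed magnitude maps to
// the top of e4m3's representable range.
fn compute_kv_scale(values: &[f32]) -> f32 {
    let max_abs = values.iter().fold(0.0_f32, |m, v| m.max(v.abs()));
    (max_abs / FP8_E4M3_MAX).max(f32::EPSILON)
}

// Scale and clamp a value ahead of the (elided) bit-level e4m3 cast.
fn prepare_for_fp8(v: f32, scale: f32) -> f32 {
    (v / scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX)
}
```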

* Fix the usual String != &str bit (EricLBuehler#1483)

Co-authored-by: RageLtMan <rageltman [at] sempervictus>

* Handle USE_FP8 for cuda

* Fix cuda warn

* Add readme

* Saturating sub in sequence state

---------

Co-authored-by: Eric Buehler <[email protected]>
Co-authored-by: RageLtMan <[email protected]>
Co-authored-by: Brennan Kinney <[email protected]>
Co-authored-by: Guoqing Bao <[email protected]>
Co-authored-by: Matthew Haynes <[email protected]>

* Validate model name in OpenAI API (EricLBuehler#1509)

* Validate model name in openai api

* Add docs, allow 'ignore'

* Updated examples for EricLBuehler#1509

* Fix mcp import in doc string (EricLBuehler#1510)

* Add multi-model support! (EricLBuehler#1512)

* Refactor MistralRs

* Working multi-model!

* Add mutli-model docs initially

* Update mistralrs-pyo3, mistralrs-bench, mistralrs

* Update apis for consistency

* API tweaks

* Logging tweaks

* Add examples, tweak cli

* Clearer pipeline id

* Fix config key semantics

* Format and clippy

* Tweak logging, fix example

* Clippy refactor

* Update examples

* Remove unused multi model docs

* Replace 'ignore' with 'default'

* Update docs
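
On the multi-model routing above ("Replace 'ignore' with 'default'"): a minimal sketch of dispatching requests to one of several loaded pipelines by the OpenAI `model` field, with "default" falling back to a configured pipeline. Types and names are illustrative:

```rust
use std::collections::HashMap;

// Route a request to a named pipeline; "default" (or no model name at
// all) selects the configured default pipeline.
struct ModelRouter<P> {
    pipelines: HashMap<String, P>,
    default_id: String,
}

impl<P> ModelRouter<P> {
    fn get(&self, model: Option<&str>) -> Option<&P> {
        match model {
            Some(id) if id != "default" => self.pipelines.get(id),
            _ => self.pipelines.get(&self.default_id),
        }
    }
}
```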

* Add stars label to readme (EricLBuehler#1513)

* Add CLAUDE.md

* Handle base_model.model case in lora (EricLBuehler#1514)

* Add thread_local! for engine-specific const/static (EricLBuehler#1517)
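
A minimal sketch of the `thread_local!` pattern for state that must not be shared across engines running on different threads; the flag name is illustrative:

```rust
use std::cell::Cell;

// Each engine thread gets its own copy of this flag, so one engine's
// signal cannot leak into another engine in the same process.
thread_local! {
    static ENGINE_TERMINATE: Cell<bool> = const { Cell::new(false) };
}

fn request_terminate() {
    ENGINE_TERMINATE.with(|t| t.set(true));
}

fn should_terminate() -> bool {
    ENGINE_TERMINATE.with(|t| t.get())
}
```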

* Fix MCP doc test (EricLBuehler#1511)

* Allow disabling metal precompilation (EricLBuehler#1518)

* Allow disabling metal precompilation

* Simple preprocessor

* Simple docs

---------

Co-authored-by: Eric Buehler <[email protected]>

* Rust 1.88 clippy (EricLBuehler#1522)

* Rust 1.88 clippy

* Format

* Fix cuda warnings (EricLBuehler#1526)

* Avoid panic decoding tokens on error (EricLBuehler#1527)

* Split Marlin and Paged Attention kernels for faster build (EricLBuehler#1525)

* Split Marlin and Paged Attention kernels for faster build

* Typo fix

* chore: update llguidance (EricLBuehler#1535)

* chore: update llguidance

* chore: remove unused import

* Add the SmolLM3 model! (EricLBuehler#1501)

* Add model

* Update loader

* Fix llama config usage

* Docs

* Fix config no_rope_layers

* Fix tie_word_embeddings default

* Add chat template

* Embed the chat templates

* Fix embedding template

* enable_thinking default true

* Update examples

* XML tools for smollm3

* Add smollm3 docs

* Fix openai examples

* Clippy

---------

Co-authored-by: Eric Buehler <[email protected]>

* Add full Gemma 3n support! (EricLBuehler#1519)

* Add initial

* Loading for text model

* Add ple embeddings

* Add altup, laurel block

* Update rmsnorm

* Add mlp

* Update attn norm application

* Currently no kv shared

* Wire it up

* It runs

* Fix bf16

* Fix scaled embd

* Fixes for mean

* tmp

* Attn confirmed

* Fix target_magnitude

* Add shared kv

* Ok it works

* Remove npy

* Fix streaming

* Remove warnings

* Remove paged attn

* Refactor rope

* Add immediate isq

* Add vision & mproj

* Update image processor

* Vision merge runs, not correct

* Remove

* Add mobilenet v5

* Add multimodal vision embedding

* Fix load

* runs

* Fix gamma

* Works but just not vision tower

* It works!!

* Tweak

* Fix warnings

* Move vision tower

* Fix warn

* Update cache manager things

* Refactor

* Add audio model, it loads

* Add audio processing

* It runs at least

* tmp

* A bit better

* Audio works!!!!

* Fused attn in vision

* Clippy

* Update audio runner

* Optimized audio model

* Remove unused things

* Fix inputs processor bug

* Remove comments

* Clippy

* Small optimizations

* Format

* Correctly register modalities

* Add docs

* Update readme

* Runs there

* Fixed padding from Blaizzy/mlx-vlm#410

* Add better checks

* Fix sdpa n_kv_groups

* Vision encoder works!

* Rotate image

* Clippy

* Fix cuda loading

* Updated device mapper

* Fix overflow

* Fix dtype errors

* Refactor image/audio embeddings

* Fix metal

* Fix dtype mismatch

* Audio processing fixes

* Audio processing fixes

* Works

* Audio is good

* Fix boi/eoi too

* Embed the chat templates

* Better embedding accuracy in non f32

* More f32

* Support bf16 on metal

* Add more ISQ

* Fixed device map

* Clippy

* Gemma3n no paged attn

* Fix saturating sub

* Faster rmsnorm

* Use sdpa for vision model

* Fix ple bug

* Fix name

* Fix multiaudio

* Add matformer config loading

* Add docs

* Add support for matformer in auto device mapper

* Update docs

* Typos

* Tweak

* Tweak

* Fix multidevice

* Fix gemma3n text model auto device map

* Fix dims3

* Fix auto device map vision

* Non-metal keeps PLE on cpu

* Complete merge

* Vision dtype f16 -> f32

* Fix metal nm device

* Fix uqff

* Typos

* Reference uqff

* Fix tests

* Fix sequence length check (EricLBuehler#1546)

* update candle version (EricLBuehler#1545)

Co-authored-by: AlpineVibrations <[email protected]>

* add ios target to metal deps (EricLBuehler#1548)

---------

Signed-off-by: dependabot[bot] <[email protected]>
Co-authored-by: Eric Buehler <[email protected]>
Co-authored-by: Eric Buehler <[email protected]>
Co-authored-by: edwko <[email protected]>
Co-authored-by: Copilot <[email protected]>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Guoqing Bao <[email protected]>
Co-authored-by: Michał Moskal <[email protected]>
Co-authored-by: Chen Mulong <[email protected]>
Co-authored-by: Steph Wolski <[email protected]>
Co-authored-by: omahs <[email protected]>
Co-authored-by: Viktor Szépe <[email protected]>
Co-authored-by: Matthew Haynes <[email protected]>
Co-authored-by: RageLtMan <[email protected]>
Co-authored-by: Brennan Kinney <[email protected]>
Co-authored-by: Eric Buehler <[email protected]>
Co-authored-by: Sbargaoui <[email protected]>
Co-authored-by: Gaétan Lepage <[email protected]>
Co-authored-by: Ammar Elsabe <[email protected]>
Co-authored-by: luke <[email protected]>
Co-authored-by: AlpineVibrations <[email protected]>
Co-authored-by: Michael Tissen <[email protected]>