feat: Enable TRTLLM-Gen Skip-Softmax attention for MLA #2547

yzh119 merged 5 commits into flashinfer-ai:main
Conversation
📝 Walkthrough

Adds optional skip-softmax controls (skips_softmax and skip_softmax_threshold_scale_factor) for the TRTLLM-Gen attention paths, propagated from the Python prefill/MLA APIs through the TRTLLM wrappers to the FMHA launcher and runner.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Test as "Test"
participant PythonAPI as "Python API\n(prefill / MLA)"
participant TRTWrapper as "TRTLLM Wrapper\n(paged / ragged)"
participant Launcher as "FMHA Launcher\n(csrc/..._launcher.cu)"
participant Runner as "FMHA Runner / GPU"
Test->>PythonAPI: call with skips_softmax / skip_softmax_threshold_scale_factor
PythonAPI->>TRTWrapper: forward skip_softmax_threshold_scale_factor
TRTWrapper->>Launcher: call trtllm_ragged_attention(..., skip_softmax_threshold_scale_factor, skips_softmax, ...)
Launcher->>Runner: set runner_params.mSkipsSoftmaxWhenPossible / mSkipSoftmaxThresholdScaleFactor
Runner->>Runner: select kernel path (skip softmax or full)
Runner-->>Launcher: results
Launcher-->>TRTWrapper: results
TRTWrapper-->>PythonAPI: results
PythonAPI-->>Test: return outputs
```
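To make the flow above concrete, here is a small Python sketch of how the two knobs might be mapped onto the runner parameters named in the diagram and used to pick a kernel path. The `RunnerParams` dataclass and the `configure_runner` helper are hypothetical; only the field names `mSkipsSoftmaxWhenPossible` and `mSkipSoftmaxThresholdScaleFactor` come from the diagram.

```python
from dataclasses import dataclass


@dataclass
class RunnerParams:
    # Hypothetical Python mirror of the C++ runner params set by the launcher;
    # the field names follow the sequence diagram above.
    mSkipsSoftmaxWhenPossible: bool = False
    mSkipSoftmaxThresholdScaleFactor: float = 0.0


def configure_runner(skips_softmax: bool, threshold_scale_factor: float) -> RunnerParams:
    """Illustrative mapping from the Python-level knobs to runner params."""
    params = RunnerParams()
    # Per the PR description below, a zero threshold short-circuits to the
    # normal attention kernels, so skip-softmax is only enabled when the flag
    # is set and the threshold is strictly positive.
    if skips_softmax and threshold_scale_factor > 0.0:
        params.mSkipsSoftmaxWhenPossible = True
        params.mSkipSoftmaxThresholdScaleFactor = threshold_scale_factor
    return params
```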
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Summary of Changes

Hello @DomBrown, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the performance capabilities of the attention mechanisms by integrating and expanding support for TRTLLM-Gen's skip-softmax feature. By introducing a configurable threshold, the system can now intelligently bypass certain softmax computations, leading to more efficient processing for both MLA and DeepSeek attention models. This change provides users with greater control over performance-accuracy trade-offs in their attention operations.
Activity
⚠️ Outside diff range comments (1)
flashinfer/mla.py (1)
521-698: ⚠️ Potential issue | 🟡 Minor: Validate `skip_softmax_threshold_scale_factor` for non-TRTLLM-GEN backends.
When `backend` resolves to `"xqa"`, the new parameter is silently ignored. Consider rejecting non-None values unless `backend == "trtllm-gen"` to avoid misleading callers.

🛡️ Proposed guard
```diff
 if backend == "xqa":
+    if skip_softmax_threshold_scale_factor is not None:
+        raise ValueError(
+            "skip_softmax_threshold_scale_factor is only supported for trtllm-gen backend"
+        )
     if (
         get_compute_capability(query.device)[0] != 12
         or query.dtype != torch.float8_e4m3fn
         or kv_cache.dtype != torch.float8_e4m3fn
     ):
```
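If adopted, this guard would make the xqa path fail fast with a clear error rather than silently ignoring a non-None `skip_softmax_threshold_scale_factor`, keeping the scope of the new knob explicit to callers.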
Code Review
This pull request introduces support for skip-softmax attention for MLA and DeepSeek paths by adding a new skip_softmax_threshold_scale_factor parameter. The changes are well-structured, propagating the new parameter through both the Python and C++ layers correctly. The tests have also been updated to validate the new functionality by checking that a zero threshold yields results consistent with the standard attention mechanism. My main feedback is to correct an invalid link in the docstrings that references a non-existent paper.
/bot run
[FAILED] Pipeline #43909043: 15/20 passed
📌 Description
This PR is a follow-up to #2477, expanding support to MLA.
It also modifies the runner slightly to 'short-circuit' to the normal attention kernels if the threshold is zero, to reduce overhead. Tests are updated to use a very tiny threshold instead, so we still get the same result as normal attention without triggering the fallback.
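As a rough sketch of what that test change implies (not the actual test code; `run_mla_attention` is a placeholder for the MLA entry point and the tolerances are illustrative):

```python
import torch


def check_tiny_threshold_matches_reference(run_mla_attention, query, kv_cache):
    # Reference path: no skip-softmax threshold passed, i.e. standard attention.
    ref = run_mla_attention(query, kv_cache)
    # A tiny but non-zero threshold avoids the zero-threshold short-circuit while
    # making skips effectively never fire, so the output should match the reference.
    out = run_mla_attention(
        query, kv_cache, skip_softmax_threshold_scale_factor=1e-6
    )
    torch.testing.assert_close(out, ref, rtol=1e-2, atol=1e-2)
```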
🔍 Related Issues
#2306
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- I have installed pre-commit by running pip install pre-commit (or used your preferred method).
- I have installed the hooks with pre-commit install.
- I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

🧪 Tests
- Tests have been added or updated as needed.
- All tests are passing (unittest, etc.).

Reviewer Notes
Summary by CodeRabbit
New Features
Documentation
Chores
Tests