[Bugfix]: Fix Possible Output Corruption in Cascade Attention Caused by Non-Contiguous LSE Tensor #22003
Conversation
Code Review
This pull request effectively resolves a critical output corruption bug in Cascade Attention by ensuring Log-Sum-Exp (LSE) tensors are contiguous. The core logic change is sound. The new test script for reproducing the bug is a great addition, and I've provided some feedback to improve its robustness and prevent potential crashes or hangs.
I see a similar corrupted-output issue when I run Qwen 2.5 14B and Qwen 3 14B with tensor parallelism = 1 and send dozens of concurrent requests. I hit it with certain finetunes and quants, but I strongly suspect it extends to the original models too. The corruption is subtler than what's in the screenshot but easy to catch in evals. The problem disappears when I send only one request at a time. As I discovered today, it also disappears when I disable cascade attention.
Applying the patch locally didn't solve the issue; only disabling cascade attention does. Possibly I have the same issue as #22103, since I'm also running on an A100.
I'm glad to see someone paying attention to this issue. I previously ran many experiments and found some errors, thinking it was a niche problem that no one cared about :). Specifically, since the vLLM engine decides whether to enable Cascade Attention at each inference scheduling step, sending a single request does not activate Cascade Attention. In my earlier investigation, I found that temporarily switching the backend from FA2 to FA3 resolved the issue, which suggests it is triggered by the FA2 operator, though I'm not sure whether that workaround is still effective in the current version. I'm trying to solve this problem further and hope to achieve good results.
#17652 is another related issue.
Maybe try vllm-project/flash-attention#87
This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!
Purpose
Fix an output corruption issue when using Cascade Attention. The `flash_attn_varlen_func` operator with flash-attn2 may return a non-contiguous LSE tensor (especially `suffix_lse`) in some cases. Passing a non-contiguous LSE tensor to `merge_attn_states` can cause incorrect outputs. This PR fixes the issue by making sure the LSE tensor is contiguous before further processing.
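To illustrate the failure mode, here is a small, self-contained PyTorch sketch (shapes and names are assumptions for illustration, not the actual vLLM code or this PR's diff): a non-contiguous LSE view has strides that differ from the row-major layout a downstream consumer may assume, so reading its raw buffer row-major yields wrong values, while `.contiguous()` restores the expected layout.

```python
import torch

num_heads, num_tokens = 4, 8

# Simulate an LSE tensor that comes back as a non-contiguous view (e.g. a
# transpose), analogous to what can happen with suffix_lse in cascade attention.
lse = torch.arange(num_heads * num_tokens, dtype=torch.float32)
lse = lse.reshape(num_tokens, num_heads).t()  # logical shape (num_heads, num_tokens)

print(lse.is_contiguous())  # False
print(lse.stride())         # (1, 4) instead of the row-major (8, 1)

# A consumer that walks the raw buffer row-major (what a stride-unaware kernel
# effectively does) reads different values than the logical tensor holds:
row_major = torch.as_strided(lse, lse.shape, (lse.shape[1], 1))
print(torch.equal(row_major, lse))  # False -> silent corruption

# The fix pattern: make a contiguous copy before handing the tensor downstream.
lse_fixed = lse.contiguous()
row_major_fixed = torch.as_strided(lse_fixed, lse_fixed.shape, (lse_fixed.shape[1], 1))
print(torch.equal(row_major_fixed, lse_fixed))  # True
```

This mirrors the approach of the PR, which ensures the LSE tensors are contiguous before they are passed to `merge_attn_states`.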
Test Plan
This issue can be consistently reproduced by serving the Qwen2.5-32B-Instruct model (or any large model ≥32B) with tensor parallelism (TP) = 8. The problem appears to happen more frequently as TP increases, since the `flash_attn_varlen_func` operator in flash-attn2 is more likely to return non-contiguous LSE tensors in these cases.
To further elaborate, first launch the model using `vllm serve`, for example:
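A launch command along these lines should work (the exact model path, port, and flags are assumptions based on the test plan above; adjust to your environment):

```bash
# Illustrative launch command: Qwen2.5-32B-Instruct with tensor parallelism 8.
vllm serve Qwen/Qwen2.5-32B-Instruct \
    --tensor-parallel-size 8 \
    --port 8000
```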
Next, use the following script to simulate concurrent requests to the server. Save the responses and analyze the outputs.
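The original script is not reproduced here; the sketch below (endpoint, model name, prompt, and sampling parameters are assumptions) sends 128 identical chat-completion requests concurrently, saves the responses, and reports how many distinct outputs came back:

```python
# Illustrative reproduction client (not the original script from this PR).
# Assumptions: an OpenAI-compatible vLLM server on localhost:8000 serving
# Qwen/Qwen2.5-32B-Instruct, with a long shared prompt sent concurrently
# so the scheduler is likely to enable cascade attention.
import json
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8000/v1/chat/completions"
MODEL = "Qwen/Qwen2.5-32B-Instruct"
NUM_REQUESTS = 128

# A long shared prefix across all requests mimics a shared system prompt.
SYSTEM_PROMPT = "You are a helpful assistant. " * 200
QUESTION = "Explain the difference between processes and threads in detail."


def send_request(idx: int) -> str:
    payload = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": QUESTION},
        ],
        "max_tokens": 256,
        "temperature": 0.0,
    }
    resp = requests.post(URL, json=payload, timeout=600)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def main() -> None:
    with ThreadPoolExecutor(max_workers=NUM_REQUESTS) as pool:
        outputs = list(pool.map(send_request, range(NUM_REQUESTS)))

    with open("responses.jsonl", "w", encoding="utf-8") as f:
        for idx, text in enumerate(outputs):
            f.write(json.dumps({"request": idx, "output": text}) + "\n")

    # All requests are identical and use greedy sampling, so the outputs
    # should match; divergent or garbled entries indicate corruption.
    print(f"unique outputs: {len(set(outputs))} / {NUM_REQUESTS}")


if __name__ == "__main__":
    main()
```

With identical requests and greedy sampling, every response should be the same; divergent or garbled entries in `responses.jsonl` point to the corruption this PR addresses.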
Without the fix, you will observe a large amount of garbled or incoherent output in the results.
Note: This script sends 128 identical requests concurrently, which is an extreme example. However, similar issues can also occur in practical scenarios where each request shares a long, identical system prompt or few-shot template, but the user’s question that follows is different in each request.
Test Result
Below are partial results of simulated requests before and after the fix.


Before the fix:
After the fix: