
Conversation

@griii commented Jul 31, 2025

Purpose

Fix an output corruption issue when using Cascade Attention. With flash-attn2, the flash_attn_varlen_func operator may in some cases return a non-contiguous LSE tensor (especially suffix_lse), and passing a non-contiguous LSE tensor to merge_attn_states can produce incorrect outputs. This PR fixes the issue by ensuring the LSE tensor is contiguous before further processing.
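
For illustration, here is a minimal, self-contained PyTorch sketch of the failure mode and the fix; it is not vLLM's actual kernel code, and the shapes and names are placeholders:

import torch

# flash-attn2 can hand back the LSE as a transposed view: the logical shape
# is [num_heads, num_tokens], but the storage is laid out the other way
# around, so the tensor is non-contiguous.
suffix_lse = torch.randn(1024, 8).transpose(0, 1)  # logical shape [8, 1024]
assert not suffix_lse.is_contiguous()

# A kernel that computes raw memory offsets from the logical shape (as
# merge_attn_states does) would read the wrong elements here. The fix is to
# materialize a contiguous copy before merging the prefix/suffix states.
suffix_lse = suffix_lse.contiguous()
assert suffix_lse.is_contiguous()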

Test Plan

This issue can be consistently reproduced by serving Qwen2.5-32B-Instruct (or any model of roughly 32B parameters or larger) with tensor parallelism (TP) = 8. The problem appears more frequently as TP increases, since flash_attn_varlen_func in flash-attn2 is more likely to return non-contiguous LSE tensors in those configurations.

To reproduce it, first launch the model with vllm serve, for example:

vllm serve Qwen/Qwen2.5-32B-Instruct -tp 8

Next, use the following script to simulate concurrent requests to the server. Save the responses and analyze the outputs.

# SPDX-License-Identifier: Apache-2.0
# SPDX-FileCopyrightText: Copyright contributors to the vLLM project
import asyncio
import json
import time

import httpx

VLLM_URL = "http://localhost:8000/v1/chat/completions"
CONCURRENCY = 128
REQUEST_ROUNDS = 1

payload = {
    "messages": [{
        "role":
        "user",
        "content":
        "Question: Angelo and Melanie want to plan how many hours over the next week they should study together for their test next week. They have 2 chapters of their textbook to study and 4 worksheets to memorize. They figure out that they should dedicate 3 hours to each chapter of their textbook and 1.5 hours for each worksheet. If they plan to study no more than 4 hours each day, how many days should they plan to study total over the next week if they take a 10-minute break every hour, include 3 10-minute snack breaks each day, and 30 minutes for lunch each day?\nLet's think step by step\nAngelo and Melanie think they should dedicate 3 hours to each of the 2 chapters, 3 hours x 2 chapters = 6 hours total.\nFor the worksheets they plan to dedicate 1.5 hours for each worksheet, 1.5 hours x 4 worksheets = 6 hours total.\nAngelo and Melanie need to start with planning 12 hours to study, at 4 hours a day, 12 / 4 = 3 days.\nHowever, they need to include time for breaks and lunch. Every hour they want to include a 10-minute break, so 12 total hours x 10 minutes = 120 extra minutes for breaks.\nThey also want to include 3 10-minute snack breaks, 3 x 10 minutes = 30 minutes.\nAnd they want to include 30 minutes for lunch each day, so 120 minutes for breaks + 30 minutes for snack breaks + 30 minutes for lunch = 180 minutes, or 180 / 60 minutes per hour = 3 extra hours.\nSo Angelo and Melanie want to plan 12 hours to study + 3 hours of breaks = 15 hours total.\nThey want to study no more than 4 hours each day, 15 hours / 4 hours each day = 3.75\nThey will need to plan to study 4 days to allow for all the time they need.\nThe answer is 4\n\nQuestion: Mark's basketball team scores 25 2 pointers, 8 3 pointers and 10 free throws.  Their opponents score double the 2 pointers but half the 3 pointers and free throws.  What's the total number of points scored by both teams added together?\nLet's think step by step\nMark's team scores 25 2 pointers, meaning they scored 25*2= 50 points in 2 pointers.\nHis team also scores 6 3 pointers, meaning they scored 8*3= 24 points in 3 pointers\nThey scored 10 free throws, and free throws count as one point so they scored 10*1=10 points in free throws.\nAll together his team scored 50+24+10= 84 points\nMark's opponents scored double his team's number of 2 pointers, meaning they scored 50*2=100 points in 2 pointers.\nHis opponents scored half his team's number of 3 pointers, meaning they scored 24/2= 12 points in 3 pointers.\nThey also scored half Mark's team's points in free throws, meaning they scored 10/2=5 points in free throws.\nAll together Mark's opponents scored 100+12+5=117 points\nThe total score for the game is both team's scores added together, so it is 84+117=201 points\nThe answer is 201\n\nQuestion: Bella has two times as many marbles as frisbees. She also has 20 more frisbees than deck cards. 
If she buys 2/5 times more of each item, what would be the total number of the items she will have if she currently has 60 marbles?\nLet's think step by step\nWhen Bella buys 2/5 times more marbles, she'll have increased the number of marbles by 2/5*60 = 24\nThe total number of marbles she'll have is 60+24 = 84\nIf Bella currently has 60 marbles, and she has two times as many marbles as frisbees, she has 60/2 = 30 frisbees.\nIf Bella buys 2/5 times more frisbees, she'll have 2/5*30 = 12 more frisbees.\nThe total number of frisbees she'll have will increase to 30+12 = 42\nBella also has 20 more frisbees than deck cards, meaning she has 30-20 = 10 deck cards\nIf she buys 2/5 times more deck cards, she'll have 2/5*10 = 4 more deck cards.\nThe total number of deck cards she'll have is 10+4 = 14\nTogether, Bella will have a total of 14+42+84 = 140 items\nThe answer is 140\n\nQuestion: A group of 4 fruit baskets contains 9 apples, 15 oranges, and 14 bananas in the first three baskets and 2 less of each fruit in the fourth basket. How many fruits are there?\nLet's think step by step\nFor the first three baskets, the number of apples and oranges in one basket is 9+15=24\nIn total, together with bananas, the number of fruits in one basket is 24+14=38 for the first three baskets.\nSince there are three baskets each having 38 fruits, there are 3*38=114 fruits in the first three baskets.\nThe number of apples in the fourth basket is 9-2=7\nThere are also 15-2=13 oranges in the fourth basket\nThe combined number of oranges and apples in the fourth basket is 13+7=20\nThe fourth basket also contains 14-2=12 bananas.\nIn total, the fourth basket has 20+12=32 fruits.\nThe four baskets together have 32+114=146 fruits.\nThe answer is 146\n\nQuestion: There are 220 castles in Scotland.  40 percent of them are ruins, and half of the ruined castles are unmanned.  How many unmanned ruined castles are there in Scotland?\nLet's think step by step\nAnswer:"
    }],
    "max_tokens":
    1024,
    "temperature":
    0.0,
    "stream":
    False
}


async def call_vllm(client, idx):
    try:
        response = await client.post(VLLM_URL, json=payload, timeout=600.0)
        data = response.json()
        choice = next(iter(data.get("choices", [])), {})
        res = choice.get("message", {}).get("content", "")
        print(f"Req {idx:03} result: {res}")
        return res
    except Exception as e:
        print(f"Req {idx:03} error: {e}")
        return None


async def main():
    limits = httpx.Limits(max_connections=CONCURRENCY)
    async with httpx.AsyncClient(limits=limits) as client:
        tasks = []
        for i in range(CONCURRENCY * REQUEST_ROUNDS):
            tasks.append(call_vllm(client, i))
        results = await asyncio.gather(*tasks)
    valid_results = [r for r in results if r is not None]

    filename = f"result-{time.time()}-{CONCURRENCY}.json"
    with open(filename, "w", encoding="utf-8") as f:
        json.dump(valid_results, f, ensure_ascii=False, indent=2)
    print(f"Wrote {len(valid_results)} results to {filename}")


if __name__ == "__main__":
    asyncio.run(main())

Without the fix, you will observe a large amount of garbled or incoherent output in the results.
Note: This script sends 128 identical requests concurrently, which is an extreme example. However, similar issues can also occur in practical scenarios where each request shares a long, identical system prompt or few-shot template, but the user’s question that follows is different in each request.
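
Since temperature is 0.0 and all requests here are identical, counting distinct responses in the saved file gives a quick first-pass analysis: before the fix many responses are garbled, while after it they should collapse to one or a few variants (modulo minor numerical batching effects). A minimal check, where the filename is a placeholder for whatever the script printed:

import json

with open("result-<timestamp>-128.json", encoding="utf-8") as f:
    results = json.load(f)

# Expect one (or very few) distinct responses after the fix; a large count
# indicates corrupted or diverging outputs.
print(f"{len(results)} responses, {len(set(results))} distinct")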

Test Result

Below are partial results of simulated requests before and after the fix.
Before the fix: [screenshot showing garbled outputs omitted]
After the fix: [screenshot showing normal outputs omitted]

@gemini-code-assist bot left a comment


Code Review

This pull request effectively resolves a critical output corruption bug in Cascade Attention by ensuring Log-Sum-Exp (LSE) tensors are contiguous. The core logic change is sound. The new test script for reproducing the bug is a great addition, and I've provided some feedback to improve its robustness and prevent potential crashes or hangs.

@griii griii force-pushed the main branch 2 times, most recently from 7eec070 to f7e4b26 Compare July 31, 2025 08:57
@github-actions bot

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@alexlyzhov

I have a similar corrupted-output issue when I run Qwen2.5 14B and Qwen3 14B with tensor parallelism = 1 and send dozens of concurrent requests. I hit it with certain finetunes and quants, but I strongly suspect it extends to the original models too. The corruption is subtler than what's in the screenshot, but it is easy to catch in evals.

This problem disappears when I only send 1 request at a time. As I discovered today, it also disappears when I disable cascade attention.
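
If you want to toggle it without code changes: newer vLLM builds expose an engine flag for exactly this (I believe --disable-cascade-attn, but check vllm serve --help on your version), e.g.:

vllm serve Qwen/Qwen2.5-14B-Instruct --disable-cascade-attn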

@alexlyzhov

Applying the patch locally didn't solve the issue; only disabling cascade attention does. Possibly I have the same issue as in #22103; I'm also running on an A100.

@griii (Author) commented Sep 5, 2025

> I have a similar corrupted-output issue when I run Qwen2.5 14B and Qwen3 14B with tensor parallelism = 1 and send dozens of concurrent requests. I hit it with certain finetunes and quants, but I strongly suspect it extends to the original models too. The corruption is subtler than what's in the screenshot, but it is easy to catch in evals.
>
> This problem disappears when I only send 1 request at a time. As I discovered today, it also disappears when I disable cascade attention.

I'm glad to see someone paying attention to this issue. I previously ran many experiments and uncovered several errors, but assumed it was a niche problem that no one cared about :).

Specifically, the vLLM engine decides at each scheduling step whether to enable Cascade Attention, and it only kicks in when multiple requests in a batch share a common prefix, so sending a single request at a time never activates it.

In my earlier investigation, I found that switching the backend from FA2 to FA3 worked around the issue, so the bug seems to be triggered by the FA2 operator. I'm not sure whether that workaround is still effective in the current version.
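
For anyone who wants to try that workaround: vLLM reads the VLLM_FLASH_ATTN_VERSION environment variable to force a FlashAttention version (FA3 requires a Hopper-class GPU, and the variable should be double-checked against your vLLM version), e.g.:

VLLM_FLASH_ATTN_VERSION=3 vllm serve Qwen/Qwen2.5-32B-Instruct -tp 8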

I am trying to solve this problem further and hope to achieve good results.

@alexlyzhov

#17652 is another related issue.

@griii (Author) commented Sep 7, 2025

> #17652 is another related issue.

Maybe try vllm-project/flash-attention#87

@github-actions bot commented Dec 8, 2025

This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!

@github-actions bot added the stale (Over 90 days of inactivity) label Dec 8, 2025