Conversation

@zhiyuan1i (Contributor) commented Oct 28, 2025

Purpose

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces Kimi Delta Attention (KDA) into vLLM by adding new kernels and modifying existing ones. The changes are extensive and add a significant new feature. I have identified a couple of critical bugs related to incorrect tensor shapes and memory strides in the Triton kernels, which could lead to incorrect outputs. Additionally, there's a performance-related issue in the autotuning configuration of one of the kernels. Addressing these points will be crucial for the correctness and efficiency of the new implementation.

num_stages = 3
num_warps = 1
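
These two launch parameters are pinned to single values, and the review summary above flags the autotuning configuration as a performance concern. In Triton, such parameters are more commonly swept by an autotuner; a minimal sketch of that pattern follows (the config grid, tuning key, and kernel signature are illustrative, not the actual KDA kernel's search space):

import triton

# Sweep a small grid of launch parameters instead of pinning
# num_warps/num_stages to single values.
@triton.autotune(
    configs=[
        triton.Config({}, num_warps=w, num_stages=s)
        for w in (1, 2, 4)
        for s in (2, 3, 4)
    ],
    key=["T", "K"],  # re-tune whenever these problem sizes change
)
@triton.jit
def kda_fwd_kernel(q_ptr, k_ptr, v_ptr, o_ptr, T, K):
    # Kernel body elided; only the tuning wrapper is the point here.
    pass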

o = torch.empty_like(k)
@gemini-code-assist bot commented:

critical

The output tensor o is being allocated with the shape of the key tensor k (torch.empty_like(k)). However, the output of an attention operation should have the shape of the value tensor v. The shape of k is [B, T, H, K] while v is [B, T, HV, V], which can be different. This will lead to a shape mismatch and incorrect output. Please allocate o with the shape of v.

Suggested change
o = torch.empty_like(k)
o = torch.empty_like(v)

@zhiyuan1i (Contributor, Author) replied:

In KDA models, q, k, v, and o share the same shape, so it's safe to use empty_like(k).
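
Both statements are consistent: in the general case the attention output must follow v's shape, while KDA's layout makes k and v coincide. A small sketch of the shape logic in question (dimension values are made up for illustration):

import torch

B, T, H, K = 2, 16, 4, 64   # batch, sequence length, q/k heads, head dim
HV, V = 8, 128              # value heads and value head dim (general case)

# General attention: k and v can differ, so the output must follow v.
k = torch.empty(B, T, H, K)
v = torch.empty(B, T, HV, V)
o = torch.empty_like(v)     # safe regardless of k's shape
assert o.shape == (B, T, HV, V)

# KDA: q, k, v (and hence o) share one shape, so empty_like(k)
# allocates exactly what empty_like(v) would.
k_kda = torch.empty(B, T, H, K)
v_kda = torch.empty(B, T, H, K)
assert torch.empty_like(k_kda).shape == torch.empty_like(v_kda).shape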

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

@youkaichao (Member) left a comment

looking forward to the new model 👍

@youkaichao youkaichao merged commit e88bdd6 into vllm-project:main Oct 28, 2025
5 checks passed
bhagyashrigai pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Oct 29, 2025
ilmarkov pushed a commit to neuralmagic/vllm that referenced this pull request Nov 7, 2025
ZhengHongming888 pushed a commit to ZhengHongming888/vllm that referenced this pull request Nov 8, 2025
rtourgeman pushed a commit to rtourgeman/vllm that referenced this pull request Nov 10, 2025
wangxiyuan pushed a commit to vllm-project/vllm-ascend that referenced this pull request Nov 12, 2025
### What this PR does / why we need it?
Adapt the vllm-ascend main branch to vLLM releases/v0.11.1.

Fix `forward context not set` in test_vlm.py, caused by:
vllm-project/vllm#23207

Fix the failed import of `cdiv round`, caused by:
vllm-project/vllm#27188

Fix the failed import of `init_cached_hf_modules`, caused by:
vllm-project/vllm#27567

Adapt the Triton kernel `fused_recurrent_gated_delta_rule_fwd_kernel`, required by:
vllm-project/vllm#27654
- Remove the now-unused code in sigmoid_gating.py: `class FusedRecurrentFunction`,
  `fused_recurrent_gated_delta_rule`, and `fused_recurrent_gated_delta_rule_fwd`

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
CI 


- vLLM version: v0.11.0
- vLLM main:
vllm-project/vllm@83f478b

Signed-off-by: 22dimensions <[email protected]>
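
The kernel adaptation above means downstream code can no longer import the removed symbols from sigmoid_gating.py. One common way to bridge two vLLM versions is an import guard; a minimal sketch, where the module path and fallback flag are assumptions rather than the actual vllm-ascend change:

# Hypothetical compatibility shim for the removal described above; the
# module path is an assumption, not the real vllm-ascend patch.
try:
    # Older vLLM: the standalone fused recurrent entry point still exists.
    from vllm.model_executor.layers.fla.ops import (
        fused_recurrent_gated_delta_rule,
    )
    HAS_STANDALONE_RECURRENT = True
except ImportError:
    # Newer vLLM (after vllm-project/vllm#27654): fall back to the
    # updated kernel interface instead.
    fused_recurrent_gated_delta_rule = None
    HAS_STANDALONE_RECURRENT = False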
luolun pushed a commit to luolun/vllm-ascend that referenced this pull request Nov 19, 2025
hwhaokun pushed a commit to hwhaokun/vllm-ascend that referenced this pull request Nov 19, 2025
NSDie pushed a commit to NSDie/vllm-ascend that referenced this pull request Nov 24, 2025
devpatelio pushed a commit to SumanthRH/vllm that referenced this pull request Nov 29, 2025