
[Bug] Fix w4afp8 moe kernel #9392

Merged
zhyncs merged 2 commits into sgl-project:main from yuhyao:fix/w4a8-moe-kernel
Aug 21, 2025

Conversation

Contributor

@yuhyao commented Aug 20, 2025

Motivation

This PR fixes an issue introduced in PR #7772.
When running test_int4_fp8_grouped_gemm_multi_experts in sgl-kernel/tests/test_cutlass_w4a8_moe_mm.py with k = 512, n = 1024, the test may occasionally produce incorrect results. (It happens very rarely, but you can reproduce it more easily by setting batch_size = 512 and num_experts = 256.)

This bug also affects the E2E results of DeepSeek-R1 in w4afp8 TP mode (PR8118), where you may observe inconsistent outputs even with the same random seed and prompt.

Modifications

This fix is adapted from NVIDIA CUTLASS v4.0.0, and has also been merged into TensorRT-LLM.

Accuracy Tests

Benchmarking and Profiling

Checklist

Contributor

@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @yuhyao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request aims to resolve a bug within the w4afp8 Mixture-of-Experts (MoE) kernel by implementing a critical synchronization step for Tensor Memory Accelerator (TMA) descriptors. This ensures proper commit and wait operations for TMA descriptors, which is crucial for correct data transfer and processing in GPU kernels.

Highlights

  • TMA Descriptor Synchronization: Introduced cute::tma_desc_commit_group() and cute::tma_desc_wait_group() calls within the tensormaps_cp_fence_release function in sm90_mma_array_tma_gmma_rs_warpspecialized_mixed_input_.hpp. This change ensures that TMA descriptors are properly committed and waited upon before proceeding, addressing a potential bug in the w4afp8 MoE kernel.

Contributor

@gemini-code-assist (bot) left a comment


Code Review

The pull request aims to fix a bug in the MoE kernel by adding TMA descriptor commit and wait operations. This is a good step towards ensuring correctness. However, the current implementation places the collective synchronization call cute::tma_desc_wait_group() inside a conditional block executed by a single thread. This introduces a new critical issue, as it can lead to race conditions or deadlocks. I have provided a suggestion to move the wait call outside the conditional, ensuring all threads in the warp participate in the synchronization as required.

Comment on lines +1491 to +1494

```cpp
if (cute::elect_one_sync()) {
  cute::tma_desc_commit_group();
  cute::tma_desc_wait_group();
}
```


critical

The cute::tma_desc_wait_group() function is a collective operation that all threads in the warp must execute to synchronize. Placing it inside the if (cute::elect_one_sync()) block means only the elected thread will perform the wait, while other threads in the warp will proceed without synchronization. This can lead to a race condition where other threads use stale TMA descriptors, or a deadlock. To fix this, the tma_desc_wait_group() call should be moved outside of the conditional block.

```cpp
if (cute::elect_one_sync()) {
  cute::tma_desc_commit_group();
}
cute::tma_desc_wait_group();
```

@AniZpZ AniZpZ self-assigned this Aug 20, 2025
@zhyncs zhyncs merged commit de4990a into sgl-project:main Aug 21, 2025
57 of 60 checks passed
MahmoudAshraf97 pushed a commit to MahmoudAshraf97/sglang that referenced this pull request Sep 8, 2025
