
Support DeepSeek-R1 w4a8 low latency deepep#8311

Closed
ayrnb wants to merge 95 commits into sgl-project:main from bytedance-iaas:feat/w4a8_support_ll_deepep

Conversation

@ayrnb
Collaborator

@ayrnb ayrnb commented Jul 24, 2025

Motivation

Follow-up to #8247 and #7762. Support DeepEP low-latency mode for the DeepSeek-R1 w4a8 model.

Modifications

Add forward_cutlass_w4a8_masked for the DeepEP low-latency mode.

Checklist

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @ayrnb, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request introduces specialized support for DeepSeek-R1 w4a8 quantized models to operate in a low-latency Deep Expert Parallelism (DeepEP) mode. It achieves this by adding a new execution path within the existing Mixture of Experts (MoE) layer, which includes custom data preprocessing and output handling tailored for this specific optimization.

Highlights

  • New Low-Latency DeepEP Mode: I've introduced a new deepep_ll (DeepEP Low Latency) mode within the cutlass_w4a8_moe function. This mode is specifically designed to optimize performance for DeepSeek-R1 w4a8 quantized models.
  • Conditional Logic for DeepEP Modes: The cutlass_w4a8_moe function now includes conditional logic to handle input preparation and output processing differently based on the ep_mode parameter. For deepep_ll mode, it uses a new data preparation pipeline.
  • Specialized Data Preparation for Low Latency: A new function, deepep_ll_get_cutlass_w4a8_moe_mm_data, has been added to kernels.py. This function is responsible for preparing the input hidden_states, expert offsets, and problem sizes in a format suitable for the low-latency DeepEP mode, including a new compute_problem_sizes_w4a8 kernel.
  • Integration into MoE Layer Forward Pass: The main MoE layer's forward method in layer.py has been updated to conditionally invoke a new forward_cutlass_w4a8_masked method when use_w4afp8 is true and the resolved_deepep_mode is set to low_latency. This new method acts as the entry point for the optimized W4A8 path.
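The dispatch described in the last highlight can be sketched as follows. This is a minimal illustration with stub method bodies; only forward_cutlass_w4a8_masked, use_w4afp8, and resolved_deepep_mode are names taken from the PR summary, everything else (the class name, stub return values) is hypothetical and does not reflect the real sglang layer.

```python
# Minimal sketch of the conditional dispatch; stub bodies stand in for the
# real sglang MoE implementations.
class MoELayerSketch:
    def __init__(self, use_w4afp8: bool, resolved_deepep_mode: str):
        self.use_w4afp8 = use_w4afp8
        self.resolved_deepep_mode = resolved_deepep_mode

    def forward(self, hidden_states):
        # Route to the masked w4a8 path only when both conditions hold.
        if self.use_w4afp8 and self.resolved_deepep_mode == "low_latency":
            return self.forward_cutlass_w4a8_masked(hidden_states)
        return self.forward_default(hidden_states)

    def forward_cutlass_w4a8_masked(self, hidden_states):
        return ("w4a8_masked", hidden_states)  # stub for the new path

    def forward_default(self, hidden_states):
        return ("default", hidden_states)  # stub for the existing path
```

The key point is that the new path is taken only when both the quantization flag and the resolved DeepEP mode match, so existing configurations are unaffected.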

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for DeepSeek-R1 w4a8 in low latency deepep mode. The changes are well-contained and introduce a new execution path for this specific mode. I've provided a few suggestions to improve performance by vectorizing a loop, and to enhance code quality by removing dead code and a debug log statement. Overall, the implementation looks correct.

Comment on lines +254 to +259
```python
for expert_idx in non_zero_indices:
    num_non_zero_rows = local_topk_ids[expert_idx].item()
    output[expert_idx, :num_non_zero_rows] = c2[
        c2_index : c2_index + num_non_zero_rows
    ]
    c2_index += num_non_zero_rows
```
Contributor


Severity: medium

This Python loop iterates over active experts to scatter the results. For performance-critical code running on a GPU, this can be a bottleneck due to the overhead of launching multiple operations from a Python loop. Consider vectorizing this operation or using a custom kernel for a more efficient implementation.

Comment on lines +260 to +261
```python
else:
    output = c2
```
Contributor


Severity: medium

This else block appears to be unreachable. The ep_mode is validated on lines 124-151, and an unknown ep_mode will raise a ValueError, preventing execution from reaching this point. This makes the else block dead code. Please remove it to improve code clarity and maintainability.

@ayrnb
Collaborator Author

ayrnb commented Jul 24, 2025

It cannot enable cudagraph. 😭😭😭😭

@ayrnb
Collaborator Author

ayrnb commented Jul 28, 2025

> it can not enable cudagraph. 😭😭😭😭

[image]

cudagraph can now be enabled.

@ayrnb
Collaborator Author

ayrnb commented Jul 28, 2025

The code became too messy after the rebase, so I created a new PR: #8464
