
[RL] MoE BF16 support m_grouped_bf16_gemm_nn_contiguous in EP #7527

Merged
ckl117 merged 1 commit into PaddlePaddle:develop from ckl117:dev_bf16_deepgemm
Apr 21, 2026

Conversation

Collaborator

@ckl117 ckl117 commented Apr 21, 2026

Motivation

BF16 MoE uses m_grouped_bf16_gemm_nn_contiguous during EP prefill to align with training precision.
Required environment variables:

FD_ATTENTION_BACKEND: "FLASH_ATTN"
FD_USE_PHI_MOE_TOPK: 1
FD_USE_PHI_MOE_PERMUTE: 1
FD_SiluAndMul_USE_PHI_SWIGLU: 1
FD_MOE_PROB_IN_ADVANCE: 1
FD_USE_PHI_RMSNORM: 1
FD_ENABLE_RL: 1
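
As a sketch, these flags could be exported in the serving environment before launch (flag names and values are taken verbatim from the list above; the launch command itself is not part of this PR):

```shell
# Enable the BF16 grouped-GEMM path for EP prefill (flags from this PR's description)
export FD_ATTENTION_BACKEND="FLASH_ATTN"
export FD_USE_PHI_MOE_TOPK=1
export FD_USE_PHI_MOE_PERMUTE=1
export FD_SiluAndMul_USE_PHI_SWIGLU=1
export FD_MOE_PROB_IN_ADVANCE=1
export FD_USE_PHI_RMSNORM=1
export FD_ENABLE_RL=1
```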

💡 If this PR is a Cherry Pick, the PR title needs to follow the format by adding the [Cherry-Pick] label at the very beginning and appending the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)


Modifications

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code and run pre-commit before committing.
  • Add unit tests. Please write the reason in this PR if no unit tests.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot

paddle-bot Bot commented Apr 21, 2026

Thanks for your contribution!


@PaddlePaddle-bot PaddlePaddle-bot left a comment


🤖 AI Code Review | 2026-04-21 16:12:39

Review summary

PR description: in the Cutlass MoE backend, introduce m_grouped_bf16_gemm_nn_contiguous (a DeepGemm kernel) to replace paddle.incubate.nn.functional.batched_gemm, keeping a fallback path.
Files changed: model_executor/layers/moe/fused_moe_cutlass_backend.py
Suggested tag: OP

PR description check

The Motivation / Modifications / Usage or Command sections of the PR description are still unfilled template placeholders; consider filling them in with the actual changes. Also, the wording "del paddle.batch_gemm" in the original title is inaccurate: batched_gemm is kept as a fallback, not deleted.

Suggested title:

  • [RL] Cutlass MoE backend uses DeepGemm grouped_gemm to replace batched_gemm

Suggested description template:

## Motivation
In the EP prefill path of the Cutlass MoE backend, use the DeepGemm kernel `m_grouped_bf16_gemm_nn_contiguous` from paddlefleet_ops to replace `paddle.incubate.nn.functional.batched_gemm` for better BF16 GEMM performance. When paddlefleet_ops is unavailable, keep batched_gemm as the fallback.

## Modifications
1. Add an `m_grouped_bf16_gemm_nn_contiguous` wrapper that calls the DeepGemm kernel
2. `moe_permute` passes `return_expert_indices=True` to obtain `m_indices`
3. The two `batched_gemm` call sites become conditional: use DeepGemm when available, otherwise fall back to batched_gemm

Per-file comments

Severity | File | Description
🟡 Suggestion | fused_moe_cutlass_backend.py:56 | Consider extracting the new wrapper into a shared module so other MoE backends can reuse it
❓ Question | fused_moe_cutlass_backend.py:57 | paddle.empty output is uninitialized; please confirm the kernel writes every element

Overall assessment

The change is small and self-contained, with a reasonable batched_gemm fallback for compatibility. The main suggestions are to fill in the PR description and resolve the uninitialized-output question.
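
To make the grouped-GEMM semantics concrete, here is a minimal pure-Python reference of what `m_grouped_bf16_gemm_nn_contiguous` computes according to the PR description: each token row is multiplied by the weight matrix of the expert it was routed to, as recorded in `m_indices`. This is an illustrative sketch of the semantics only, not the DeepGemm kernel or its actual signature.

```python
def grouped_gemm_nn_contiguous(x, weights, m_indices):
    """Reference semantics of a grouped NN GEMM over contiguous token rows.

    x:         token activations, shape [m, k] (list of rows)
    weights:   per-expert weight matrices, each of shape [k, n]
    m_indices: expert id assigned to each token row, length m
    """
    out = []
    for row, e in zip(x, m_indices):
        w = weights[e]  # pick the weight matrix of the routed expert
        n = len(w[0])
        # out[i] = x[i] @ weights[m_indices[i]]
        out.append([sum(row[k] * w[k][j] for k in range(len(row))) for j in range(n)])
    return out
```

In the real backend, the kernel fuses this per-expert dispatch into a single launch over the permuted (expert-contiguous) token layout, which is why `moe_permute` must also return `m_indices`.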



def m_grouped_bf16_gemm_nn_contiguous(x, y, expert_idx_per_token):

🟡 Suggestion: this function risks duplicating similar wrappers that may exist in fused_moe_deepgemm_backend.py / fused_moe_blackwell_backend.py.

Consider extracting the helper into a shared module (e.g. moe/utils.py or moe/__init__.py) to avoid repeated definitions and simplify future maintenance.



def m_grouped_bf16_gemm_nn_contiguous(x, y, expert_idx_per_token):
out = paddle.empty([x.shape[0], y.shape[-1]], dtype=x.dtype)

❓ Question: the output tensor created by paddle.empty is uninitialized. Please confirm that the underlying paddlefleet_ops.deep_gemm.m_grouped_bf16_gemm_nn_contiguous writes every element of out; otherwise, when the token count is 0 or expert_idx_per_token is empty, the function may return a tensor containing uninitialized memory.
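
The concern can be illustrated with a small pure-Python sketch (all names below are illustrative, not FastDeploy or Paddle APIs): if a kernel only writes the rows it receives work for, any row that gets no tokens keeps whatever was in the buffer, so the output must be zero-initialized unless the kernel is guaranteed to write every element.

```python
def run_kernel(out, rows_written):
    # Stand-in for a kernel that only writes the rows it was given work for.
    for i in rows_written:
        out[i] = [1.0] * len(out[i])

def alloc_output(m, n, zero_init):
    if zero_init:
        return [[0.0] * n for _ in range(m)]  # analogous to paddle.zeros
    # Simulate uninitialized memory with sentinel garbage values
    # (analogous to the contents of paddle.empty).
    return [[float("nan")] * n for _ in range(m)]

out = alloc_output(3, 2, zero_init=True)
run_kernel(out, rows_written=[0, 2])  # row 1 receives no tokens
# With zero_init=True, the untouched row is a well-defined zero row;
# with zero_init=False it would contain garbage.
```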

@ckl117 ckl117 changed the title from "[RL] add m_grouped_bf16_gemm_nn_contiguous, del paddle.batch_gemm" to "[RL] MoE BF16 support m_grouped_bf16_gemm_nn_contiguous in EP" on Apr 21, 2026
@codecov-commenter

Codecov Report

❌ Patch coverage is 36.36364% with 7 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@7707be8). Learn more about missing BASE report.

Files with missing lines Patch % Lines
...l_executor/layers/moe/fused_moe_cutlass_backend.py 36.36% 5 Missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #7527   +/-   ##
==========================================
  Coverage           ?   72.89%           
==========================================
  Files              ?      419           
  Lines              ?    57483           
  Branches           ?     9004           
==========================================
  Hits               ?    41902           
  Misses             ?    12755           
  Partials           ?     2826           
Flag Coverage Δ
GPU 72.89% <36.36%> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

Collaborator

@zoooo0820 zoooo0820 left a comment


LGTM

Collaborator

@EmmonsCurse EmmonsCurse left a comment


LGTM~ Skip coverage check as it mainly relies on tests with paddlefleet.

@ckl117 ckl117 merged commit c618a39 into PaddlePaddle:develop Apr 21, 2026
53 of 57 checks passed
xiaoguoguo626807 pushed a commit to xiaoguoguo626807/FastDeploy that referenced this pull request May 7, 2026