fix bailing_moe with enable_dp_attention#10860

Merged
zhyncs merged 1 commit into sgl-project:main from GuoweiWangU:fix_bailing_moe
Sep 24, 2025

Conversation

@GuoweiWangU (Contributor) commented Sep 24, 2025

Motivation

Inference hangs when DP attention is enabled:
python -m sglang.launch_server --model {path_to_bailing_moe} --tp 4 --dp 2 --enable-dp-attention --enable-dp-lm-head

The root cause is #5558, which leads to embedding forward errors.

Modifications

Use enable_tp instead of use_attn_tp_group for VocabParallelEmbedding.
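A self-contained sketch of the idea behind this change. The `VocabParallelEmbedding` and `is_dp_attention_enabled` below are simplified stand-ins for the real sglang code, and the vocab size is an assumed example value:

```python
# Simplified stand-ins, not the real sglang implementations.

def is_dp_attention_enabled() -> bool:
    # In sglang this reads the server args; hardcoded here to mimic a
    # server launched with --enable-dp-attention.
    return True

class VocabParallelEmbedding:
    """Toy model of the layer: shards the vocabulary across TP ranks
    only when enable_tp is True, otherwise replicates the full table."""
    def __init__(self, vocab_size: int, tp_size: int, enable_tp: bool):
        self.enable_tp = enable_tp
        self.shard_size = vocab_size // tp_size if enable_tp else vocab_size

# Before the fix, the layer was placed in the attention-TP group via
# use_attn_tp_group, so its all-reduce ran over a mismatched group and
# hung under DP attention. After the fix, TP sharding is simply
# disabled whenever DP attention is on:
emb = VocabParallelEmbedding(
    vocab_size=126_464,   # assumed example vocab size
    tp_size=4,
    enable_tp=not is_dp_attention_enabled(),
)
print(emb.shard_size)  # full vocab per rank: the table is replicated
```

With DP attention off, `enable_tp` evaluates to True and each of the 4 TP ranks would hold only a quarter of the vocabulary.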

Checklist

@gemini-code-assist (Contributor)

Summary of Changes

Hello @GuoweiWangU, I'm Gemini Code Assist [1]. I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical inference hang issue encountered when running the bailing_moe model with distributed attention and LM head enabled. The problem stemmed from incorrect handling of embedding parallelism, leading to forward errors. The fix involves adjusting how tensor parallelism is configured for VocabParallelEmbedding to ensure proper operation in distributed environments, specifically by changing the parameter used for enabling tensor parallelism.

Highlights

  • Fix for Inference Hang: Addresses an inference hang issue occurring in the bailing_moe model when distributed attention and LM head are enabled, which was caused by embedding forward errors.
  • Parameter Update for VocabParallelEmbedding: The VocabParallelEmbedding initialization now uses enable_tp=not is_dp_attention_enabled() instead of use_attn_tp_group=global_server_args_dict["enable_dp_lm_head"] to correctly handle tensor parallelism in distributed environments.
  • Dependency Management: The is_dp_attention_enabled utility function has been imported to support the new logic for determining tensor parallelism, and an unused import ReplicatedLinear was removed.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist (bot) left a comment

Code Review

This pull request addresses a hang issue when using DP attention with the Bailing-MoE model. The root cause was an incorrect tensor parallel group being used for all-reduce in the VocabParallelEmbedding layer. The fix correctly disables tensor parallelism for the word_embeddings layer when DP attention is enabled, which is a simple and effective way to resolve the issue. The change also includes a minor cleanup by removing an unused import. The fix appears correct and well-targeted, though it comes with a minor performance trade-off.

```diff
 quant_config=quant_config,
 prefix=add_prefix("word_embeddings", prefix),
-use_attn_tp_group=global_server_args_dict["enable_dp_lm_head"],
+enable_tp=not is_dp_attention_enabled(),
```

Severity: medium

This change disables tensor parallelism for the word_embeddings layer when DP attention is enabled. While this correctly fixes the hang, be aware that it might have a minor performance impact as the embedding layer will be replicated across all TP ranks instead of being sharded.
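To put a rough number on that trade-off, a back-of-the-envelope memory sketch; the vocab size, hidden size, and dtype below are assumed example values, not measured from bailing_moe:

```python
# Rough per-rank memory cost of the embedding table, replicated vs.
# sharded across TP ranks. All sizes are assumed for illustration.

vocab_size = 126_464   # assumed vocab size
hidden_size = 4_096    # assumed hidden size
bytes_per_param = 2    # bf16
tp_size = 4            # matches the --tp 4 launch command above

sharded_mb = vocab_size // tp_size * hidden_size * bytes_per_param / 2**20
replicated_mb = vocab_size * hidden_size * bytes_per_param / 2**20

print(f"sharded:    {sharded_mb:.0f} MiB per rank")     # 247 MiB
print(f"replicated: {replicated_mb:.0f} MiB per rank")  # 988 MiB
```

So replication costs each rank roughly tp_size times the sharded footprint, which is usually a small fraction of total model memory but worth noting for large vocabularies.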

@strgrb strgrb requested review from strgrb and zhyncs September 24, 2025 08:38
@zhyncs zhyncs merged commit cd641a9 into sgl-project:main Sep 24, 2025
32 of 77 checks passed
HanHan009527 pushed a commit to HanHan009527/sglang that referenced this pull request Oct 9, 2025
