Support EPLB in FusedMoE#8398

Closed
fzyzcjy wants to merge 1 commit into sgl-project:main from fzyzcjy:feat/fused_moe_eplb

Conversation

fzyzcjy (Collaborator) commented Jul 27, 2025

Motivation

Speed: 14.1k -> 14.3k tok/s.
Accuracy: currently has a bug; only 92%.

Since the speedup is quite minor (expected for a small-scale prefill deployment), I am submitting the partial work here as a low-priority task.

Modifications

Checklist

gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @fzyzcjy, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request begins the integration of Expert Placement Load Balancing (EPLB) into the FusedMoE layer. The core change involves adapting how expert weights are loaded to account for their physical distribution, allowing the system to map logical expert identifiers to their actual physical locations. While this is a partial implementation, it lays the groundwork for more efficient expert management, with initial benchmarks showing a minor speed improvement.

Highlights

  • EPLB Integration: Introduced support for Expert Placement Load Balancing (EPLB) by importing get_global_expert_location_metadata and modifying the expert weight loading process.
  • Layer ID Tracking: The FusedMoELayer now stores its layer_id during initialization, which is crucial for querying expert location metadata from the EPLB system.
  • Refactored Expert Weight Loading: The weight_loader method has been refactored to map logical expert IDs to physical expert IDs using the global expert location metadata. A new private method, _weight_loader_physical, was introduced to handle the actual loading of weights for each physical expert, addressing potential code duplication with EPMoE.weight_loader.
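The logical-to-physical mapping described in the last highlight can be sketched with a small, self-contained toy. Note that ExpertLocationMetadata, logical_to_all_physical, and load_expert_weights here are hypothetical stand-ins for the sglang internals (the PR imports get_global_expert_location_metadata, whose exact API may differ):

```python
class ExpertLocationMetadata:
    """Toy stand-in: maps (layer_id, logical expert id) -> physical slots."""

    def __init__(self, logical_to_physical):
        self._map = logical_to_physical

    def logical_to_all_physical(self, layer_id, logical_expert_id):
        return self._map[layer_id][logical_expert_id]


def load_expert_weights(metadata, layer_id, logical_expert_id, load_physical):
    # EPLB may replicate one logical expert onto several physical slots,
    # so the checkpoint weight must be written to every replica. This is
    # why the layer stores its layer_id: it keys the metadata lookup.
    for physical_id in metadata.logical_to_all_physical(layer_id, logical_expert_id):
        load_physical(physical_id)


# Example: layer 0 replicates logical expert 1 onto physical slots 1 and 3.
meta = ExpertLocationMetadata({0: {0: [0], 1: [1, 3], 2: [2]}})
loaded = []
load_expert_weights(meta, layer_id=0, logical_expert_id=1, load_physical=loaded.append)
print(loaded)  # [1, 3]
```

In the real PR, the per-physical-slot loading is what the new private _weight_loader_physical method handles.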
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request introduces support for EPLB in FusedMoE. The review highlights a critical bug related to weight tensor sizing and indexing when EPLB is active, likely causing accuracy degradation. A suggestion to improve code clarity by avoiding variable shadowing is also included.

Comment on lines 400 to 401
expert_id = self._map_global_expert_id_to_local_expert_id(expert_id)
if expert_id == -1:

critical

There are a couple of issues here:

  1. Potential Bug with Weight Tensor Sizing: The weight tensors (self.w13_weight, self.w2_weight) are created with a size based on the global number of experts. However, when EPLB is active, _map_global_expert_id_to_local_expert_id returns a local expert index. This local index is then used to access the globally-sized weight tensors. This will result in only the first num_experts_per_partition slots of the weight tensors being populated, leaving the rest uninitialized. To fix this, FusedMoE's weight tensors should likely be sized using self.num_experts_per_partition when expert parallelism is active, similar to how EPMoE is implemented. This would require changes in UnquantizedFusedMoEMethod.create_weights and FusedMoE.__init__.

  2. Variable Shadowing: The parameter expert_id is reassigned on line 400. It initially represents a physical expert ID and is then updated to be a local expert ID. This makes the code harder to follow. It would be clearer to use a new variable name, e.g., local_expert_id.

Suggested change
- expert_id = self._map_global_expert_id_to_local_expert_id(expert_id)
- if expert_id == -1:
+ local_expert_id = self._map_global_expert_id_to_local_expert_id(expert_id)
+ if local_expert_id == -1:
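The sizing issue in point 1 can be illustrated with a small, self-contained sketch. local_weight_shape and map_global_to_local are hypothetical names chosen for illustration, not the sglang implementation:

```python
def local_weight_shape(num_global_experts, ep_size, intermediate, hidden):
    # Size per-rank expert weights by the local partition (as EPMoE does),
    # so local indices are valid and no slots are left uninitialized.
    num_local = num_global_experts // ep_size
    return (num_local, 2 * intermediate, hidden)  # e.g. a w13-style layout


def map_global_to_local(global_expert_id, ep_rank, num_local):
    # Returns -1 when the expert is owned by another EP rank, mirroring the
    # sentinel checked by `if local_expert_id == -1` above.
    start = ep_rank * num_local
    if start <= global_expert_id < start + num_local:
        return global_expert_id - start
    return -1


# 8 global experts over 4 EP ranks -> 2 local experts per rank.
shape = local_weight_shape(num_global_experts=8, ep_size=4, intermediate=32, hidden=16)
print(shape)                                           # (2, 64, 16)
print(map_global_to_local(3, ep_rank=1, num_local=2))  # 1: rank 1 owns experts 2-3
print(map_global_to_local(5, ep_rank=1, num_local=2))  # -1: owned by rank 2
```

The bug in the PR is the mismatch between these two halves: tensors allocated with the global count but indexed with local ids, so only the first num_experts_per_partition slots ever get populated.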

ch-wan (Collaborator) commented Jul 27, 2025

Could you update it after #8397 is merged? We can have EPMoE and FusedMoE share the same code.

fzyzcjy (Collaborator, Author) commented Jul 27, 2025

@ch-wan Sure. Indeed this may not be very useful, hence the low priority: for small-scale prefill, the expert load is already quite balanced.

fzyzcjy (Collaborator, Author) commented Jul 27, 2025

I do not have time to work on this in the next few days, since there are several higher-priority optimizations, and I now see that your refactor PR series needs a change related to this, so I am closing this. Feel free to work on it, either via a new PR if that is more convenient for you (since IIRC you use an auto PR-chain tool), or by reopening and reusing this PR.

@fzyzcjy fzyzcjy closed this Jul 27, 2025
@ch-wan ch-wan mentioned this pull request Jul 28, 2025
