Conversation
Summary of Changes
Hello @fzyzcjy, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request begins the integration of Expert Placement Load Balancing (EPLB) into the FusedMoE layer. The core change involves adapting how expert weights are loaded to account for their physical distribution, allowing the system to map logical expert identifiers to their actual physical locations. While this is a partial implementation, it lays the groundwork for more efficient expert management, with initial benchmarks showing a minor speed improvement.
Highlights
- **EPLB Integration:** Introduced support for Expert Placement Load Balancing (EPLB) by importing `get_global_expert_location_metadata` and modifying the expert weight loading process.
- **Layer ID Tracking:** The `FusedMoE` layer now stores its `layer_id` during initialization, which is crucial for querying expert location metadata from the EPLB system.
- **Refactored Expert Weight Loading:** The `weight_loader` method has been refactored to map logical expert IDs to physical expert IDs using the global expert location metadata. A new private method, `_weight_loader_physical`, was introduced to handle the actual loading of weights for each physical expert, addressing potential code duplication with `EPMoE.weight_loader`.
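The logical-to-physical mapping described in the highlights can be sketched roughly as follows. This is a minimal illustration, not the actual sglang implementation: the names `get_global_expert_location_metadata`, `weight_loader`, and `_weight_loader_physical` come from the PR, while the `ExpertLocationMetadata` class, its fields, and the surrounding skeleton are simplified assumptions.

```python
# Hypothetical sketch of the logical -> physical expert mapping described in
# the PR summary. The metadata class and its fields are simplified
# assumptions, not the real sglang code.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class ExpertLocationMetadata:
    # Maps each logical expert id to the physical expert ids holding replicas
    # of its weights (EPLB may replicate hot experts across ranks).
    logical_to_all_physical_map: Dict[int, List[int]]

    def logical_to_physical(self, logical_expert_id: int) -> List[int]:
        return self.logical_to_all_physical_map.get(logical_expert_id, [])


class FusedMoESketch:
    def __init__(self, layer_id: int, metadata: ExpertLocationMetadata):
        # layer_id is stored so the per-layer EPLB metadata can be queried
        # later, when weights are loaded.
        self.layer_id = layer_id
        self.metadata = metadata
        self.loaded: Dict[int, str] = {}

    def weight_loader(self, weight_name: str, logical_expert_id: int) -> None:
        # One logical expert may map to several physical experts, so the
        # weight is loaded once per physical location.
        for physical_id in self.metadata.logical_to_physical(logical_expert_id):
            self._weight_loader_physical(weight_name, physical_id)

    def _weight_loader_physical(self, weight_name: str, physical_id: int) -> None:
        # Stand-in for the actual per-physical-expert weight copy.
        self.loaded[physical_id] = weight_name
```

Splitting the per-physical-expert work into `_weight_loader_physical` is what lets `EPMoE` and `FusedMoE` potentially share the loading code later.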
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes
[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
Code Review
This pull request introduces support for EPLB in FusedMoE. The review highlights a critical bug related to weight tensor sizing and indexing when EPLB is active, likely causing accuracy degradation. A suggestion to improve code clarity by avoiding variable shadowing is also included.
```python
expert_id = self._map_global_expert_id_to_local_expert_id(expert_id)
if expert_id == -1:
```
There are a couple of issues here:

1. **Potential Bug with Weight Tensor Sizing:** The weight tensors (`self.w13_weight`, `self.w2_weight`) are created with a size based on the global number of experts. However, when EPLB is active, `_map_global_expert_id_to_local_expert_id` returns a local expert index. This local index is then used to access the globally-sized weight tensors. This will result in only the first `num_experts_per_partition` slots of the weight tensors being populated, leaving the rest uninitialized. To fix this, `FusedMoE`'s weight tensors should likely be sized using `self.num_experts_per_partition` when expert parallelism is active, similar to how `EPMoE` is implemented. This would require changes in `UnquantizedFusedMoEMethod.create_weights` and `FusedMoE.__init__`.
2. **Variable Shadowing:** The parameter `expert_id` is reassigned on line 400. It initially represents a physical expert ID and is then updated to be a local expert ID. This makes the code harder to follow. It would be clearer to use a new variable name, e.g., `local_expert_id`.
```diff
-expert_id = self._map_global_expert_id_to_local_expert_id(expert_id)
-if expert_id == -1:
+local_expert_id = self._map_global_expert_id_to_local_expert_id(expert_id)
+if local_expert_id == -1:
```
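The sizing fix proposed in point 1 might look roughly like the sketch below. This is a hedged illustration under assumptions: the names `num_experts_per_partition`, `w13_weight`, and `w2_weight` mirror the review comment, but the function signature and everything else are hypothetical, not the actual `UnquantizedFusedMoEMethod.create_weights`.

```python
# Hypothetical sketch: size MoE weight tensors by the per-partition expert
# count when expert parallelism (EP) is active, as the review suggests.
# The signature and helper are assumptions, not the real sglang API.
import torch


def create_moe_weights(
    global_num_experts: int,
    ep_size: int,
    hidden_size: int,
    intermediate_size: int,
    use_ep: bool,
):
    # With EP active, each rank holds only its own partition of experts, so
    # tensors should be sized locally; sizing them globally leaves all slots
    # past num_experts_per_partition uninitialized (the bug described above).
    num_experts_per_partition = global_num_experts // ep_size
    num_local = num_experts_per_partition if use_ep else global_num_experts
    # w13 fuses the two up-projections, hence the factor of 2.
    w13_weight = torch.empty(num_local, 2 * intermediate_size, hidden_size)
    w2_weight = torch.empty(num_local, hidden_size, intermediate_size)
    return w13_weight, w2_weight
```

Local expert indices returned by `_map_global_expert_id_to_local_expert_id` then index tensors of matching (local) size, instead of globally-sized ones.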
Could you update it after #8397 is merged? We can have EPMoE and FusedMoE share the same code.
@ch-wan Sure. Indeed this may not be very useful, and is thus low priority, because for small-scale P it is quite balanced.
I do not have time to work on this in the next few days since there are several higher-priority optimizations, and I now see that your refactor PR series needs a change related to this, so I am closing this PR. Feel free to work on it, either in a new PR if that is more convenient for you (since, IIRC, you use an auto PR chain tool) or by reopening and reusing this one.
Motivation
speed: 14.1k -> 14.3k tok/s
accuracy: currently has a bug; only 92%
Since the speedup is quite minor (expected for a small-scale prefill deployment), I am submitting the partial work here as a low-priority PR.
Modifications
Checklist