
[fix CI] Fix logical condition in fused MoE layer for compressed tensor quantization #10299

Merged
zhyncs merged 2 commits into main from bbuf_tmp on Sep 11, 2025
Conversation

@BBuf (Collaborator) commented Sep 11, 2025

Background

This bug was introduced by #8118. cc @chenxijun1029

The current code in fused_moe_triton/layer.py has an operator precedence bug in the input scale validation check (lines 615-620). The problematic condition was:

if (
    "compressed" in self.quant_method.__class__.__name__.lower()
    or "w4afp8" in self.quant_config.get_name()
    and (param.data[expert_id] != 1).any()
    and ((param.data[expert_id] - loaded_weight).abs() > 1e-5).any()
):

Due to operator precedence (in Python, and binds more tightly than or), this condition is actually parsed as:

if (
    "compressed" in self.quant_method.__class__.__name__.lower()
    or (
        "w4afp8" in self.quant_config.get_name()
        and (param.data[expert_id] != 1).any()
        and ((param.data[expert_id] - loaded_weight).abs() > 1e-5).any()
    )
):

This means that any model using compressed tensors quantization will unconditionally trigger the ValueError, regardless of whether the input scales are actually equal.
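
For illustration, here is a minimal, self-contained sketch of the precedence issue. The boolean variables are hypothetical stand-ins for the real checks (the two tensor comparisons are collapsed into a single scales_differ flag); they are not code from this PR:

# Hypothetical stand-ins for the checks in layer.py:
is_compressed = True    # "compressed" is in the quant method class name
is_w4afp8 = False       # "w4afp8" is in self.quant_config.get_name()
scales_differ = False   # the stored and loaded scales actually match

# How Python parses the original condition: A or (B and C).
buggy = is_compressed or (is_w4afp8 and scales_differ)

# The intended grouping: (A or B) and C.
fixed = (is_compressed or is_w4afp8) and scales_differ

print(buggy)  # True  -> ValueError raised even though the scales match
print(fixed)  # False -> no error, as intended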

Error Observed

When loading models with compressed tensors quantization (e.g., neuralmagic/Mixtral-8x7B-Instruct-v0.1-FP8), the following error occurs:

ValueError: input_scales of w1 and w3 of a layer must be equal. But got 1.0 vs. tensor([0.1011], device='cuda:1', dtype=torch.bfloat16)

Solution

Add parentheses to ensure correct operator precedence:

if (
    (
        "compressed" in self.quant_method.__class__.__name__.lower()
        or "w4afp8" in self.quant_config.get_name()
    )
    and (param.data[expert_id] != 1).any()
    and ((param.data[expert_id] - loaded_weight).abs() > 1e-5).any()
):

Now the condition correctly checks that:

  • the quantization method is compressed tensors OR w4afp8,
  • AND the stored scale has already been set (is not the default value of 1),
  • AND the stored and newly loaded scales differ by more than the 1e-5 threshold.

This ensures the validation only runs when appropriate and doesn't incorrectly fail for valid compressed tensor models.
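
A quick way to sanity-check the grouping is to mirror the condition with plain booleans in place of the tensor checks. The helper, assertions, and class/config names below are a hypothetical sketch for illustration, not code or tests from this PR:

# Hypothetical mirror of the fixed condition, with the tensor checks
# replaced by booleans so the boolean logic can be tested in isolation.
def should_raise(method_name: str, quant_name: str,
                 already_set: bool, differs: bool) -> bool:
    return (
        ("compressed" in method_name.lower() or "w4afp8" in quant_name)
        and already_set  # stands in for (param.data[expert_id] != 1).any()
        and differs      # stands in for ((param.data[expert_id] - loaded_weight).abs() > 1e-5).any()
    )

# A compressed-tensor model whose w1/w3 input scales match must NOT raise
# (class and config names here are illustrative):
assert not should_raise("CompressedTensorsMoEMethod", "compressed-tensors",
                        already_set=True, differs=False)
# It must still raise when the scales genuinely disagree:
assert should_raise("CompressedTensorsMoEMethod", "compressed-tensors",
                    already_set=True, differs=True)
# Unrelated quant methods are never validated here:
assert not should_raise("Fp8MoEMethod", "fp8",
                        already_set=True, differs=True)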

Testing

This fix resolves the model loading failure for compressed tensor quantized models like neuralmagic/Mixtral-8x7B-Instruct-v0.1-FP8 in the nightly GSM8K evaluation tests.

@gemini-code-assist gemini-code-assist bot (Contributor) left a comment


Summary of Changes

Hello @BBuf, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves a critical bug that prevented the successful loading of models utilizing compressed tensor quantization. The fix involves a precise adjustment to a conditional statement, ensuring that input scale validation is applied only when genuinely necessary, thereby restoring proper functionality for these quantized models.

Highlights

  • Operator Precedence Correction: Addressed a bug in fused_moe_triton/layer.py where an if condition had incorrect logical operator precedence, leading to erroneous validation failures for compressed tensor quantized models. Parentheses were added to ensure the condition ("compressed" OR "w4afp8") AND ... is evaluated as intended.

@BBuf BBuf changed the title Fix operator precedence bug in MoE input scale validation for compres… [fix CI] Fix operator precedence bug in MoE input scale validation for compres… Sep 11, 2025
@gemini-code-assist gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request correctly fixes an operator precedence bug in the MoE input scale validation logic by adding parentheses to group the or condition. This prevents an incorrect ValueError from being raised for compressed tensor models. The change is correct and well-explained. I have one suggestion to improve the readability and robustness of the condition.

@BBuf BBuf changed the title [fix CI] Fix operator precedence bug in MoE input scale validation for compres… [fix CI] Fix operator precedence bug in MoE input scale validation for compressor tensor models Sep 11, 2025
@BBuf BBuf changed the title [fix CI] Fix operator precedence bug in MoE input scale validation for compressor tensor models [fix CI] Fix logical condition in fused MoE layer for compressed tensor quantization Sep 11, 2025
@zhyncs zhyncs merged commit 37367da into main Sep 11, 2025
37 of 110 checks passed
@zhyncs zhyncs deleted the bbuf_tmp branch September 11, 2025 06:54
@zhyncs (Collaborator) commented Sep 11, 2025
