
support Qwen3-MoE-w4afp8#9147

Open
zhilingjiang wants to merge 1 commit into sgl-project:main from zhilingjiang:feat/w4afp8-tp

Conversation


@zhilingjiang zhilingjiang commented Aug 13, 2025

Motivation

Follow-up to #8118. Based on #7762.

Modifications

This PR adapts SGLang to run Qwen3-MoE w4afp8 quantized models. The key enhancements include:

- Support for Qwen3-MoE's w4afp8-block quantization format: SGLang can now load and run models quantized in the w4afp8-block format.
- Support for loading Qwen3-MoE static quantization calibration parameters: SGLang can now load and apply the static quantization calibration parameters for Qwen3-MoE models, ensuring correct inference behavior after quantization.
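For readers unfamiliar with the format, w4afp8 stores weights as 4-bit integers with per-block scales while activations are quantized to fp8 at runtime. The snippet below is only an illustration of the block-wise int4 weight side of the scheme, not the PR's actual kernel; the block size of 128 and the pure-numpy layout are assumptions for clarity.

```python
import numpy as np

def quantize_w4_blockwise(w, block=128):
    """Illustrative block-wise int4 weight quantization.

    Splits each row of `w` into blocks of `block` columns, computes a
    per-block scale, and rounds weights to signed 4-bit values in [-8, 7].
    The block size 128 is an assumption for illustration only.
    """
    rows, cols = w.shape
    q = np.empty_like(w, dtype=np.int8)
    scales = np.empty((rows, cols // block), dtype=np.float32)
    for i in range(0, cols, block):
        blk = w[:, i:i + block]
        # Scale so the largest magnitude in each block maps to the int4 limit 7.
        s = np.abs(blk).max(axis=1, keepdims=True) / 7.0
        s = np.where(s == 0, 1.0, s)  # avoid division by zero for all-zero blocks
        q[:, i:i + block] = np.clip(np.round(blk / s), -8, 7).astype(np.int8)
        scales[:, i // block] = s[:, 0]
    return q, scales

def dequantize(q, scales, block=128):
    """Reconstruct approximate fp32 weights from int4 values and block scales."""
    w = q.astype(np.float32).copy()
    for i in range(0, w.shape[1], block):
        w[:, i:i + block] *= scales[:, i // block][:, None]
    return w
```

In a real w4afp8 kernel the int4 values stay packed and the scale multiply is fused into the fp8 GEMM; the sketch above only shows why per-block scales bound the rounding error to half a quantization step.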

Accuracy Tests

[Accuracy test results: image attached in the original PR]

You can download the Qwen3-30B-A3B-w4afp8-block model here:
https://huggingface.co/zhilingjiang/Qwen3-30B-A3B-w4afp8-block-dynamic
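Assuming a standard SGLang install, serving the checkpoint might look like the following sketch. Whether `--quantization w4afp8` must be passed explicitly or is inferred from the checkpoint's quantization config is an assumption here, as is the tensor-parallel size.

```shell
# Sketch: launch an SGLang server on the w4afp8-block checkpoint.
# --quantization and --tp values are illustrative assumptions.
python -m sglang.launch_server \
  --model-path zhilingjiang/Qwen3-30B-A3B-w4afp8-block-dynamic \
  --quantization w4afp8 \
  --tp 1
```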

Checklist

@gemini-code-assist
Contributor

Warning

Gemini encountered an error creating the summary. You can try again by commenting /gemini summary.

@ZhuJiaqi9905
Contributor

Hi, this is nice work. However, there seem to be too many diffs in the git commit. Could you please clean it up to help us understand your code? I think that to support Qwen-235B-w4afp8 in TP mode, we need special handling for the "weight interleave scales" (which should not be 4), and we need to modify sgl-kernel.

