
Conversation

@Qubitium
Contributor

@Qubitium Qubitium commented Oct 14, 2025

Remove autogptq clutter and autogptq-related configs that are not worth keeping backward compat for.

GPTQModel has had a slight project name change to GPT-QModel, with a - (the PyPI package and import name stay the same), as we have now added awq/AutoAWQ into our repo and will be making a PR soon to address AWQ loading using GPT-QModel.

GPTQConfig has the most important changes in this PR:

# New GPTQConfig Property. Applicable for sister Peft/Optimum PRs
act_group_aware (`bool`, *optional*, defaults to `True`):
    Use GAR (group aware activation order) during quantization. Has measurable positive impact on quantization
    quality. Only applicable when `desc_act = False`. Will be forced to `False` when `desc_act = True`.
    
    
# Removed GPTQConfig Properties:
use_cuda_fp16
use_exllama
exllama_config

The 3 removed properties are all related to kernel selection. They are a hot-potato mess and a legacy of autogptq. GPT-QModel uses the unified (existing) backend property to select kernels. Compat code was written in 2024 to convert these 3 properties to backend behind the scenes, but it is no longer relevant in 2025.
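A hedged sketch of what this looks like on the Transformers side after this PR; the checkpoint id is a placeholder and the values are illustrative, not recommendations:

from transformers import AutoModelForCausalLM, GPTQConfig

# Kernel choice now flows through the single `backend` property instead of
# the removed use_cuda_fp16 / use_exllama / exllama_config knobs.
quantization_config = GPTQConfig(
    bits=4,
    group_size=128,
    desc_act=False,
    act_group_aware=True,  # new property; forced to False when desc_act=True
    backend="auto",        # let GPT-QModel pick the best kernel per module
)
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-7B-GPTQ",  # placeholder pre-quantized checkpoint
    quantization_config=quantization_config,
    device_map="auto",
)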

Note:

  • Transformers/Optimum/Peft CI tests should never check for kernel.QUANT_TYPE (str). GPT-QModel will return the best-performing kernel for the relevant module, and it may differ per module due to in/out features and other gptq/module properties in relation to device type + dtype + many other factors.
  • CI tests should only assert kernel.QUANT_TYPE if the test pins a specific kernel via backend selection (see the sketch below).
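
A minimal sketch of that guideline, assuming QUANT_TYPE is exposed as a string attribute on each quantized linear module (only the attribute name comes from the note above; the helper itself is illustrative):

def assert_kernels(model, expected_quant_type=None):
    """Only pin the kernel type when the test pinned the backend."""
    for name, module in model.named_modules():
        quant_type = getattr(module, "QUANT_TYPE", None)
        if quant_type is None:
            continue  # not a quantized linear module
        if expected_quant_type is None:
            # backend="auto": GPT-QModel may legitimately pick a different
            # kernel per module, so only check that some kernel was selected.
            assert isinstance(quant_type, str) and quant_type
        else:
            # backend was pinned explicitly, so the kernel is deterministic.
            assert quant_type == expected_quant_type, name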

@Rocketknight1
Member

cc @MekkCyber for quantization

@Qubitium Qubitium changed the title [WIP] Fully deprecate AutoGPTQ for GPT-QModel [WIP] Fully deprecate AutoGPTQ and AutoAWQ for GPT-QModel Nov 20, 2025
@Qubitium
Contributor Author

We have begun AutoAWQ deprecation as well.

  • Fused module code has all been removed. AutoAWQ used to do quant-linear-level fusing, but I do not believe this is maintainable or good: if SGLang/vLLM adopt Transformers v5 for model loading, they will do their own auto fusing, and the quant module should not interfere with that.

  • IPEX is deprecated by Intel and we have a new AwqTorchFused kernel (based on the same Intel TorchFused kernel used for GPTQ), so any code/unit tests for IPEX now point to the AwqTorchFused kernel.

@MekkCyber
Contributor

Hi @Qubitium! Thanks a lot for working on this! Quick question: what do you mean by AutoAWQ being part of GPT-QModel now? Did you integrate the entire library (including the transformers dependency, like AutoAWQ does), or did you just port over the linear layers, kernels, and related components?

@Qubitium Qubitium marked this pull request as ready for review December 2, 2025 09:11
@Qubitium
Contributor Author

Qubitium commented Dec 2, 2025

@SunMarc @MekkCyber The PR is now synced to the pending Peft/Optimum PRs. Ready for code review for this portion. All tests pass with the pending gpt-qmodel 5.4.4 release (later today).

Notable changes:

  1. hf_select_quant_linear_v2 will now auto-select the kernel for both GPTQ and AutoAWQ. No more kernel-selection crud in Transformers: GPTQ/AWQ kernel selection is merged into a single API used strictly by HF for future API stability. Let gpt-qmodel decide, as it has the best view to return the best/latest kernel (see the first sketch after this list).

  2. AutoAWQ fusing code has been removed. This code is not maintainable (static-map based, model-arch specific) and is not relevant for vLLM/SGLang as they do their own fusing. Transformers v5, I believe, is also introducing more generic fusing, so any manual, per-model-arch fusing done by the previous AutoAWQ code should be eliminated.

  3. AwqConfig now inherits from GPTQConfig due to shared properties. For GPTQ, the legacy checkpoint_format is remapped to format internally, but for backward compat, until a future deprecation, we also write to checkpoint_format on save via to_dict. For AWQ, version is now mapped to format internally, and likewise for compat, we write to version using the format value in to_dict. This is consistent with what gpt-qmodel does, for code clarity while maintaining backward compat (see the second sketch after this list).
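
For point 1, a hypothetical usage sketch; the import path and parameter names below are assumptions, not the released gpt-qmodel signature:

from gptqmodel.utils.importer import hf_select_quant_linear_v2  # import path assumed

# Single entry point for both gptq and awq kernel selection; gpt-qmodel
# returns the QuantLinear class it considers best for this hw/config.
qlinear_cls = hf_select_quant_linear_v2(
    bits=4,
    group_size=128,
    desc_act=False,
    sym=True,
    quant_method="gptq",   # the same API also serves "awq"
    device_map="auto",     # device info is needed to pick the right kernel
    backend="auto",        # or pin a specific kernel for deterministic tests
)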
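
For point 3, a minimal sketch (not the actual transformers implementation; class names are hypothetical) of the legacy-key remapping and backward-compat serialization described above:

class GptqLikeConfig:
    def __init__(self, bits: int = 4, **kwargs):
        self.bits = bits
        # Legacy GPTQ `checkpoint_format` is remapped to the unified `format`.
        self.format = kwargs.pop("checkpoint_format", kwargs.pop("format", "gptq"))

    def to_dict(self) -> dict:
        # Until deprecation, write both the new key and the legacy key on save.
        return {"bits": self.bits, "format": self.format, "checkpoint_format": self.format}


class AwqLikeConfig(GptqLikeConfig):
    def __init__(self, bits: int = 4, **kwargs):
        # Legacy AWQ `version` is remapped to the unified `format`.
        if "version" in kwargs:
            kwargs["format"] = kwargs.pop("version")
        super().__init__(bits=bits, **kwargs)

    def to_dict(self) -> dict:
        d = super().to_dict()
        d.pop("checkpoint_format", None)
        d["version"] = self.format  # keep writing the legacy AWQ key for compat
        return d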

Member

@SunMarc SunMarc left a comment


Thanks, left some minor comments!

Comment on lines 813 to 818
do_fuse (`bool`, *optional*, defaults to `False`):
-    Whether to fuse attention and mlp layers together for faster inference
+    Deprecated, Whether to fuse attention and mlp layers together for faster inference
fuse_max_seq_len (`int`, *optional*):
-    The Maximum sequence length to generate when using fusing.
+    Deprecated, The Maximum sequence length to generate when using fusing.
modules_to_fuse (`dict`, *optional*, default to `None`):
-    Overwrite the natively supported fusing scheme with the one specified by the users.
+    Deprecated, Overwrite the natively supported fusing scheme with the one specified by the users.
Member


remove it directly since those are not used

@SunMarc
Member

SunMarc commented Dec 3, 2025

@bot /style

@github-actions
Contributor

github-actions bot commented Dec 3, 2025

Style bot fixed some files and pushed the changes.

quantization_config = AwqConfig(backend=AwqBackend.GEMM)
cls.quantized_model = AutoModelForCausalLM.from_pretrained(
    cls.model_name, device_map=cls.device_map, quantization_config=quantization_config
)
Contributor Author


@SunMarc This part of the awq test was modified because the previous test assumed kernel loading and weight saving are 1-to-1 compatible. They are not: the exllama/marlin kernels mutate the weights on load, so they are, in effect, not packable/re-savable. To make the test pass (it expects the quantized model to load, then save, then reload and run inference again), we need to specify the GEMM kernel, which does not mutate weights.
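
A hedged sketch of the load -> save -> reload round trip the test exercises (`quantized_model` stands in for the model built in the snippet above; paths are placeholders):

import tempfile
from transformers import AutoModelForCausalLM

with tempfile.TemporaryDirectory() as tmp_dir:
    # GEMM keeps the packed weights intact, so the quantized model can be
    # re-saved; exllama/marlin mutate the weights on load, which would break
    # this round trip.
    quantized_model.save_pretrained(tmp_dir)
    reloaded = AutoModelForCausalLM.from_pretrained(tmp_dir, device_map="auto")
    # ...run inference with `reloaded`, as the test does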

@Qubitium
Contributor Author

Qubitium commented Dec 4, 2025

@SunMarc Since last review:

  1. Unused AWQ properties (fuse-related) removed.
  2. Fixed commented-out code related to IPEX and changed it to test the TorchFused kernel instead (the IPEX replacement).
  3. Fixed HF AWQ kernel selection not passing device_map to hf_select_quant_linear_v2. Without device_map, it was selecting the wrong kernel, since gpt-qmodel needs device info to return the best kernel for the hardware.

The PR currently depends on GPT-QModel 5.4.4, which is not yet released, as we are working to resolve an internal regression in the GPTQ packing code ASAP: ModelCloud/GPTQModel#2234

Signed-off-by: ZX-ModelCloud <[email protected]>
@github-actions
Contributor

github-actions bot commented Dec 4, 2025

[For maintainers] Suggested jobs to run (before merge)

run-slow: auto, autoawq, gptq
