
Conversation

@cli99
Contributor

@cli99 cli99 commented Jul 19, 2024

Summary:
https://github.com/neuralmagic/AutoFP8 supports "ignored_layers" in quantization, and the saved-out "quantization_config" carries that information. For example:

"quantization_config": {
    "activation_scheme": "dynamic",
    "ignored_layers": [
        "model.layers.0.self_attn.q_proj",
        "model.layers.0.self_attn.k_proj",
        "model.layers.0.self_attn.v_proj",
        "model.layers.0.self_attn.o_proj"
      ],
     "quant_method": "fp8"
 },

This config does not quantize the self-attention modules in the first layer.

However, vLLM currently does not respect the "ignored_layers" field and applies uniform quantization to all modules in all layers.
#6515 added non-uniform quantization support through compressed-tensors. This PR adds support for "ignored_layers" in Llama models and leverages the prefix params added in #6515.
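
For illustration, a minimal standalone sketch of the prefix matching involved, assuming an exact-match check against a module's full name (the helper is_layer_ignored and the inline config dict are illustrative, not vLLM's actual API):

from typing import Optional


def is_layer_ignored(prefix: str, ignored_layers: Optional[list]) -> bool:
    """Return True if the module at `prefix` should skip FP8 quantization."""
    return bool(ignored_layers) and prefix in ignored_layers


quant_config = {
    "activation_scheme": "dynamic",
    "ignored_layers": [
        "model.layers.0.self_attn.q_proj",
        "model.layers.0.self_attn.k_proj",
        "model.layers.0.self_attn.v_proj",
        "model.layers.0.self_attn.o_proj",
    ],
    "quant_method": "fp8",
}

# The first layer's attention projections are skipped; other layers stay quantized.
print(is_layer_ignored("model.layers.0.self_attn.q_proj",
                       quant_config["ignored_layers"]))  # True
print(is_layer_ignored("model.layers.1.self_attn.q_proj",
                       quant_config["ignored_layers"]))  # False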

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which consists of a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build on the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI, as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

@cli99 cli99 marked this pull request as ready for review July 19, 2024 19:33
@robertgshaw2-redhat
Collaborator

robertgshaw2-redhat commented Jul 19, 2024

Thanks for this, but we should not put this type of logic in llama.py because it will be too difficult to maintain.

Instead, let's extend Fp8Config.get_quant_method(layer) to also take the layer_name. Then, if the layer_name is in the ignored list, we can return UnquantizedLinearMethod() from this function.

This will avoid any changes to llama.py.

I can whip this up quickly if you want
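
For illustration, a rough sketch of the suggested approach; the classes below are simplified stand-ins for vLLM's Fp8Config, Fp8LinearMethod, and UnquantizedLinearMethod, and the layer_name parameter is the one proposed above rather than an existing signature:

class UnquantizedLinearMethod:
    """Stand-in: leaves the layer unquantized."""


class Fp8LinearMethod:
    """Stand-in: applies FP8 quantization to the layer."""

    def __init__(self, config):
        self.config = config


class Fp8Config:
    def __init__(self, ignored_layers=None):
        self.ignored_layers = ignored_layers or []

    def get_quant_method(self, layer, layer_name: str):
        # If the layer's full name is in ignored_layers, fall back to the
        # unquantized implementation instead of FP8.
        if layer_name in self.ignored_layers:
            return UnquantizedLinearMethod()
        return Fp8LinearMethod(self)


config = Fp8Config(ignored_layers=["model.layers.0.self_attn.q_proj"])
print(type(config.get_quant_method(None, "model.layers.0.self_attn.q_proj")).__name__)
# -> UnquantizedLinearMethod
print(type(config.get_quant_method(None, "model.layers.1.self_attn.q_proj")).__name__)
# -> Fp8LinearMethod

Keeping the check inside Fp8Config means model code only forwards the layer prefix, which is what avoids touching llama.py.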

@cli99
Contributor Author

cli99 commented Jul 19, 2024

@robertgshaw2-neuralmagic, if you can add the change to Fp8Config.get_quant_method to take layer_name, that would be great. Thanks.

@cli99
Contributor Author

cli99 commented Jul 23, 2024

Suggested implementation here #6657

@cli99 cli99 deleted the fp8-quant-ignore-layers branch July 23, 2024 17:42