
Conversation

andrewor14 (Contributor) commented Sep 10, 2025

**Summary:** The existing QAT + LoRA path applied fake quantization only to the original slow path, but the default is the fast path that calls unsloth's fast LoRA primitives. This commit integrates fake quantization into these fast primitives as well, and adds unit tests to assert that fake quantization is actually taking place.
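For context, here is a minimal sketch of what weight fake quantization does conceptually; the function name, group size, and straight-through trick below are illustrative assumptions, not the actual unsloth/torchao implementation that this PR wires into the fast LoRA primitives.

```
import torch

def fake_quantize_int4(w: torch.Tensor, group_size: int = 32) -> torch.Tensor:
    # Simulate int4 weight quantization in the forward pass while keeping the
    # tensor in its original dtype, so gradients still flow through
    # (straight-through estimator). Assumes w.numel() is divisible by group_size.
    orig_shape = w.shape
    wg = w.reshape(-1, group_size)
    # Per-group symmetric scale for the signed 4-bit range [-8, 7]
    scale = wg.abs().amax(dim=-1, keepdim=True).clamp(min=1e-6) / 7.0
    dq = (torch.clamp(torch.round(wg / scale), -8, 7) * scale).reshape(orig_shape)
    # Forward uses the quantize-dequantize values, backward sees the identity
    return w + (dq - w).detach()
```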

**Test Plan:**

Unit tests:

```
pytest tests/utils/test_qat.py
```

End-to-end test: https://gist.github.com/andrewor14/6360dd69b5784c71c46e80c14f53e6b6

Full fine-tuning Llama3.1-8B with and without QAT + LoRA on yahma/alpaca-cleaned for 1 epoch:

- Batch size = 8 (no grad accum)
- Learning rate = 2e-4
- Quantization scheme = int4 weight only (with bf16 activations); a hedged sketch of applying this scheme is shown below
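As a point of reference, applying an int4 weight-only scheme with torchao might look like the sketch below; the API spelling and group_size are assumptions (torchao's quantization API has changed across versions), and this is not taken from the PR or the linked gist.

```
import torch
from torch import nn
from torchao.quantization import quantize_, int4_weight_only

# Stand-in module; in practice this would be the finetuned Llama3.1-8B in bf16.
# The int4 tinygemm path expects bf16 weights on a CUDA device.
model = nn.Sequential(nn.Linear(4096, 4096), nn.Linear(4096, 4096)).to(torch.bfloat16).cuda()

# Weight-only int4 quantization; activations stay in bf16.
# group_size=128 is an assumed value, not one specified in the PR.
quantize_(model, int4_weight_only(group_size=128))
```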

Wikitext perplexity:

- Baseline = int4 quantized model finetuned without QAT
- QAT int4 quantized model (with this PR) closed roughly 33% of the perplexity gap between the int4 baseline and the float model (see the quick arithmetic check below)
- QAT int4 quantized model without this PR was worse than the int4 baseline

```
==> unsloth_model_lora_baseline_output/lm_eval_float.log <==
|        |       |none  |     0|word_perplexity|↓  |7.5551|±  |   N/A|

==> unsloth_model_lora_baseline_output/lm_eval_quantized.log <==
|        |       |none  |     0|word_perplexity|↓  |8.7655|±  |   N/A|

==> unsloth_model_lora_qat_int4_output/lm_eval_quantized.log <==
|        |       |none  |     0|word_perplexity|↓  |8.3548|±  |   N/A|
```
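A quick check of the 33% figure from the logged perplexities (reading it as the fraction of the quantization-induced perplexity gap recovered by QAT is an inference from these numbers):

```
float_ppl = 7.5551      # finetuned model evaluated without quantization
baseline_int4 = 8.7655  # int4 quantized, finetuned without QAT
qat_int4 = 8.3548       # int4 quantized, finetuned with QAT (this PR)

gap_closed = (baseline_int4 - qat_int4) / (baseline_int4 - float_ppl)
print(f"{gap_closed:.1%}")  # ~33.9% of the int4 perplexity gap recovered
```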

Review comment on the following test code:

```
# TODO: there are bad interactions across tests right now, need to figure out
# how to disable model caching before re-enabling this test
@pytest.mark.parametrize("qat_scheme", ["fp8-int4", "fp8-fp8"])
def _test_full_model_fake_quantize(qat_scheme: str):
```
andrewor14 (Contributor, Author) commented:

@danielhanchen seems like there's some model state being cached across tests, is there an easy way to clear the cache in between the tests or disable the caching somehow?

danielhanchen (Contributor) replied:

Do you mean the overridden functions?

andrewor14 (Contributor, Author) replied:

If I run the LoRA test by itself I get the right PEFT model, but if I re-enable this full finetuning test and run the whole test file at once, the LoRA test no longer gives me a PEFT model.
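One generic way to rule out cross-test contamination like this (a sketch only; the cache being cleared is hypothetical, since the exact unsloth state involved isn't identified in this thread) is an autouse pytest fixture that resets global state after each test:

```
import gc

import pytest
import torch

@pytest.fixture(autouse=True)
def reset_global_state():
    # Run the test, then drop cached objects so the next test starts clean.
    yield
    # Hypothetical: clear whatever module-level cache holds patched/loaded
    # models, e.g. some_unsloth_module._MODEL_CACHE.clear()
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
```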

andrewor14 force-pushed the fix-qat-lora branch 2 times, most recently from 3b6bcf8 to 0eaad5e on September 16, 2025
danielhanchen (Contributor) commented:

Nice work!

danielhanchen merged commit 54d8983 into unslothai:main on Sep 17, 2025