FEAT add GraLoRA #2851
Conversation
Thanks for contributing GraLoRA to PEFT. The method looks interesting and the implementation generally looks good.
I have added a bunch of comments, but many of these are just due to your fork being a bit older. We have simplified PEFT now so that you can remove a bunch of code, I have marked the code that can be deleted.
Apart from the comments that I added, to complete this PR, let's work on:
- Extend tests: Add tests to `test_custom_models.py`, `test_encoder_decoder_models.py`, `test_feature_extraction_models.py`, and `test_seq_classifier.py`.
- Also, let's add documentation and ideally also at least one example.
- Optional, but highly recommended: Add an experiment to our PEFT method comparison suite.
Force-pushed from 134e6f0 to a24d156.
@yeonjoon-jung01 Please ping me when you're finished so that I know that I can give this another review. Also, if possible, please avoid force pushes or rebases, as those make reviews harder.
Force-pushed from c53ffce to 2618a8a.
Force-pushed from 2618a8a to dec25f5.
@BenjaminBossan I’ve finished updating the code 🙂. I saw your message a bit late; I had already rebased the branch to sync with upstream main, just in case there might be any conflicts. I’ll make sure to avoid force pushes or rebases from now on. Sorry about that!
@BenjaminBossan I have also resolved the previously missed features. I’ve extended the test coverage to include …
Thanks for the updates to the PR. I did another review round, please check.
Also, before committing your changes, please call `make style`. Ensure that you have the correct version of `ruff` installed (0.12.12).
@BenjaminBossan I’ve resolved all of your comments and applied the suggested changes. The main update is that I removed `tests/test_gralora.py` and integrated the related test cases into the existing `test_initialization` and `test_custom_models` files, including additional scenarios for Hybrid GraLoRA.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@yeonjoon-jung01 Could you please run …
@BenjaminBossan I have run …
Thanks for the updates. A test is failing. Check my comment on possible solutions.
@BenjaminBossan Do you think there’s any additional code I should test or update?
Thanks for the updates, the change looks good.
I focused on the examples this time and found a few issues. Some are possibly due to some recent changes in transformers, not sure, but we should update them so that the examples run out of the box.
Moreover, I ran an experiment with GraLoRA on the PEFT MetaMath benchmark. I used the default settings for the config and, in general, the results are comparable to LoRA at rank 32, with similar memory usage and training time. However, the final test accuracy fell slightly short, attaining 46.2% compared to 48.2% with LoRA. If you have any suggestions for better GraLoRA hyper-parameters for this experiment, feel free to check them in as an experiment. Otherwise, we can also stick with the defaults.
@BenjaminBossan Could you please try with learning rate 2e-4 instead of the default 1e-4 for GraLoRA?
@BenjaminBossan I’ve tested the method on my side with both rank 32 and 64, and in both cases, it achieved higher performance than LoRA. If you’d like, I can also commit the configuration and result JSON files, or you’re welcome to try reproducing it on your end.
> Could you please try with learning rate 2e-4 instead of the default 1e-4 for GraLoRA?
This does indeed help. With rank 32, I get higher accuracy now, 48.6% compared to LoRA rank 32 getting 48.2%. For rank 64 (alpha 128), I get 52.7% with GraLoRA and 53.0% with LoRA. In both cases, the memory usage is very close between the two methods.
We can check in those experiments, but if you have other suggestions that may work better, e.g. different hybrid_r or gralora_k, LMK.
@BenjaminBossan I also tried using a learning rate of 2e-4 for rank 32 and alpha 64, which achieved a test accuracy of 50.7%. The experiment was run on a single NVIDIA RTX A6000. I believe the difference may have resulted from variations in hardware or library versions. I’m just curious whether there might be any other differing settings. I’ve attached my adapter_config.json and training_params.json, along with the result and package info.
My config:

```json
{
  "auto_mapping": null,
  "base_model_name_or_path": null,
  "bias": "none",
  "fan_in_fan_out": false,
  "gralora_alpha": 64,
  "gralora_dropout": 0.0,
  "gralora_k": 2,
  "hybrid_r": 0,
  "inference_mode": false,
  "init_weights": true,
  "layers_pattern": null,
  "layers_to_transform": null,
  "modules_to_save": null,
  "peft_type": "GRALORA",
  "peft_version": "0.17.2.dev0@UNKNOWN",
  "r": 32,
  "revision": null,
  "target_modules": null,
  "task_type": null
}
```

Training params are the same.
The same accuracy for both ranks?
Possibly; just as an example, I use torch 2.8.0. The final run will be on an AWS instance we use for all models, so the score may still differ from what I reported, as I ran the experiment locally.
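For reference, the JSON above maps roughly onto the following Python construction. This is only a sketch: it assumes the PR exports a `GraloraConfig` class whose arguments mirror the JSON keys, following PEFT's usual naming; the exact class name and signature are assumptions, not confirmed in this thread.

```python
# Sketch only: recreate the adapter config above in Python.
# `GraloraConfig` and its argument names are assumed to mirror the JSON keys;
# check the merged PR for the actual API.
from peft import GraloraConfig  # assumed export name

config = GraloraConfig(
    r=32,                  # overall GraLoRA rank
    gralora_alpha=64,      # scaling factor, analogous to lora_alpha
    gralora_dropout=0.0,
    gralora_k=2,           # block granularity used by GraLoRA
    hybrid_r=0,            # rank of the optional hybrid (plain LoRA) part; 0 disables it
    init_weights=True,
    target_modules=None,   # None falls back to the model's default target modules
)
```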
@BenjaminBossan For rank 64, the accuracy ranged between 52.5% and 53%, depending on the hardware (A6000 and H100), which aligns with your reported results. I haven’t tested it yet, but I think we could consider setting …

Additionally, I believe the accuracy for rank 64 LoRA is documented as 48.9% in the …

Finally, I think a batch size of 4 might be too small for stable training. How about adding a gradient accumulation option and increasing the effective batch size to a more common value, such as 128 or 192?
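To illustrate the batch-size suggestion, this is roughly what gradient accumulation looks like with the generic transformers `TrainingArguments`; the MetaMath benchmark uses its own training script, so the names below are only a stand-in for the idea, not the benchmark's actual options.

```python
# Illustration of raising the effective batch size via gradient accumulation.
# The PEFT MetaMath benchmark script has its own configuration; this uses the
# generic transformers TrainingArguments purely as a stand-in.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="gralora-metamath",    # hypothetical output directory
    per_device_train_batch_size=4,    # micro-batch size as in the benchmark
    gradient_accumulation_steps=32,   # 4 x 32 = effective batch size of 128
    learning_rate=2e-4,               # learning rate suggested for GraLoRA
)
```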
@BenjaminBossan I guess you could add the GraLoRA rank-32 example with a learning rate of 2e-4 for now. I believe the accuracy results vary significantly across different settings (hardware and library versions) due to the instability caused by the small batch size. If you’re planning to scale up the batch size, I might try other configurations then. Otherwise, please let me know if there’s anything else I should take care of.
Could you please push the experiments to this PR (only the configs, not the results)? Since the learning rate also needs a different value, please include the …

I compared to LoRA with rank 64 and rslora enabled for better alpha values: …

We could think about adding gradient accumulation to the script, but we wanted to keep it simple on purpose, and gradient accumulation can be tricky to get right. For this PR, let's keep the settings as they are. Since the other methods use the same batch size, I think the comparison is still fair.
@BenjaminBossan I was just wondering if increasing the batch size could help stabilize the training process and make the final results more consistent across different settings (hardware and library versions). However, I also agree that it’s fine to keep the current settings as they are in this PR. I’ve added the experiment configs for GraLoRA :)
@yeonjoon-jung01 Could you please run …
@BenjaminBossan I have updated the code 👍
Thanks a lot @yeonjoon-jung01 for the last update and for your great work on this PR overall. It's now in a finished state as everything LGTM.
We will not merge it right now as PEFT is currently in feature freeze. As soon as the next release is out, which shouldn't be too long in the future, this PR will be merged.
@BenjaminBossan Thanks a lot for your guidance during this PR! Really appreciate the helpful feedback. Looking forward to it being merged after the next release.
Summary
We opened the initial PR for the GraLoRA method (a granular low-rank adaptation that improves expressive power and outlier handling, selected as a NeurIPS 2025 Spotlight), based on #2636.
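As a quick illustration of how the method is meant to be used once merged, here is a minimal sketch; the class name `GraloraConfig`, the chosen base model, and the target module names are assumptions for illustration and may differ from the merged API.

```python
# Minimal usage sketch; names are assumptions, check the merged PEFT docs.
from transformers import AutoModelForCausalLM
from peft import get_peft_model, GraloraConfig  # GraloraConfig name is assumed

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # example base model
config = GraloraConfig(
    r=32,
    gralora_alpha=64,
    gralora_k=2,
    target_modules=["q_proj", "v_proj"],  # example attention projections
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # standard PEFT helper to inspect adapter size
```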