Attempt to fix VLM gradient enabling #41993
base: main
Conversation
[For maintainers] Suggested jobs to run (before merge): run-slow: qwen2_vl

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
```python
if vision_module is not None:
    for parameter in vision_module.parameters():
        parameter.requires_grad = True
```
From my understanding of peft#2880, the problem is mainly that the entry point of the model doesn't require gradients (not as a trainable parameter, just for gradient checkpointing), so targeting modules after that entry point doesn't work with reentrant gradient checkpointing. Isn't setting all vision parameters to requires_grad=True masking the changes done in enable_input_require_grads, so the test passes regardless of what that helper function does? Maybe targeting something that is clearly not an input, for example something resembling an attention layer, would be better?
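To make the failure mode concrete, here is a generic torch sketch (not this PR's code): with use_reentrant=True, a checkpointed segment whose inputs do not require grad produces a detached output, so parameters inside the segment never receive gradients; flagging the segment's input is exactly what enable_input_require_grads is for.

```python
import torch
from torch.utils.checkpoint import checkpoint

block = torch.nn.Linear(4, 4)
x = torch.randn(2, 4)  # frozen input: requires_grad is False

# Reentrant checkpointing only builds a backward graph when at least one
# input to the checkpointed segment requires grad. With a frozen input,
# the output is detached and the block's weights never receive gradients.
out = checkpoint(block, x, use_reentrant=True)
print(out.requires_grad)  # False

# Flagging the segment input (what enable_input_require_grads does via a
# forward hook on the input embeddings) restores gradient flow.
x.requires_grad_(True)
out = checkpoint(block, x, use_reentrant=True)
out.sum().backward()
print(block.weight.grad is not None)  # True
```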
I see, hmm - I followed the implementation of idefics2/smolvlm, as I remembered they faced this issue at the time. You're right that this isn't necessary; we register twice. The lowest-module trick should work though, and I'm not sure targeting an attention layer would work either. Currently, @BenjaminBossan's script outputs grad norms properly on this branch with gradient checkpointing enabled and PEFT disabled, so it seems to do the trick?
no GC:
```
{'loss': 9.4971, 'grad_norm': 23.421083450317383, 'learning_rate': 2e-05, 'epoch': 0.33}
{'loss': 7.9526, 'grad_norm': 675.1868896484375, 'learning_rate': 1.866666666666667e-05, 'epoch': 0.67}
```

with GC:
```
{'loss': 9.4971, 'grad_norm': 23.421083450317383, 'learning_rate': 2e-05, 'epoch': 0.33}
{'loss': 7.9526, 'grad_norm': 675.1868896484375, 'learning_rate': 1.866666666666667e-05, 'epoch': 0.67}
```

In either case, agreed the double registering is useless, will remove!
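As a toy analogue of that check (this is not the referenced script, just a minimal self-contained sketch), the following verifies that gradients with reentrant checkpointing match the plain forward/backward once the segment input requires grad:

```python
import copy
import torch
from torch.utils.checkpoint import checkpoint

torch.manual_seed(0)
block = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1))
block_gc = copy.deepcopy(block)
x = torch.randn(4, 8)

# Plain forward/backward.
block(x).sum().backward()

# Checkpointed forward/backward; the input must require grad for the
# reentrant variant, which is what the fix guarantees for the vision tower.
x_gc = x.clone().requires_grad_(True)
checkpoint(block_gc, x_gc, use_reentrant=True).sum().backward()

for p, q in zip(block.parameters(), block_gc.parameters()):
    assert torch.allclose(p.grad, q.grad)
print("grads identical with and without GC")
```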
Yeah, I think the implementation is fine. I'm just worried that the test is masking the behavior of the fix and is therefore not honest enough. Sorry if I didn't make that clear.
No that's fair, I'll revamp the test for a narrower scope!
I think this solution works only for VLMs and also depends a lot on how the vision model is named. I'm sure we listed all possible names, but new models can get creative with it.

So I'm thinking that we could potentially make it work OOTB for all MLLMs (audio/vision/omni) by checking for each PreTrainedModel within the model and then setting grads on that model's inputs (model.get_input_embeddings()), as sketched below.

We use a similar trick when setting attention implementations, where we check for PreTrainedModels, so it could be a good option. WDYT?
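A minimal sketch of that idea (the helper name is hypothetical, and not every sub-model exposes input embeddings, hence the guard):

```python
import torch
from transformers import PreTrainedModel

def make_inputs_require_grads(module, inputs, output):
    # Forward hook: flag the sub-model's embedding output so reentrant
    # gradient checkpointing sees an input that requires grad.
    output.requires_grad_(True)

def enable_input_require_grads_everywhere(model: PreTrainedModel):
    # Hypothetical helper: walk every PreTrainedModel inside the model
    # (text backbone, vision tower, audio tower, ...) instead of guessing
    # attribute names like `vision_tower` / `visual` / `vision_model`.
    hooks = []
    for submodule in model.modules():
        if isinstance(submodule, PreTrainedModel):
            try:
                embeddings = submodule.get_input_embeddings()
            except NotImplementedError:
                continue  # some sub-models don't expose input embeddings
            if embeddings is not None:
                hooks.append(embeddings.register_forward_hook(make_inputs_require_grads))
    return hooks
```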
Thanks, yes it's a far less brittle option. There are a few (really a few, and hopefully zero after v5) modules that were just plain nn.Modules rather than PreTrainedModels.
What does this PR do?
As per title. Linked to huggingface/peft#2880.
Follows more or less closely the already existing implementations for idefics2/idefics3 and smolvlm, trying to cover several types of VLMs (the vision modules are named differently across the library).
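For reference, here is an approximate sketch of the idefics2/smolvlm-style override this PR follows (the exact upstream code may differ, and the `vision_model` attribute name varies across architectures): both the text input embeddings and the lowest leaf module of the vision tower are hooked, so reentrant gradient checkpointing always sees an input flagged to require grad.

```python
def enable_input_require_grads(self):
    # Approximate sketch of the idefics2/smolvlm-style override.
    def make_inputs_require_grads(module, inputs, output):
        output.requires_grad_(True)

    def get_lowest_module(module):
        # Descend into the first child until reaching a leaf module,
        # i.e. the effective entry point of the vision tower.
        children = list(module.children())
        return get_lowest_module(children[0]) if children else module

    self._text_require_grads_hook = self.get_input_embeddings().register_forward_hook(
        make_inputs_require_grads
    )
    self._vision_require_grads_hook = get_lowest_module(self.vision_model).register_forward_hook(
        make_inputs_require_grads
    )
```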