Closed
Description
I continued pretraining a Qwen2.5 0.5B model. When I load the adapter, merge it, and then run inference, the output looks good. But when I merge the adapter and save the model in 16-bit or 4-bit using the Unsloth saving strategy, and then load the saved model for inference, the output is not good — the performance degrades. What is the reason for that?