I used `model.save_pretrained_merged(lora_path, tokenizer, save_method="merged_16bit")` to save the base model merged with the adapter, but when I test it, the model only answers in the exact format of the input data I trained on, and I can't get a normal inference process. What is the reason for this?
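
A minimal sketch of the reload-and-generate flow, assuming the merged checkpoint was saved to a hypothetical `lora_path` directory. The key point is that the inference prompt must use the same template the adapter was fine-tuned on (an Alpaca-style template is assumed here purely for illustration); a mismatched prompt often makes the model echo the training data format instead of answering:

```python
from unsloth import FastLanguageModel

# Reload the merged 16-bit checkpoint ("lora_path" is a placeholder).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="lora_path",
    max_seq_length=2048,
    load_in_4bit=False,
)
FastLanguageModel.for_inference(model)  # switch Unsloth to inference mode

# Assumption: an Alpaca-style template; replace with whatever template
# was actually used during fine-tuning.
prompt = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\nSummarize the following text.\n\n"
    "### Input:\nUnsloth merges LoRA adapters into the base weights.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```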
Also, I tested the Llama 3 8B model downloaded from Unsloth on my specific task, and its answers were not as good as the original Llama 3 8B. Will its performance differ from the original Llama 3?
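
One way to check this directly (not from the issue itself) is a side-by-side comparison of greedy outputs from the Unsloth mirror and the original Meta checkpoint on the same prompt. The model IDs below are the public Hugging Face ones; access to `meta-llama/Meta-Llama-3-8B` requires accepting Meta's license:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate(model_id: str, prompt: str) -> str:
    # Load tokenizer and model, then decode a short greedy completion.
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    return tok.decode(out[0], skip_special_tokens=True)

prompt = "Explain LoRA fine-tuning in one sentence."
for mid in ("unsloth/llama-3-8b", "meta-llama/Meta-Llama-3-8B"):
    print(mid, "->", generate(mid, prompt))
```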