[Bug] Unsloth cannot convert fine-tuned model based on unsloth/phi-4 to GGUF because embedded llama.cpp does not support the architecture LlamaModel #2365
Describe the bug
Unsloth cannot convert a model fine-tuned from unsloth/phi-4 to GGUF, because the embedded llama.cpp does not support the architecture LlamaModel. That architecture name was written into the model's config by unsloth/phi-4 as part of a bug fix.
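As I understand it, llama.cpp's HF-to-GGUF converter dispatches on the `architectures` entry in the model's `config.json`, and it recognizes `LlamaForCausalLM` but not the bare `LlamaModel` class name. A possible workaround (a sketch only, not a confirmed fix; `patch_architecture` is a hypothetical helper name) is to rewrite that field before running the conversion:

```python
import json
from pathlib import Path

def patch_architecture(config_path, old="LlamaModel", new="LlamaForCausalLM"):
    """Rewrite the `architectures` field of a Hugging Face config.json.

    Hypothetical workaround: llama.cpp's converter appears to key off this
    field, so renaming the unsupported class may let conversion proceed.
    """
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["architectures"] = [
        new if arch == old else arch
        for arch in config.get("architectures", [])
    ]
    path.write_text(json.dumps(config, indent=2))
    return config["architectures"]
```

After patching, re-run the GGUF conversion on the saved model directory. This is untested against phi-4 specifically; whether the checkpoint is otherwise weight-compatible with the `LlamaForCausalLM` loader is an assumption.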
Environment Setup:
OS: [e.g., Ubuntu 20.04]
Python Version: [e.g., 3.10]
Frameworks/Libraries: unsloth
Colab / script: tried both; same result, a llama.cpp error (no support for LlamaModel).
Model Details:
Model ID: unsloth/phi-4
Model Configuration: [e.g., lora params, quantization, etc.]