fix: improved error handling when llama.cpp build fails #2358
Hi!
When I tried to run save_pretrained_gguf() I got multiple different errors, all related to a failing llama.cpp build. As you can see in this pull request, curl is now enabled by default for llama.cpp, which for me results in:
(The system is Ubuntu 24.04 with the newest versions of unsloth and llama.cpp. Curl is installed.)
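For context, this is only my guess at the root cause: with curl enabled, the llama.cpp CMake configure needs the libcurl development files, and having the curl CLI installed is not enough. A minimal sketch of a pre-flight check that could surface this earlier (the function name and messages are hypothetical, not code from unsloth):

```python
import shutil
import subprocess

def check_libcurl_dev() -> None:
    """Hypothetical pre-flight check: warn before the llama.cpp build starts
    if the libcurl development files are missing (the curl CLI alone is not enough)."""
    # pkg-config reports whether the libcurl headers / .pc file are installed
    if shutil.which("pkg-config") is None:
        return  # cannot check; let the build proceed and report its own error
    result = subprocess.run(["pkg-config", "--exists", "libcurl"], capture_output=True)
    if result.returncode != 0:
        raise RuntimeError(
            "llama.cpp is built with curl support by default, but the libcurl "
            "development package was not found. On Ubuntu, try:\n"
            "    sudo apt-get install libcurl4-openssl-dev\n"
            "or configure llama.cpp with -DLLAMA_CURL=OFF."
        )
```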
So I then installed llama.cpp manually, but got this error:
That's right, because it is still under /llama.cpp/build/bin. As save.py shows, all llama-* files get copied into llama.cpp/ when the build runs successfully. In my case that didn't happen, and I got some misleading error messages instead. I hope this pull request reduces the number of people running into these issues, or at least gives them a clearer direction for debugging!
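To illustrate the kind of clearer handling I mean, here is a minimal sketch, not the actual code in this PR; the paths and binary name are assumptions based on a default CMake build:

```python
import subprocess
from pathlib import Path

def build_llama_cpp(llama_dir: str = "llama.cpp") -> Path:
    """Hypothetical sketch: build llama.cpp and fail loudly (with the build log)
    if the expected binaries never appear in build/bin."""
    build_dir = Path(llama_dir) / "build"
    result = subprocess.run(
        ["cmake", "--build", str(build_dir), "--config", "Release", "-j"],
        capture_output=True,
        text=True,
    )
    quantize = build_dir / "bin" / "llama-quantize"  # assumed binary name
    if result.returncode != 0 or not quantize.exists():
        # Surface the real build failure instead of a later "file not found"
        raise RuntimeError(
            "llama.cpp failed to build, so the llama-* binaries were never "
            "copied out of build/bin. Build output:\n"
            + result.stdout + result.stderr
        )
    return quantize
```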