Replies: 3 comments 1 reply
Just updating the requirements.txt file to use the latest version of llama-cpp-binaries (v0.83.0, https://github.com/oobabooga/llama-cpp-binaries/releases) and applying it (`pip install -r requirements.txt`) makes it work flawlessly.
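For anyone following along, the edit-and-reinstall step can be sketched like this. This is a minimal illustration only: it assumes requirements.txt pins llama-cpp-binaries via a release wheel URL that embeds the version string (the `example.whl` filename below is a placeholder, not a real wheel name):

```shell
# Hypothetical requirements.txt with the old pinned release (placeholder wheel name):
printf 'llama_cpp_binaries @ https://github.com/oobabooga/llama-cpp-binaries/releases/download/v0.74.0/example.whl\n' > requirements.txt

# Bump every occurrence of the old version tag to the new release:
sed -i 's/v0\.74\.0/v0.83.0/g' requirements.txt

cat requirements.txt
# Then apply the updated pin:
#   pip install -r requirements.txt
```

Your actual requirements.txt may pin several wheels (per CUDA version and platform), so check that every llama-cpp-binaries line gets bumped.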
Is that Windows + cu124 + NVIDIA? Both editing to 0.83 and switching to the dev branch pretty consistently give me `Error loading the model with llama.cpp: Server process terminated unexpectedly with exit code:` for models that load just fine in main @ 0.74.
Follow-up: the current main branch (as of the Mar 7 dev -> main pull) is working great for me on both old models and 3.5, and I'm finding 3.5 to be a solid upgrade over 3 VL for image captioning tasks.
Any chance of a llama.cpp update to pick up support for Qwen 3.5?