Prerequisites
Feature Description
Please add support for the InternVL3_5 model family. Some variants are MoE (some are based on Qwen3):
https://huggingface.co/OpenGVLab/InternVL3_5-30B-A3B
https://huggingface.co/OpenGVLab/InternVL3_5-241B-A28B
According to InternLM Lab's report, it is currently SOTA among open-source VL models.
GGUF weights are available here:
https://huggingface.co/QuantStack/InternVL3_5-30B-A3B-gguf/tree/main
It would be great if ik_llama.cpp could support it.
Thank you!
Motivation
It can be run in mainline llama.cpp (tested), but the speed is not optimized there; it could be greatly improved by ik_llama.cpp's hybrid CPU+GPU inference.
Possible Implementation
No response