GGUF file for quick testing of a work-in-progress llama.cpp implementation of Qwen2.5 VL.

You can find the latest version of the implementation here. (Don't forget to switch to the qwen25-vl branch.)

You can also follow the llama.cpp draft PR here.
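
As a rough sketch of how this file might be exercised for quick testing: the commands below follow the standard llama.cpp CMake build flow and the existing Qwen2-VL CLI example. The fork URL placeholder, the GGUF/mmproj filenames, and the `llama-qwen2vl-cli` binary name and flags are assumptions and may differ on the WIP qwen25-vl branch.

```sh
# Build the WIP branch (standard llama.cpp CMake flow).
# <fork-url> is a placeholder for the fork linked above.
git clone <fork-url> llama.cpp && cd llama.cpp
git checkout qwen25-vl
cmake -B build && cmake --build build --config Release

# Run the GGUF with an image prompt. Binary name, flags, and the
# model/mmproj filenames are placeholders based on the Qwen2-VL example;
# check the branch/PR for the actual usage.
./build/bin/llama-qwen2vl-cli \
    -m qwen2.5-vl-3b.gguf \
    --mmproj mmproj-qwen2.5-vl-3b.gguf \
    --image test.jpg \
    -p "Describe this image."
```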

Model details: GGUF format, 3B params, qwen2vl architecture.