Qwen2.5-VL - GGUF

This model was finetuned and converted to GGUF format using Unsloth.

Example usage:

  • For text-only LLMs: llama-cli -hf repo_id/model_name -p "why is the sky blue?"
  • For multimodal models: llama-mtmd-cli -m model_name.gguf --mmproj mmproj_file.gguf
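
For instance, an image-description run using the files listed below might look like this (a sketch, assuming the GGUF files sit in the current directory and ./photo.jpg is your input image):

  llama-mtmd-cli -m Qwen2.5-VL-7B-Instruct.Q4_K_M.gguf \
      --mmproj Qwen2.5-VL-7B-Instruct.BF16-mmproj.gguf \
      --image ./photo.jpg -p "Describe this image."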

Available model files:

  • Qwen2.5-VL-7B-Instruct.Q4_K_M.gguf
  • Qwen2.5-VL-7B-Instruct.Q5_K_M.gguf
  • Qwen2.5-VL-7B-Instruct.Q8_0.gguf
  • Qwen2.5-VL-7B-Instruct.BF16-mmproj.gguf (vision projector; pair it with one of the quants above)
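
To fetch a single quant rather than the whole repository, the Hugging Face CLI works; a minimal sketch, with repo_id/model_name standing in for this repository's actual id:

  huggingface-cli download repo_id/model_name Qwen2.5-VL-7B-Instruct.Q4_K_M.gguf --local-dir .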

⚠️ Ollama Note for Vision Models

Important: Ollama currently does not support separate mmproj files for vision models.

To create an Ollama model from this vision model:

  1. Place the Modelfile in the same directory as the finetuned bf16 merged model (a minimal sketch follows these steps)
  2. Run: ollama create model_name -f ./Modelfile (Replace model_name with your desired name)

This will create a unified bf16 model that Ollama can use.
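
For reference, a minimal Modelfile sketch. The FROM path is an assumption: point it at the merged bf16 model (Ollama's FROM directive accepts a safetensors model directory or a single GGUF file).

  # Modelfile (minimal sketch; FROM path is an assumption)
  FROM ./merged_bf16_model

Once created, run the model interactively:

  ollama run model_name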

Model details:

  • Format: GGUF
  • Model size: 8B params
  • Architecture: qwen2vl