Model - Pixtral 12B (GGUF) (Ollama Patched)
Description:
This is an Ollama-patched version of Pixtral-12B (GGUF) with working projector (mmproj) files. No modifications, edits, or extra configuration are required to use this model with Ollama; it works natively! Both vision and text work with Ollama. (^.^)
Model Updates (Last updated: 29th of October, 2025)
Completed updates [as of: 28th of October, 2025]:
- Additional quantized GGUF model files added
- Fixed Q8_0 GGUF file template & re-uploaded
- Additional data added to ModelCard (this page)
Currently in-progress updates:
- Adding more Quantized and iMatrix-Quantized GGUF files
- Additional data added to ModelCard (this page)
How to run this Model using Ollama
You can run this model with the "ollama run" command. Simply copy and paste one of the commands from the list below into your console, terminal, or PowerShell window.
| Quant Type | File Size | Command |
|---|---|---|
| Q2_K | 4.9 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q2_K |
| Q3_K_S | 5.6 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q3_K_S |
| Q3_K_M | 6.2 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q3_K_M |
| Q4_K_S | 7.2 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q4_K_S |
| Q4_K_M | 7.6 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q4_K_M |
| Q5_K_S | 8.6 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q5_K_S |
| Q5_K_M | 8.8 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q5_K_M |
| Q6_K | 10.2 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q6_K |
| Q8_0 | 13.1 GB | ollama run hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q8_0 |
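Beyond the CLI, a running Ollama instance also exposes a local REST API, so the vision side of this model can be used programmatically. Below is a minimal sketch that sends a base64-encoded image to Ollama's `/api/generate` endpoint; the default endpoint URL and the Q4_K_M tag are assumptions here, so swap in whichever quant from the table above you actually pulled.

```python
import base64
import json
from urllib import request

# Default Ollama endpoint (assumption: local install on the standard port)
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    Vision-capable models accept base64-encoded images in the "images" list.
    """
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # return one complete JSON response instead of a stream
    }


def describe_image(
    path: str,
    model: str = "hf.co/EnlistedGhost/Pixtral-12B-Ollama-GGUF:Q4_K_M",
) -> str:
    """Send an image file to a local Ollama server and return the model's reply."""
    with open(path, "rb") as f:
        payload = build_payload(model, "Describe this image.", f.read())
    req = request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Text-only prompts work the same way with the "images" key omitted.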
Intended Use
Same as the original Pixtral-12B model.
Out-of-Scope Use
Same as the original Pixtral-12B model.
Bias, Risks, and Limitations
Same as the original Pixtral-12B model.
Training Details
Training sets and data are from:
(This model is a direct off-shoot/descendant of the above-mentioned models.)
Evaluation
- This model has NOT been evaluated in any form, scope or type of method.
- !!! USE AT YOUR OWN RISK !!!
- !!! NO WARRANTY IS PROVIDED OF ANY KIND !!!
Citation (Original Paper)
[MistralAI Pixtral-12B Original Paper]
Detailed Release Information
- Originally Developed by: [mistralai]
- Further Developed by: [mistral-community]
- MMPROJ (Vision) Quantized by: [ggml-org]
- Quantized for GGUF by: [ggml-org], [mradermacher], [bartowski]
- Modified for Ollama by: [EnlistedGhost]
- Released on Huggingface by: [EnlistedGhost]
- Model type & format: [Quantized/GGUF]
- License type: [Apache-2.0]
Attributions (Credits)
A big thank-you to the following people and community members! Their contributions are what made this release possible!
Important Notice: This is NOT a copy/paste release; the files and information used were altered to work properly with the Ollama software. The result is the first publicly available Pixtral-12B model that runs natively on Ollama.
Model Card Authors and Contact
Model tree for EnlistedGhost/Pixtral-12B-Ollama-GGUF
- Base model: [mistral-community/pixtral-12b]