Llama-MedX v3.2 (GGUF)

Quantized build of the Llama-medx_v3.2 medical assistant model packaged for Ollama / llama.cpp runtimes. This export includes the Modelfile generated from the original Ollama registry entry and a GGUF binary derived from the upstream Hugging Face release.
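Not part of the upstream card, but a quick way to sanity-check a downloaded GGUF binary is to read its fixed header: a 4-byte "GGUF" magic, a little-endian uint32 version, then uint64 tensor and metadata key-value counts. A minimal Python sketch (the file path is a placeholder, not a name from this release):

```python
import struct

def read_gguf_header(path: str) -> dict:
    """Read the GGUF magic, version, and counts from a file header (little-endian)."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        version, = struct.unpack("<I", f.read(4))          # format version
        tensor_count, kv_count = struct.unpack("<QQ", f.read(16))
    return {"version": version, "tensor_count": tensor_count, "kv_count": kv_count}

# Hypothetical path to the downloaded blob:
# print(read_gguf_header("llama-medx_v32.gguf"))
```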

Variant

Variant   Size      Blob
latest    1.88 GB   sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff
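To confirm a download is intact, you can stream the file through SHA-256 and compare against the blob digest above. A small sketch, assuming a local filename that is not part of this release:

```python
import hashlib

def sha256_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 and return it in Ollama's 'sha256-<hex>' blob style."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return "sha256-" + h.hexdigest()

# Hypothetical usage against the digest listed in the table:
# expected = "sha256-dde5aa3fc5ffc17176b5e8bdc82f587b24b2678c6c66101bf7da77af9f7ccdff"
# assert sha256_digest("llama-medx_v32.gguf") == expected
```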

Usage with Ollama

ollama create llama-medx-v32 -f modelfiles/llama-medx_v32--latest.Modelfile
ollama run llama-medx-v32
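Once created, the model is also reachable through Ollama's local HTTP API (default port 11434). A hedged Python sketch that builds a non-streaming request to the /api/generate endpoint; the model name matches the `ollama create` command above, and the prompt is illustrative only:

```python
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "llama-medx-v32") -> urllib.request.Request:
    """Build a non-streaming request for a locally running Ollama server."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Requires `ollama serve` to be running locally:
# req = build_generate_request("List common symptoms of iron deficiency.")
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```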

Source

Originally published on my Ollama profile: https://ollama.com/richardyoung/llama-medx_v32

Model details

Format: GGUF
Model size: 3B params
Architecture: llama