These models were converted from meta-llama/Llama-3.2-3B and quantized with llama.cpp.
Available quantizations:

- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit
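The quantized files can be run with any GGUF-compatible runtime. A minimal sketch using llama.cpp's `llama-cli`; the filename here is an assumption (check this repo's file list for the exact name of the variant you download):

```shell
# Hypothetical filename for the 4-bit variant -- substitute the actual
# .gguf file from this repository.
llama-cli -m Llama-3.2-3B.Q4_K_M.gguf \
  -p "The capital of France is" \
  -n 32
```

Lower-bit variants trade some output quality for a smaller memory footprint; the 16-bit file is closest to the original model.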