Darkhn-Quants / M3.2-24B-Animus-V7.0-GGUF
Tags: GGUF · llama.cpp · imatrix · conversational · License: mit
Files and versions (branch: main)
110 GB · 1 contributor · History: 54 commits
Latest commit: Update README.md by Darkhn (909a93f), 4 months ago
File                              | Size      | Last commit message                                      | Updated
.gitattributes                    | 3.23 kB   | Add IQ1_M GGUF quant: M3.2-24B-Animus-V7.0-IQ1_M.gguf    | 4 months ago
M3.2-24B-Animus-V7.0-IQ4_NL.gguf  | 13.5 GB   | Add IQ4_NL GGUF quant: M3.2-24B-Animus-V7.0-IQ4_NL.gguf  | 4 months ago
M3.2-24B-Animus-V7.0-Q2_K.gguf    | 8.89 GB   | Add Q2_K GGUF quant: M3.2-24B-Animus-V7.0-Q2_K.gguf      | 4 months ago
M3.2-24B-Animus-V7.0-Q3_K_L.gguf  | 12.4 GB   | Add Q3_K_L GGUF quant: M3.2-24B-Animus-V7.0-Q3_K_L.gguf  | 4 months ago
M3.2-24B-Animus-V7.0-Q4_K_M.gguf  | 14.3 GB   | Add Q4_K_M GGUF quant: M3.2-24B-Animus-V7.0-Q4_K_M.gguf  | 4 months ago
M3.2-24B-Animus-V7.0-Q5_K_M.gguf  | 16.8 GB   | Add Q5_K_M GGUF quant: M3.2-24B-Animus-V7.0-Q5_K_M.gguf  | 4 months ago
M3.2-24B-Animus-V7.0-Q6_K.gguf    | 19.3 GB   | Add Q6_K GGUF quant: M3.2-24B-Animus-V7.0-Q6_K.gguf      | 4 months ago
M3.2-24B-Animus-V7.0-Q8_0.gguf    | 25.1 GB   | Add Q8_0 GGUF quant: M3.2-24B-Animus-V7.0-Q8_0.gguf      | 4 months ago
README.md                         | 593 Bytes | Update README.md                                         | 4 months ago
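Since these are GGUF quants intended for llama.cpp-compatible runtimes, a single file can be fetched with the huggingface_hub client and loaded locally. The sketch below uses the Q4_K_M file from the listing above and assumes the llama-cpp-python bindings are installed; the context size, GPU offload, prompt, and generation settings are illustrative, not recommendations from this repo.

```python
# Minimal sketch: download one quant from this repo and run it with
# llama-cpp-python (assumes `pip install huggingface_hub llama-cpp-python`).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo id and filename are taken from the file listing above.
model_path = hf_hub_download(
    repo_id="Darkhn-Quants/M3.2-24B-Animus-V7.0-GGUF",
    filename="M3.2-24B-Animus-V7.0-Q4_K_M.gguf",
)

# Context size and GPU offload are illustrative; tune for your hardware.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Any of the other quants (Q2_K through Q8_0, or IQ4_NL) can be substituted by changing the filename; smaller quants trade output quality for lower memory use.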