bluuwhale/infinity-franken-GGUF-IQ-Imatrix
GGUF
No model card
Downloads last month: 22
Model size: 11B params
Architecture: llama
Quantization    Bits     Size
IQ3_S           3-bit    4.69 GB
IQ3_M           3-bit    4.85 GB
Q4_K_M          4-bit    6.46 GB
Q5_K_S          5-bit    7.4 GB
Q5_K_M          5-bit    7.6 GB
Q6_K            6-bit    8.81 GB
Q8_0            8-bit    11.4 GB
F16             16-bit   21.5 GB
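The file sizes above can be cross-checked against the reported parameter count. A minimal sketch, assuming the listed sizes are decimal gigabytes and the model has exactly 11e9 parameters (the page reports "11B params"; both are approximations):

```python
# Rough effective bits-per-weight for each quant listed above,
# computed as (file size in bytes * 8) / parameter count.
# Sizes and the ~11B param count are taken from this page; treating
# "GB" as decimal gigabytes is an assumption.
PARAMS = 11e9

sizes_gb = {
    "IQ3_S": 4.69, "IQ3_M": 4.85, "Q4_K_M": 6.46,
    "Q5_K_S": 7.4, "Q5_K_M": 7.6, "Q6_K": 8.81,
    "Q8_0": 11.4, "F16": 21.5,
}

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    return size_gb * 1e9 * 8 / params

for name, gb in sizes_gb.items():
    print(f"{name}: ~{bits_per_weight(gb):.2f} bits/weight")
```

Under these assumptions F16 comes out near 16 bits/weight and Q4_K_M near 4.7, which matches the nominal bit widths in the table (K-quants run slightly above their nominal size because some tensors are kept at higher precision).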
Inference Providers
This model isn't deployed by any Inference Provider.
Collection including bluuwhale/infinity-franken-GGUF-IQ-Imatrix:
GGUF Quantize Model 🖥️ (Collection: GGUF Model Quantize Weight) · 5 items · Updated Aug 5, 2024