bluuwhale/Jellywibble-lora_120k_pref_data_ep2-GGUF
Tags: GGUF · llama · conversational
GGUF Quant of Jellywibble/lora_120k_pref_data_ep2

Both static and imatrix (imat) quants are provided.
Downloads last month: 29
Model size: 8B params
Architecture: llama
Available quants:

Quant   | Bits   | File size
--------|--------|----------
Q4_K_M  | 4-bit  | 4.92 GB
Q5_K_M  | 5-bit  | 5.73 GB
Q6_K    | 6-bit  | 6.6 GB
Q8_0    | 8-bit  | 8.54 GB
F16     | 16-bit | 16.1 GB
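As a rough sanity check, the file sizes above can be converted to effective bits per weight. This is a minimal sketch, assuming the "8B params" figure means roughly 8e9 parameters and that sizes are decimal gigabytes; the K-quants land slightly above their nominal bit width because they mix precisions across tensors.

```python
# Approximate bits per weight implied by each GGUF file size in the table.
# Assumption: ~8e9 parameters, and 1 GB = 1e9 bytes (decimal, as reported).
PARAMS = 8e9

SIZES_GB = {
    "Q4_K_M": 4.92,
    "Q5_K_M": 5.73,
    "Q6_K": 6.6,
    "Q8_0": 8.54,
    "F16": 16.1,
}

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Convert a file size in decimal GB to bits stored per model parameter."""
    return size_gb * 1e9 * 8 / params

for name, gb in SIZES_GB.items():
    print(f"{name}: ~{bits_per_weight(gb):.1f} bits/weight")
```

The F16 file works out to about 16 bits per weight, as expected for an unquantized half-precision dump, which suggests the table sizes are internally consistent.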
Part of the collection "GGUF Quantize Model 🖥️" (GGUF Model Quantize Weight): 5 items, updated Aug 5, 2024.