QuantFactory/NaturalLM-GGUF
This is a quantized version of qingy2019/NaturalLM, created using llama.cpp.
Original Model Card
Uploaded model
- Developed by: qingy2019
- License: apache-2.0
- Finetuned from model: unsloth/mistral-nemo-base-2407-bnb-4bit
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Downloads last month: 98
Hardware compatibility
Quantizations available: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
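A quick way to try one of these GGUF quantizations locally is llama.cpp's CLI. This is a minimal sketch: the GGUF file name below is illustrative, not confirmed — check the repository's file list for the exact name of the quantization you want.

```shell
# Download a 4-bit GGUF file from the repo
# (file name is an assumption; verify it in the repo's file list)
huggingface-cli download QuantFactory/NaturalLM-GGUF \
  NaturalLM.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI (requires a built llama.cpp checkout)
./llama-cli -m NaturalLM.Q4_K_M.gguf -p "Hello, world" -n 128
```

Lower-bit files (2-bit, 3-bit) trade quality for memory; the 8-bit file is closest to the original model at the largest size.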
