1B AWQ Collection
1 billion parameter models, quantized with AWQ. 5 items.
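A minimal loading sketch, assuming the checkpoints in this collection work with Hugging Face transformers and the autoawq package (transformers can load AWQ-quantized weights directly). The repo id is a placeholder, since the collection's item names are not listed here.

# Load an AWQ-quantized 1B checkpoint with transformers.
# Requires: pip install transformers autoawq
# The repo id below is hypothetical, not a confirmed item of this collection.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/rho-math-1b-v0.1-AWQ"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the quantized weights on GPU if available
)

prompt = "What is 12 * 7?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))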
Rho-1 base models employ Selective Language Modeling (SLM) for pretraining: rather than computing the loss on every token, SLM selectively trains on the clean and useful tokens that align with the desired distribution. A sketch of the selection step follows the entry below.
Base model: microsoft/rho-math-1b-v0.1
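A minimal sketch of the SLM selection step, assuming the Rho-1 recipe of scoring each token by its excess loss (training-model loss minus reference-model loss) and backpropagating only through the top-scoring fraction; the function name and keep_ratio value are illustrative, not taken from the released code.

import torch
import torch.nn.functional as F

def slm_loss(train_logits, ref_logits, labels, keep_ratio=0.6):
    """Cross-entropy averaged over only the highest-excess-loss tokens."""
    vocab = train_logits.size(-1)
    # Per-token loss under the training model (kept in the autograd graph).
    train_loss = F.cross_entropy(
        train_logits.view(-1, vocab), labels.view(-1), reduction="none")
    with torch.no_grad():
        # Per-token loss under the frozen reference model.
        ref_loss = F.cross_entropy(
            ref_logits.view(-1, vocab), labels.view(-1), reduction="none")
        # Excess loss is high for tokens the reference model deems
        # clean/learnable but the training model still gets wrong.
        excess = train_loss.detach() - ref_loss
        k = max(1, int(keep_ratio * excess.numel()))
        selected = torch.topk(excess, k).indices
    # Train only on the selected tokens; the rest contribute no gradient.
    return train_loss[selected].mean()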