cmarkea/Meta-Llama-3.1-70B-Instruct-4bit • Text Generation • 37B
A large model quantized to 4 bits, with post-quantization performance very close to the original model, allowing it to run on modest infrastructure.
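
For reference, a minimal sketch of loading the quantized checkpoint with the Hugging Face transformers library. The exact quantization backend the repo uses (e.g. bitsandbytes) and the memory figures in the comments are assumptions, not details stated here; `device_map="auto"` simply spreads the layers across whatever GPUs are visible.

```python
# Minimal sketch: load the 4-bit checkpoint with transformers.
# Assumes the repo ships pre-quantized weights (e.g. bitsandbytes) and that
# enough GPU memory is available across the visible devices (roughly 40 GB
# for a 70B model at 4-bit is a common rule of thumb, not a stated figure).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cmarkea/Meta-Llama-3.1-70B-Instruct-4bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",           # spread layers across available GPUs
    torch_dtype=torch.bfloat16,  # compute dtype for non-quantized tensors
)

# Llama 3.1 Instruct uses a chat template; build the prompt through it.
messages = [{"role": "user", "content": "Summarize quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```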