TEQ: Trainable Equivalent Transformation for Quantization of LLMs • Paper 2310.10944 • Published Oct 17, 2023
Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs • Paper 2309.05516 • Published Sep 11, 2023