Unbelievable! Run 70B LLM Inference on a Single 4GB GPU with This NEW Technique
Nov 30, 2023