Forgetting Transformer: Softmax Attention with a Forget Gate (arXiv:2503.02130, published Mar 3, 2025)
L^2M: Mutual Information Scaling Law for Long-Context Language Modeling (arXiv:2503.04725, published Mar 6, 2025)
Hybrid Architectures for Language Models: Systematic Analysis and Design Insights (arXiv:2510.04800, published Oct 2025)