Falcon 2: An 11B parameter pretrained language model and VLM, trained on over 5000B tokens and 11 languages May 24, 2024 • 27
Investigating Regularization of Self-Play Language Models Paper • 2404.04291 • Published Apr 4, 2024 • 1
Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance Paper • 2507.22448 • Published Jul 30 • 65
Performance Gaps in Multi-view Clustering under the Nested Matrix-Tensor Model Paper • 2402.10677 • Published Feb 16, 2024
Do Vision and Language Encoders Represent the World Similarly? Paper • 2401.05224 • Published Jan 10, 2024
NeurIPS 2025 E2LM Competition : Early Training Evaluation of Language Models Paper • 2506.07731 • Published Jun 9 • 2