AI & ML interests
Large language models
Vision encoders distilled from DINOv3 and SigLIP2 (MoE & Dense). CVPR 2026.
Falcon-H1-Tiny: A series of extremely small, yet powerful language models redefining capabilities at small scale
- Falcon-H1-Tiny (Space) • 📝 41 • Generate text using extremely small yet powerful language models
- Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers • Paper • 2601.04890 • Published • 44
- tiiuae/Falcon-H1-Tiny-90M-Instruct • Text Generation • 91.1M • Updated • 1.98k • 44
- tiiuae/Falcon-H1-Tiny-90M-Instruct-GGUF • 91.1M • Updated • 2.77k • 16
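The Falcon-H1-Tiny checkpoints above are ordinary Hub models, so one plausible way to try them is the `transformers` text-generation pipeline. This is a minimal sketch, not an official usage snippet: the model id comes from the listing above, but the chat-format helper and generation settings are illustrative assumptions.

```python
def build_chat(user_message: str) -> list:
    """Wrap a single user turn in the chat-message format that
    transformers text-generation pipelines accept for instruct models."""
    return [{"role": "user", "content": user_message}]


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Generate a reply with the 90M instruct checkpoint listed above.
    Downloads the model from the Hub on first call."""
    from transformers import pipeline  # deferred import: heavy, needs network

    generator = pipeline(
        "text-generation",
        model="tiiuae/Falcon-H1-Tiny-90M-Instruct",
    )
    out = generator(build_chat(prompt), max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]
```

For example, `generate("Name one use of tiny language models.")` would pull the ~90M checkpoint and return a short completion; the GGUF variant in the same collection targets llama.cpp-style runtimes instead.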
Falcon-H1: Family of Hybrid-Head Language Models (Transformer-SSM), including 0.5B, 1.5B, 1.5B-Deep, 3B, 7B, and 34B (pretrained & instruction-tuned)
- Falcon H1 Playground (Space) • 🦅 33 • Chat with Falcon-H1 language models
- Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance • Paper • 2507.22448 • Published • 71
- Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers • Paper • 2601.04890 • Published • 44
- tiiuae/Falcon-H1-0.5B-Base • Text Generation • 0.5B • Updated • 49.8k • 16
Falcon-E: A series of powerful, universal, and fine-tunable small language models
- Falcon E Playground (Space) • 💻 8 • Chat with an advanced language model to get answers and engage in conversation
- tiiuae/Falcon-E-3B-Base • Text Generation • 0.9B • Updated • 740 • 14
- tiiuae/Falcon-E-3B-Instruct • Text Generation • 0.9B • Updated • 1.22k • 39
- tiiuae/Falcon-E-3B-Instruct-GGUF • 3B • Updated • 196 • 15
The Falcon3 family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
- The RefinedWeb Dataset for Falcon LLM: Outperforming Curated Corpora with Web Data, and Web Data Only • Paper • 2306.01116 • Published • 44
- tiiuae/falcon-refinedweb • Viewer • Updated • 968M • 39.1k • 905
- tiiuae/falcon-rw-1b • Text Generation • Updated • 19.7k • 119
- tiiuae/falcon-rw-7b • Text Generation • 8B • Updated • 3.39k • 18
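Since falcon-refinedweb is very large (the listing shows on the order of 968M rows), streaming is the practical way to inspect it without downloading the full corpus. A minimal sketch, assuming the Hugging Face `datasets` library; the dataset id comes from the listing above, while the helper names are our own:

```python
from itertools import islice


def take(iterable, n: int) -> list:
    """Collect the first n items of any (possibly unbounded) iterable."""
    return list(islice(iterable, n))


def sample_refinedweb(n: int = 3) -> list:
    """Stream the first n records of falcon-refinedweb from the Hub
    without materializing the dataset locally."""
    from datasets import load_dataset  # deferred import: needs network

    ds = load_dataset("tiiuae/falcon-refinedweb",
                      split="train", streaming=True)
    return take(ds, n)
```

With `streaming=True`, `load_dataset` returns an iterable that fetches shards lazily, so `sample_refinedweb(3)` touches only the first few records.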
Falcon-Perception and Falcon-OCR models: early-fusion, natively multimodal, dense autoregressive Transformer models.
7B models built on top of Falcon3-7B
Arabic benchmark datasets (https://arxiv.org/pdf/2507.15850)
This collection features the FalconMamba 7B base model, the instruction-tuned version, their 4-bit and GGUF variants, and the demo.
- Falcon Mamba Playground (Space) • 🐍 65 • Generate chat responses using the FalconMamba-7B model
- Falcon Mamba: The First Competitive Attention-free 7B Language Model • Paper • 2410.05355 • Published • 36
- tiiuae/falcon-mamba-7b • Text Generation • Updated • 52.3k • 242
- tiiuae/falcon-mamba-7b-instruct • Text Generation • 7B • Updated • 13.2k • 73
Leveraging Contextual Web Data for Fine-tuning Vision Language Models (https://arxiv.org/abs/2502.10250)