InfLLM-V2: Dense-Sparse Switchable Attention for Seamless Short-to-Long Adaptation
Abstract
A dense-sparse switchable attention framework, InfLLM-V2, enhances long-sequence processing in large language models by efficiently switching between dense and sparse attention mechanisms.
Long-sequence processing is a critical capability for modern large language models. However, the self-attention mechanism in the standard Transformer architecture faces severe computational and memory bottlenecks when processing long sequences. While trainable sparse attention methods offer a promising solution, existing approaches such as NSA introduce excessive extra parameters and disrupt the conventional pretrain-on-short, finetune-on-long workflow, resulting in slow convergence and difficulty in acceleration. To overcome these limitations, we introduce a dense-sparse switchable attention framework, termed InfLLM-V2. InfLLM-V2 is a trainable sparse attention mechanism that seamlessly adapts models from short to long sequences. Specifically, InfLLM-V2 reuses dense attention parameters through a parameter-free architecture modification, maintaining consistency between short- and long-sequence processing. Additionally, InfLLM-V2 ensures computational efficiency across all sequence lengths by using dense attention for short inputs and smoothly transitioning to sparse attention for long sequences. To achieve practical acceleration, we further introduce an efficient implementation of InfLLM-V2 that significantly reduces the computational overhead. Our experiments on long-context understanding and chain-of-thought reasoning demonstrate that InfLLM-V2 is 4× faster than dense attention while retaining 98.1% and 99.7% of the performance, respectively. Based on the InfLLM-V2 framework, we have trained and open-sourced MiniCPM4.1 (https://huggingface.co/openbmb/MiniCPM4.1-8B), a hybrid reasoning model, providing a reproducible implementation for the research community.
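For intuition, here is a minimal PyTorch sketch of the length-based dense/sparse switch described in the abstract. It is not the paper's implementation: the module name `SwitchableAttention`, the `sparse_threshold`, `block_size`, and `top_k_blocks` values, and the mean-pooled block scoring are illustrative assumptions, and causal masking and multi-head handling are omitted for brevity. The point it illustrates is that both branches share the same Q/K/V projections, so switching from dense to sparse attention adds no parameters.

```python
import torch
import torch.nn.functional as F
from torch import nn


class SwitchableAttention(nn.Module):
    """Sketch of dense-sparse switchable attention (single head, no causal mask)."""

    def __init__(self, dim: int, sparse_threshold: int = 8192,
                 block_size: int = 64, top_k_blocks: int = 16):
        super().__init__()
        # The same projections serve both modes, so no extra parameters are
        # introduced when adapting a short-context model to long sequences.
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)
        self.sparse_threshold = sparse_threshold
        self.block_size = block_size
        self.top_k_blocks = top_k_blocks

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: [batch, seq, dim]
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        if n <= self.sparse_threshold:
            # Short inputs: ordinary dense attention.
            o = F.scaled_dot_product_attention(
                q.unsqueeze(1), k.unsqueeze(1), v.unsqueeze(1)).squeeze(1)
        else:
            # Long inputs: score key/value blocks against each query, keep only
            # the top-k blocks per query, and attend densely within them.
            nb = n // self.block_size
            k_blocks = k[:, : nb * self.block_size].reshape(b, nb, self.block_size, d)
            block_keys = k_blocks.mean(dim=2)                      # [b, nb, d]
            scores = torch.einsum("bnd,bmd->bnm", q, block_keys)   # [b, n, nb]
            top = scores.topk(min(self.top_k_blocks, nb), dim=-1).indices
            block_mask = torch.zeros(b, n, nb, device=x.device)
            block_mask.scatter_(-1, top, 1.0)                      # 1 = selected block
            # Map each key position to its block; remainder tokens fold into the last block.
            block_id = (torch.arange(n, device=x.device) // self.block_size).clamp(max=nb - 1)
            token_mask = block_mask.bool().gather(-1, block_id.view(1, 1, n).expand(b, n, n))
            o = F.scaled_dot_product_attention(
                q.unsqueeze(1), k.unsqueeze(1), v.unsqueeze(1),
                attn_mask=token_mask.unsqueeze(1)).squeeze(1)
        return self.out(o)
```

The sketch materializes a full token-level mask for readability; the efficient implementation reported in the paper presumably fuses block selection with the masked attention computation to obtain the claimed kernel-level speedups.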
Community
✨ InfLLM‑V2: Seamless Long‑Context Adaptation
1️⃣ Ultra‑fast adaptation: only 5B long‑text tokens to train sparse attention (vs. ~1T in DSA from DeepSeek-V3.2).
2️⃣ End‑to‑end speedups: 2.1× prefill, 2.3× decode; up to 4–9× kernel speedup at 128K.
3️⃣ Top results on long‑context benchmarks and strong deep‑thinking performance: keeps 98.1%/99.7% of dense accuracy while being much faster.
Try the first open sparse‑native model:
MiniCPM4.1‑8B: https://huggingface.co/openbmb/MiniCPM4.1-8B
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- ProxyAttn: Guided Sparse Attention via Representative Heads (2025)
- SlimInfer: Accelerating Long-Context LLM Inference via Dynamic Token Pruning (2025)
- Flash Sparse Attention: An Alternative Efficient Implementation of Native Sparse Attention Kernel (2025)
- PagedEviction: Structured Block-wise KV Cache Pruning for Efficient Large Language Model Inference (2025)
- CCF: A Context Compression Framework for Efficient Long-Sequence Language Modeling (2025)
- UniGist: Towards General and Hardware-aligned Sequence-level Long Context Compression (2025)
- LeanK: Learnable K Cache Channel Pruning for Efficient Decoding (2025)