SpeContext: Enabling Efficient Long-context Reasoning with Speculative Context Sparsity in LLMs
Abstract
SpeContext leverages a distilled language model as a lightweight retriever for efficient long-context reasoning, reducing retrieval parameters and improving throughput with minimal accuracy loss in both cloud and edge environments.
In this paper, we point out that the objective of retrieval algorithms is to align with the LLM, which mirrors the objective of knowledge distillation in LLMs. We analyze the similarity in information focus between a distilled language model (DLM) and the original LLM from the perspective of information theory, and thus propose a novel paradigm that leverages a DLM as the retrieval algorithm. Based on this insight, we present SpeContext, an algorithm and system co-design for long-context reasoning. (1) At the algorithm level, SpeContext proposes a lightweight retrieval head based on the head-level attention weights of the DLM, achieving >90% parameter reduction by pruning redundancy. (2) At the system level, SpeContext designs an asynchronous prefetch dataflow via an elastic loading strategy, effectively overlapping KV cache retrieval with LLM computation. (3) At the compilation level, SpeContext constructs a theoretical memory model and implements an adaptive memory management system that accelerates inference by maximizing GPU memory utilization. We deploy and evaluate SpeContext in two resource-constrained environments, cloud and edge. Extensive experiments show that, compared with the Hugging Face framework, SpeContext achieves up to 24.89x throughput improvement in cloud and 10.06x speedup in edge with negligible accuracy loss, pushing the Pareto frontier of accuracy and throughput.
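As a rough illustration of the algorithm-level idea, the sketch below uses the head-level attention weights of a small distilled model over the long context to score tokens, then keeps only the top-scoring KV cache entries for the large model. This is a minimal sketch with hypothetical names and shapes, not the authors' released retrieval head; the paper additionally prunes redundant heads and parameters, which is omitted here.

```python
import torch

def select_important_context(dlm_attn, keep_ratio=0.1):
    """Score context tokens with the distilled model's attention and keep the top fraction.

    dlm_attn: head-level attention weights from the distilled language model,
              assumed shape (num_heads, query_len, context_len).
    Returns sorted indices of context positions to retain in the large model's KV cache.
    """
    # Aggregate head-level attention into one importance score per context token.
    token_scores = dlm_attn.sum(dim=(0, 1))            # (context_len,)
    k = max(1, int(keep_ratio * token_scores.numel()))
    top_idx = torch.topk(token_scores, k).indices
    return torch.sort(top_idx).values                  # keep original order for the sparse KV cache

# Illustrative usage (shapes follow the Hugging Face Transformers convention):
# attn = dlm(input_ids, output_attentions=True).attentions[-1][0]  # (heads, q_len, ctx_len)
# keep = select_important_context(attn, keep_ratio=0.1)
# sparse_kv = (k_cache[:, keep, :], v_cache[:, keep, :])
```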
Community
SpeContext has been accepted to ASPLOS'26; it is the third paper in the Spec series (SpecEE [ISCA'25], SpecDiff [AAAI'25 Oral]). We apply speculative methods to context sparsity and demonstrate, from an information-theory perspective, that model distillation indirectly enables small models to learn the original LLM's focus on context importance. Therefore, we use the distilled small model to predict the important contexts of the original LLM in advance. In resource-constrained cloud and edge scenarios, we achieve up to 22x throughput improvement and 10x speedup, respectively.
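Because the distilled model predicts which KV entries the large model will need ahead of time, their transfer from host memory can be overlapped with ongoing GPU computation, which is the system-level point in the abstract. The thread-based sketch below is only an assumption-laden illustration of that overlap (hypothetical helper names, no elastic loading or adaptive memory management), not the paper's actual dataflow.

```python
import threading
import torch

def prefetch_kv(kv_cpu, keep_idx, device="cuda"):
    """Copy only the predicted-important KV entries to the GPU (non-blocking)."""
    k, v = kv_cpu
    return (k[:, keep_idx].to(device, non_blocking=True),
            v[:, keep_idx].to(device, non_blocking=True))

def decode_step_with_prefetch(model, hidden, kv_cpu, keep_idx):
    """Overlap KV retrieval for the next step with the current step's computation."""
    result = {}
    t = threading.Thread(
        target=lambda: result.update(kv=prefetch_kv(kv_cpu, keep_idx)))
    t.start()            # host-to-device copy runs in the background
    out = model(hidden)  # current-step computation proceeds concurrently
    t.join()             # predicted KV entries are now resident on the GPU
    return out, result["kv"]
```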
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- LouisKV: Efficient KV Cache Retrieval for Long Input-Output Sequences (2025)
- DELTA: Dynamic Layer-Aware Token Attention for Efficient Long-Context Reasoning (2025)
- SP-MoE: Speculative Decoding and Prefetching for Accelerating MoE-based Model Inference (2025)
- Efficient Low Rank Attention for Long-Context Inference in Large Language Models (2025)
- Lethe: Layer- and Time-Adaptive KV Cache Pruning for Reasoning-Intensive LLM Serving (2025)
- KVSwap: Disk-aware KV Cache Offloading for Long-Context On-device Inference (2025)
- FlexiCache: Leveraging Temporal Stability of Attention Heads for Efficient KV Cache Management (2025)