What Layers When: Learning to Skip Compute in LLMs with Residual Gates
Abstract
We introduce GateSkip, a simple residual-stream gating mechanism that enables token-wise layer skipping in decoder-only LMs. Each Attention/MLP branch is equipped with a sigmoid-linear gate that condenses the branch's output before it re-enters the residual stream. During inference we rank tokens by the gate values and skip low-importance ones using a per-layer budget. While early-exit or router-based Mixture-of-Depths models are known to be unstable and need extensive retraining, our smooth, differentiable gates fine-tune stably on top of pretrained models. On long-form reasoning, we save up to 15% compute while retaining over 90% of baseline accuracy. On instruction-tuned models we see accuracy gains at full compute and match baseline quality near 50% savings. The learned gates give insight into transformer information flow (e.g., BOS tokens act as anchors), and the method combines easily with quantization, pruning, and self-speculative decoding.
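A minimal sketch of how such a gated branch could look, assuming the gate is a per-token linear projection of the branch output followed by a sigmoid. This is an illustration based on the abstract, not the authors' implementation; `GatedBranch`, `gate_proj`, and `d_model` are assumed names.

```python
# Sketch (not the authors' code): a sigmoid-linear gate that scales an
# Attention or MLP branch output before it re-enters the residual stream.
import torch
import torch.nn as nn


class GatedBranch(nn.Module):
    """Wraps a branch (Attention or MLP) with a per-token sigmoid-linear gate."""

    def __init__(self, branch: nn.Module, d_model: int):
        super().__init__()
        self.branch = branch
        # Linear map to one scalar per token, squashed to (0, 1) by a sigmoid.
        self.gate_proj = nn.Linear(d_model, 1)

    def forward(self, hidden: torch.Tensor):
        out = self.branch(hidden)                  # (batch, seq, d_model)
        gate = torch.sigmoid(self.gate_proj(out))  # (batch, seq, 1)
        # The gate condenses the branch output; its value also serves as a
        # per-token importance score that inference-time skipping can rank.
        return hidden + gate * out, gate.squeeze(-1)


# Toy usage with a linear layer standing in for an MLP branch.
block = GatedBranch(nn.Linear(16, 16), d_model=16)
new_hidden, scores = block(torch.randn(2, 5, 16))
```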
Community
The paper introduces a new way to intelligently skip layers in Transformers: learn to compress the Attention and MLP outputs that go into the residual stream, then use that compression gate to assess how "important" a layer is for a given incoming hidden state, and skip the unimportant ones.
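The abstract describes the skip rule as ranking tokens by their gate values and dropping the low-importance ones under a per-layer budget. A hedged sketch of that selection step, with illustrative names (`gate_scores`, `keep_ratio`) not taken from the paper:

```python
# Sketch of budgeted token selection for one layer: keep the top fraction of
# tokens by gate score; the rest skip the layer via the residual stream.
import torch


def select_tokens_to_run(gate_scores: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """gate_scores: per-token gate values for one layer, shape (seq_len,).
    Returns a boolean mask; True = token runs the layer, False = token skips it."""
    seq_len = gate_scores.shape[0]
    k = max(1, int(seq_len * keep_ratio))          # per-layer compute budget
    keep_idx = torch.topk(gate_scores, k).indices  # highest-importance tokens
    mask = torch.zeros(seq_len, dtype=torch.bool)
    mask[keep_idx] = True
    return mask
```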