arxiv:2510.13876

What Layers When: Learning to Skip Compute in LLMs with Residual Gates

Published on Oct 13
Authors:

Abstract

GateSkip, a residual-stream gating mechanism, enables token-wise layer skipping in decoder-only LMs, improving efficiency and accuracy with minimal retraining.

AI-generated summary

We introduce GateSkip, a simple residual-stream gating mechanism that enables token-wise layer skipping in decoder-only LMs. Each Attention/MLP branch is equipped with a sigmoid-linear gate that condenses the branch's output before it re-enters the residual stream. During inference we rank tokens by the gate values and skip low-importance ones using a per-layer budget. While early-exit or router-based Mixture-of-Depths models are known to be unstable and need extensive retraining, our smooth, differentiable gates fine-tune stably on top of pretrained models. On long-form reasoning, we save up to 15% compute while retaining over 90% of baseline accuracy. On instruction-tuned models we see accuracy gains at full compute and match baseline quality near 50% savings. The learned gates give insight into transformer information flow (e.g., BOS tokens act as anchors), and the method combines easily with quantization, pruning, and self-speculative decoding.
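
For a concrete picture of the mechanism, here is a minimal PyTorch-style sketch of a gated residual branch, assuming a standard decoder block layout. The class and parameter names (GatedBranch, gate_proj) are illustrative, not the paper's code; the key idea is a per-token sigmoid of a linear projection that scales the branch output before the residual addition.

```python
import torch
import torch.nn as nn

class GatedBranch(nn.Module):
    """Wraps an Attention or MLP branch with a sigmoid-linear gate that
    condenses the branch output before it re-enters the residual stream.
    Names and parameterization are illustrative, not the paper's exact code."""

    def __init__(self, branch: nn.Module, d_model: int):
        super().__init__()
        self.branch = branch
        # Per-token scalar gate: sigmoid(linear(h)) in [0, 1].
        self.gate_proj = nn.Linear(d_model, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, d_model)
        branch_out = self.branch(hidden)
        gate = torch.sigmoid(self.gate_proj(hidden))  # (batch, seq_len, 1)
        # Gated residual update; the gate values double as importance
        # scores that inference-time skipping can rank tokens by.
        return hidden + gate * branch_out
```

Because the gate is smooth and differentiable, it can be fine-tuned on top of a pretrained model without the instability associated with hard routers.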

Community

Paper author

The paper introduces a new way to intelligently skip layers in Transformers: learn to compress the output of the Attention and MLP branches as it enters the residual stream, then use that compression gate to assess how "important" a layer is for a given incoming hidden state, and skip the unimportant ones.
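
To illustrate the inference-time side of this, the hypothetical helper below ranks tokens by their gate scores and routes only a budgeted fraction through the branch, letting the rest pass through the residual stream unchanged. The function name, the budget value, and the handling of attention context are simplifications for illustration, not the paper's implementation.

```python
import torch

def skip_by_budget(hidden, branch, gate_scores, budget: float = 0.85):
    """hidden: (B, T, D); gate_scores: (B, T) per-token gate values from the
    sigmoid-linear gate; budget: fraction of tokens that get full compute.
    Positional/masking details of real attention are glossed over here."""
    B, T, D = hidden.shape
    k = max(1, int(budget * T))
    # Rank tokens by gate value and keep only the top-k per sequence.
    topk = gate_scores.topk(k, dim=-1).indices                 # (B, k)
    rows = torch.arange(B, device=hidden.device).unsqueeze(1)  # (B, 1)
    selected = hidden[rows, topk]                              # (B, k, D)
    gate = gate_scores[rows, topk].unsqueeze(-1)               # (B, k, 1)
    # Branch compute runs only on the kept tokens; skipped tokens pass
    # through on the residual stream unchanged.
    out = hidden.clone()
    out[rows, topk] = selected + gate * branch(selected)
    return out
```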

