SPINAL -- Scaling-law and Preference Integration in Neural Alignment Layers
Abstract
SPINAL diagnoses how DPO alignment reshapes representations layer by layer, revealing geometric localization of preference gradients in the final decoder blocks and enabling practical auditing of alignment progress.
Direct Preference Optimization (DPO) is a principled, scalable alternative to RLHF for aligning large language models from pairwise preferences, but its internal geometric footprint remains undercharacterized, limiting audits, checkpoint comparisons, and failure prediction. We introduce SPINAL (Scaling-law and Preference Integration in Neural Alignment Layers), a diagnostic that measures how alignment reshapes representations across depth by tracing localized structural change layer by layer. Across model families, DPO produces a layerwise calibration effect concentrated in the final decoder blocks (often layers 21-30), where preference gradients most directly affect the next-token distribution. SPINAL encodes each checkpoint as a depth trace over (layer index, contraction score, transport score). The contraction score summarizes how quickly the tail of a layer's spectrum decays (how fast small modes vanish); higher values indicate stronger contraction into fewer effective directions. The transport score summarizes how much the token distribution shifts between adjacent layers using a bounded overlap measure; lower values indicate shorter, smoother steps through representation space. Aligned checkpoints show a late-layer ramp-up in contraction and a smooth reduction in transport, consistent with tightened and stabilized policy mass, while unaligned models trace higher-curvature, more entropic, and geometrically incoherent depth paths. Overall, alignment is geometrically localized: the final layers encode the dominant preference-induced corrections. SPINAL turns this localization into a practical audit signal, quantifying where alignment concentrates, how strongly it manifests, and when it begins to destabilize during training.
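The abstract describes the depth trace only at a high level, so the following is a minimal illustrative sketch, not the paper's actual implementation: it assumes a head-mass proxy for the contraction score (how fast tail singular values vanish), a logit-lens readout plus a Bhattacharyya overlap for the bounded transport score, and synthetic stand-ins for per-layer hidden states and the unembedding matrix. Function names, the `tail_frac` parameter, and the overlap choice are all hypothetical.

```python
# Sketch of a SPINAL-style depth trace over (layer, contraction, transport).
# All formulas here are assumed proxies, not the paper's definitions.
import numpy as np

def contraction_score(h, tail_frac=0.5):
    """Fraction of spectral mass carried by the leading modes of one layer's
    hidden states h (tokens x dim). Higher when the tail of the spectrum
    decays quickly, i.e. fewer effective directions remain."""
    s = np.linalg.svd(h - h.mean(axis=0), compute_uv=False)
    k = max(1, int(len(s) * (1.0 - tail_frac)))   # number of head modes kept
    return float(s[:k].sum() / s.sum())

def transport_score(h_prev, h_curr, W_unembed):
    """1 - Bhattacharyya overlap between next-token distributions read out
    from adjacent layers via a logit-lens projection. Bounded in [0, 1];
    lower values mean a smaller step in distribution space."""
    def token_dist(h):
        logits = h @ W_unembed                     # logit-lens readout
        logits -= logits.max(axis=-1, keepdims=True)
        p = np.exp(logits)
        return (p / p.sum(axis=-1, keepdims=True)).mean(axis=0)
    p, q = token_dist(h_prev), token_dist(h_curr)
    return float(1.0 - np.sum(np.sqrt(p * q)))

# Synthetic placeholders; in practice these come from a model's checkpoints.
rng = np.random.default_rng(0)
n_layers, n_tokens, dim, vocab = 30, 128, 64, 1000
hidden = [rng.normal(size=(n_tokens, dim)) for _ in range(n_layers)]
W = rng.normal(size=(dim, vocab))

trace = [(i,
          contraction_score(hidden[i]),
          transport_score(hidden[i - 1], hidden[i], W) if i > 0 else 0.0)
         for i in range(n_layers)]
for layer, c, t in trace[-3:]:
    print(f"layer {layer:2d}  contraction={c:.3f}  transport={t:.3f}")
```

Under the paper's claims, an aligned checkpoint would show the contraction column ramping up and the transport column smoothly decreasing over the last decoder blocks of such a trace.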
Community
The following similar papers were recommended by the Semantic Scholar API:
- D-STEER - Preference Alignment Techniques Learn to Behave, not to Believe -- Beneath the Surface, DPO as Steering Vector Perturbation in Activation Space (2025)
- AlignMerge - Alignment-Preserving Large Language Model Merging via Fisher-Guided Geometric Constraints (2025)
- SR-GRPO: Stable Rank as an Intrinsic Geometric Reward for Large Language Model Alignment (2025)
- How Language Directions Align with Token Geometry in Multilingual LLMs (2025)
- Layer-wise Positional Bias in Short-Context Language Modeling (2026)
- Reflective Preference Optimization (RPO): Enhancing On-Policy Alignment via Hint-Guided Reflection (2025)
- Direct Diffusion Score Preference Optimization via Stepwise Contrastive Policy-Pair Supervision (2025)