CLaRa: Bridging Retrieval and Generation with Continuous Latent Reasoning
Abstract
CLaRa enhances retrieval-augmented generation by introducing unified embedding-based compression and joint optimization, achieving state-of-the-art performance on QA benchmarks.
Retrieval-augmented generation (RAG) enhances large language models (LLMs) with external knowledge but still suffers from long contexts and disjoint retrieval-generation optimization. In this work, we propose CLaRa (Continuous Latent Reasoning), a unified framework that performs embedding-based compression and joint optimization in a shared continuous space. To obtain semantically rich and retrievable compressed vectors, we introduce SCP (Salient Compressor Pretraining), a key-preserving data synthesis framework that uses QA and paraphrase supervision. CLaRa then trains the reranker and generator end-to-end via a single language modeling loss, with gradients flowing through both modules using a differentiable top-k estimator. Theoretically, this unified optimization aligns retrieval relevance with answer quality. Experiments across multiple QA benchmarks show that CLaRa achieves state-of-the-art compression and reranking performance, often surpassing text-based fine-tuned baselines.
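The differentiable top-k estimator is the piece that lets a single language modeling loss train the reranker. Below is a minimal, hypothetical PyTorch sketch of one standard way to implement such an estimator (a straight-through construction, consistent with the Straight-Through estimation mentioned in the community note below); it is not the authors' released code, and the per-passage loss is only a stand-in for the generator's actual objective.

```python
import torch
import torch.nn.functional as F

def straight_through_topk(scores: torch.Tensor, k: int, tau: float = 1.0) -> torch.Tensor:
    """scores: (num_docs,) reranker scores -> (num_docs,) selection mask."""
    soft = F.softmax(scores / tau, dim=-1)            # differentiable surrogate distribution
    hard = torch.zeros_like(scores)
    hard[torch.topk(scores, k).indices] = 1.0         # hard top-k selection
    # Forward pass returns the hard mask; backward pass follows the soft gradient.
    return hard + soft - soft.detach()

scores = torch.randn(8, requires_grad=True)           # reranker scores for 8 candidate passages
mask = straight_through_topk(scores, k=2)
per_passage_loss = torch.randn(8)                     # stand-in for the generator's LM loss per passage
loss = (mask * per_passage_loss).sum()
loss.backward()
print(scores.grad)                                    # non-zero: the LM loss reaches the reranker
```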
Community
We introduce CLaRa (Continuous Latent Reasoning), the first end-to-end framework that unifies retrieval and generation within a continuous representation space. It employs Salient Compressor Pretraining (SCP) to learn semantically dense compressed vectors from QA and paraphrase signals, and a Straight-Through top-k estimator to make the retrieval step fully differentiable and trainable with weak supervision. By operating in a shared continuous latent space, CLaRa ensures that retrieval and generation rely on the same semantic representations, keeping the two stages naturally consistent. Empirically, it outperforms strong baselines across multiple QA benchmarks, even exceeding traditional RAG systems built on full text plus BGE embeddings, and achieves up to 16× context length reduction, greatly improving efficiency. Overall, CLaRa suggests a new direction for RAG: effective reasoning may rely not on long contexts, but on a unified latent reasoning space.
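To make the shared-space idea concrete, here is a small, self-contained PyTorch sketch (hypothetical module names and dimensions, not the released implementation) of how compressed document vectors can serve both roles: they are scored against a query embedding for reranking, and the top-k winners are prepended to the generator's input embeddings as soft prompts.

```python
import torch
import torch.nn as nn

hidden = 64                                   # latent dimension (assumed for illustration)
vecs_per_doc = 4                              # e.g. a 64-token passage compressed 16x

compressor = nn.Linear(hidden, hidden)        # stand-in for the compressor LLM
query_encoder = nn.Linear(hidden, hidden)     # stand-in for the query encoder

docs = torch.randn(10, vecs_per_doc, hidden)  # 10 candidate passages, already embedded
doc_latents = compressor(docs)                # compressed memory vectors per passage
query = query_encoder(torch.randn(hidden))    # query embedding in the same space

# Reranking: score each passage by its pooled latent against the query.
scores = doc_latents.mean(dim=1) @ query      # (10,)
top2 = scores.topk(2).indices

# Generation: feed the selected latent vectors to the generator as soft prompts.
soft_prompt = doc_latents[top2].reshape(-1, hidden)    # (2 * vecs_per_doc, hidden)
question_embeds = torch.randn(12, hidden)              # embedded question tokens
generator_inputs = torch.cat([soft_prompt, question_embeds], dim=0)
print(generator_inputs.shape)                          # torch.Size([20, 64])
```

In the full system the hard top-k selection would be made differentiable (as in the straight-through sketch above), so the same language modeling loss that trains the generator also updates the reranker.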
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API.
- ARK: Answer-Centric Retriever Tuning via KG-augmented Curriculum Learning (2025)
- RECON: Reasoning with Condensation for Efficient Retrieval-Augmented Generation (2025)
- RaCoT: Plug-and-Play Contrastive Example Generation Mechanism for Enhanced LLM Reasoning Reliability (2025)
- Equipping Retrieval-Augmented Large Language Models with Document Structure Awareness (2025)
- MHA-RAG: Improving Efficiency, Accuracy, and Consistency by Encoding Exemplars as Soft Prompts (2025)
- Let LLMs Speak Embedding Languages: Generative Text Embeddings via Iterative Contrastive Refinement (2025)
- Domain-Specific Data Generation Framework for RAG Adaptation (2025)