arxiv:2511.18659

CLaRa: Bridging Retrieval and Generation with Continuous Latent Reasoning

Published on Nov 24 · Submitted by jie he on Nov 26
Abstract

AI-generated summary: CLaRa enhances retrieval-augmented generation by introducing unified embedding-based compression and joint optimization, achieving state-of-the-art performance on QA benchmarks.

Retrieval-augmented generation (RAG) enhances large language models (LLMs) with external knowledge but still suffers from long contexts and disjoint retrieval-generation optimization. In this work, we propose CLaRa (Continuous Latent Reasoning), a unified framework that performs embedding-based compression and joint optimization in a shared continuous space. To obtain semantically rich and retrievable compressed vectors, we introduce SCP (Salient Compressor Pretraining), a key-preserving data synthesis framework using QA and paraphrase supervision. CLaRa then trains the reranker and generator end-to-end via a single language-modeling loss, with gradients flowing through both modules using a differentiable top-k estimator. Theoretically, this unified optimization aligns retrieval relevance with answer quality. Experiments across multiple QA benchmarks show that CLaRa achieves state-of-the-art compression and reranking performance, often surpassing text-based fine-tuned baselines.
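The differentiable top-k estimator is the piece that lets the generator's language-modeling loss train the reranker. As a minimal sketch of how a straight-through top-k could look in PyTorch (the function name and the softmax relaxation are illustrative assumptions, not the paper's exact formulation):

```python
import torch

def straight_through_topk(scores: torch.Tensor, k: int) -> torch.Tensor:
    """Hard top-k selection in the forward pass, soft gradients in the backward pass.

    Forward: a 0/1 mask over the k highest-scoring candidates.
    Backward: gradients flow through a softmax relaxation, so a downstream
    language-modeling loss can still update the scorer.
    """
    soft = torch.softmax(scores, dim=-1)                    # relaxed selection weights
    hard = torch.zeros_like(scores)
    hard.scatter_(-1, scores.topk(k, dim=-1).indices, 1.0)  # hard 0/1 mask
    # Straight-through trick: the forward value is `hard`, the gradient is `soft`'s.
    return hard + soft - soft.detach()
```

In the forward pass the `soft` terms cancel, so the generator sees a hard selection of exactly k documents; in the backward pass only the softmax term carries gradient, giving the reranker a training signal.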

Community

Paper submitter

We introduce CLaRa (Continuous Latent Reasoning), the first end-to-end framework that unifies retrieval and generation within a continuous representation space. It employs Salient Compressor Pretraining (SCP) to learn semantically dense compressed vectors from QA and paraphrase signals, and makes top-k retrieval differentiable via Straight-Through estimation, so the retriever can be trained with weak supervision from the generation loss alone. Because retrieval and generation operate on the same shared latent representations, the two stages remain semantically consistent by construction. Empirically, CLaRa outperforms strong baselines across multiple QA benchmarks, even exceeding traditional RAG systems built on full text plus BGE embeddings, and achieves up to 16× context-length reduction, greatly improving efficiency. Overall, CLaRa suggests a new direction for RAG: effective reasoning may depend not on long contexts, but on a unified latent reasoning space.
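To make the end-to-end training concrete, here is a hypothetical sketch of one training step in the spirit of CLaRa, reusing the straight_through_topk function shown under the abstract and assuming a Hugging Face-style causal LM that accepts inputs_embeds; all names (clara_style_step, doc_latents, and so on) and the soft-prompt wiring are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn.functional as F

def clara_style_step(query_emb, doc_embs, doc_latents, generator, answer_ids, k=4):
    """One hypothetical training step: select documents differentiably,
    feed their compressed latents to the generator as soft-prompt vectors,
    and backpropagate a single LM loss into both reranker and generator.
    `straight_through_topk` is the sketch shown under the abstract above.
    """
    # Reranker: dot-product relevance between the query embedding (d,)
    # and the compressed document vectors (num_docs, d).
    scores = query_emb @ doc_embs.T                             # (num_docs,)
    mask = straight_through_topk(scores, k)                     # differentiable 0/1 mask
    # Keep only the selected latents; gradients still reach `scores` via `mask`.
    selected = (mask.unsqueeze(-1) * doc_latents)[mask.bool()]  # (k, d_model)
    prefix = selected.unsqueeze(0)                              # (1, k, d_model)
    # Assumes a Hugging Face-style causal LM that accepts `inputs_embeds`.
    answer_emb = generator.get_input_embeddings()(answer_ids)   # (1, T, d_model)
    out = generator(inputs_embeds=torch.cat([prefix, answer_emb], dim=1))
    # Positions k-1 .. k+T-2 predict the T answer tokens: one LM loss overall.
    logits = out.logits[:, k - 1 : -1, :]
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), answer_ids.view(-1))
```

Because the selection mask sits on the path from the reranker scores to the language-modeling loss, a single backward pass updates reranker and generator together, which is the consistency the comment above attributes to the shared latent space.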


Models citing this paper: 3
Datasets citing this paper: 0
Spaces citing this paper: 0
Collections including this paper: 0