VQRAE: Representation Quantization Autoencoders for Multimodal Understanding, Generation and Reconstruction
Abstract
VQRAE, a Vector Quantization Representation AutoEncoder, unifies multimodal understanding, generation, and reconstruction in a single tokenizer that produces continuous semantic features and discrete tokens.
Unifying representations for multimodal understanding, generation, and reconstruction in a single tokenizer remains a key challenge in building unified models. Previous research predominantly addresses this with a dual-encoder paradigm, e.g., using separate encoders for understanding and generation, or balancing semantic representations against low-level features with a contrastive loss. In this paper, we propose VQRAE, a Vector Quantization version of Representation AutoEncoders, which offers the first exploration of a unified representation that produces continuous semantic features for image understanding and discrete tokens for visual generation within a single tokenizer. Specifically, we build upon pretrained vision foundation models with a symmetric ViT decoder and adopt a two-stage training strategy: first, we freeze the encoder and learn a high-dimensional semantic VQ codebook with a pixel reconstruction objective; then we jointly optimize the encoder under self-distillation constraints. This design incurs negligible loss of semantic information, preserving multimodal understanding, while yielding discrete tokens that are compatible with generation and support fine-grained reconstruction. In addition, we identify an intriguing property of quantizing semantic encoders: they rely on a high-dimensional codebook, in contrast to the common practice of low-dimensional codebooks for image reconstruction; the semantic VQ codebook achieves a 100% utilization ratio at a dimension of 1536. VQRAE delivers competitive performance on several visual understanding, generation, and reconstruction benchmarks, and shows promising scaling behavior in the autoregressive paradigm thanks to its discrete tokens.
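The quantization step described in the abstract follows the standard vector-quantization recipe applied to a semantic encoder's features. The sketch below is a minimal, hypothetical illustration of that step: nearest-neighbor lookup in a high-dimensional codebook plus a straight-through estimator so gradients can reach the encoder in the second training stage. The class name, codebook size (16384), and loss weighting are illustrative assumptions; only the 1536-dimensional codebook figure comes from the abstract, and this is not the authors' released code.

```python
# Minimal sketch (assumed, not the authors' implementation) of quantizing
# high-dimensional semantic features with a learned VQ codebook.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticVQ(nn.Module):
    def __init__(self, num_codes: int = 16384, dim: int = 1536):
        super().__init__()
        # High-dimensional codebook matching the encoder feature width, in contrast
        # to the low-dimensional codebooks common in pixel-level VQ tokenizers.
        self.codebook = nn.Embedding(num_codes, dim)
        nn.init.normal_(self.codebook.weight, std=0.02)

    def forward(self, z: torch.Tensor):
        # z: (batch, num_patches, dim) continuous semantic features from the ViT encoder.
        b, n, d = z.shape
        dist = torch.cdist(z.reshape(-1, d), self.codebook.weight)  # (b*n, num_codes)
        idx = dist.argmin(dim=-1).view(b, n)                        # discrete token ids
        z_q = self.codebook(idx)                                    # quantized features

        # Stage 1: encoder frozen, codebook is pulled toward the encoder features.
        codebook_loss = F.mse_loss(z_q, z.detach())
        # Stage 2: once the encoder is unfrozen, commitment keeps it near its codes.
        commit_loss = F.mse_loss(z, z_q.detach())

        # Straight-through estimator: decoder gradients flow back to the encoder.
        z_q = z + (z_q - z).detach()
        return z_q, idx, codebook_loss + 0.25 * commit_loss
```

In this reading, the quantized features feed the symmetric ViT decoder for pixel reconstruction, while the discrete token ids are what an autoregressive generator would model.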
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Visual Generation Tuning (2025)
- DINO-Tok: Adapting DINO for Visual Tokenizers (2025)
- GloTok: Global Perspective Tokenizer for Image Reconstruction and Generation (2025)
- Adapting Self-Supervised Representations as a Latent Space for Efficient Generation (2025)
- DeRA: Decoupled Representation Alignment for Video Tokenization (2025)
- VAEVQ: Enhancing Discrete Visual Tokenization through Variational Modeling (2025)
- TUNA: Taming Unified Visual Representations for Native Unified Multimodal Models (2025)
