ColModernVBERT
Usage
This version is not intended for direct use: it is solely the base checkpoint, provided to make LoRA initialization deterministic.
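As a rough illustration of that workflow, the sketch below loads this checkpoint as a frozen base and attaches LoRA adapters with PEFT. The repository id, the target module names, and the use of trust_remote_code are assumptions for illustration, not part of this card.

```python
# Minimal sketch: treating this checkpoint purely as a base for LoRA fine-tuning.
# Repo id, target_modules, and trust_remote_code are assumptions; adapt as needed.
import torch
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

base = AutoModel.from_pretrained(
    "ModernVBERT/colmodernvbert-base",  # hypothetical repository id
    torch_dtype=torch.float32,
    trust_remote_code=True,             # assumed; the architecture may be custom
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query", "key", "value"],  # assumed attention projection names
)

# Wrap the frozen base so that only the LoRA adapters are trainable.
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```

Starting every run from this fixed base checkpoint is what keeps the LoRA initialization deterministic across fine-tuning runs.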
Overview
ModernVBERT is a suite of compact 250M-parameter vision-language encoders that achieve state-of-the-art performance in this size class, matching models up to 10x larger.
For more information about ModernVBERT, please see the arXiv preprint (https://arxiv.org/abs/2510.01149).
Models
- ColModernVBERT is the late-interaction version, fine-tuned for visual document retrieval tasks; it is our most performant model on this task.
- BiModernVBERT is the bi-encoder version, fine-tuned for visual document retrieval tasks.
- ModernVBERT-embed is the bi-encoder version after modality alignment (using an MLM objective) and contrastive learning, without document specialization.
- ModernVBERT is the base model after modality alignment (using an MLM objective).
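To make the difference between the late-interaction and bi-encoder variants concrete, here is a small scoring sketch. It assumes the late-interaction model exposes per-token (multi-vector) embeddings and the bi-encoder a single pooled vector; the function names and shapes are illustrative, not the library's API.

```python
# Illustrative sketch of the two retrieval scoring schemes (not the library API).
import torch
import torch.nn.functional as F

def late_interaction_score(query_tokens: torch.Tensor, doc_tokens: torch.Tensor) -> torch.Tensor:
    """ColBERT-style MaxSim, as used by late-interaction models such as ColModernVBERT.

    query_tokens: (num_query_tokens, dim), doc_tokens: (num_doc_tokens, dim),
    both assumed L2-normalized per token.
    """
    sim = query_tokens @ doc_tokens.T        # (num_query_tokens, num_doc_tokens)
    return sim.max(dim=1).values.sum()       # best document token per query token, summed

def bi_encoder_score(query_vec: torch.Tensor, doc_vec: torch.Tensor) -> torch.Tensor:
    """Single-vector cosine similarity, as used by bi-encoders such as BiModernVBERT."""
    return F.cosine_similarity(query_vec, doc_vec, dim=0)

# Toy example: 8 query tokens, 64 document-patch tokens, 128-dim embeddings.
q_tok = F.normalize(torch.randn(8, 128), dim=-1)
d_tok = F.normalize(torch.randn(64, 128), dim=-1)
print(late_interaction_score(q_tok, d_tok))
print(bi_encoder_score(q_tok.mean(dim=0), d_tok.mean(dim=0)))
```

Late interaction keeps one vector per token and matches them at query time, which typically improves fine-grained document retrieval at the cost of a larger index than the single-vector bi-encoder.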
Evaluation
ColModernVBERT matches the performance of models nearly 10x larger on visual document retrieval benchmarks. It also offers favorable CPU inference speed compared to models of similar performance.
License
We release the ModernVBERT model architectures, model weights, and training codebase under the MIT license.
Citation
If you use ModernVBERT in your work, please cite:
@misc{teiletche2025modernvbertsmallervisualdocument,
  title={ModernVBERT: Towards Smaller Visual Document Retrievers},
  author={Paul Teiletche and Quentin Macé and Max Conti and Antonio Loison and Gautier Viaud and Pierre Colombo and Manuel Faysse},
  year={2025},
  eprint={2510.01149},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2510.01149},
}

