---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
datasets:
- OpenMMReasoner/OpenMMReasoner-SFT-874K
pipeline_tag: image-text-to-text
library_name: transformers
license: cc-by-nc-4.0
---
# OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe
[](https://huggingface.co/collections/lmms-lab/openmmreasoner)
[](https://arxiv.org/abs/2511.16334)
[](https://evolvinglmms-lab.github.io/OpenMMReasoner/)
[](https://github.com/EvolvingLMMs-Lab/OpenMMReasoner)
## Overview
Recent advancements in large reasoning models have fueled growing interest in extending such capabilities to multimodal domains. However, despite notable progress in visual reasoning, the lack of transparent and reproducible data curation and training strategies remains a major barrier to scalable research.
In this work, we introduce **OpenMMReasoner**, a fully transparent two-stage recipe for multimodal reasoning spanning supervised fine-tuning (SFT) and reinforcement learning (RL). In the SFT stage, we construct an 874K-sample cold-start dataset with rigorous step-by-step validation, providing a strong foundation for reasoning capabilities. The subsequent RL stage leverages a 74K-sample dataset across diverse domains to further sharpen and stabilize these abilities, resulting in a more robust and efficient learning process. Extensive evaluations demonstrate that our training recipe not only surpasses strong baselines but also highlights the critical role of data quality and training design in shaping multimodal reasoning performance. Notably, our method achieves an 11.6% improvement over the Qwen2.5-VL-7B-Instruct baseline across nine multimodal reasoning benchmarks, establishing a solid empirical foundation for future large-scale multimodal reasoning research.
## Model Card
This model is the cold-start (SFT) version of OpenMMReasoner, trained on the [OpenMMReasoner-SFT-874K](https://huggingface.co/datasets/OpenMMReasoner/OpenMMReasoner-SFT-874K) dataset.
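For reference, the SFT data can be inspected with the Hugging Face `datasets` library. This is a minimal sketch that assumes the dataset's default configuration and a `train` split; see the dataset card for the exact splits and field names.
```python
from datasets import load_dataset

# Stream the SFT dataset so the full 874K samples are not downloaded up front.
# The split name is an assumption; check the dataset card for the exact splits.
ds = load_dataset("OpenMMReasoner/OpenMMReasoner-SFT-874K", split="train", streaming=True)

# Peek at one example to see the available fields.
example = next(iter(ds))
print(example.keys())
```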
## Basic Usage
We show a basic inference example below. The model can be used in the same way as Qwen2.5-VL-7B-Instruct, including with vLLM. For more details on usage and evaluation, please visit our [GitHub repository](https://github.com/EvolvingLMMs-Lab/OpenMMReasoner).
```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

SYSTEM_PROMPT = (
    "You are a helpful assistant. When the user asks a question, your response must include two parts: "
    "first, the reasoning process enclosed in ... tags, then the final answer enclosed in ... tags."
    "Please provide a clear, concise response within tags that directly addresses the question."
)

# Load the model and its processor.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "OpenMMReasoner/OpenMMReasoner-ColdStart", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("OpenMMReasoner/OpenMMReasoner-ColdStart")

messages = [
    {
        "role": "system",
        "content": [
            {"type": "text", "text": SYSTEM_PROMPT},
        ],
    },
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    },
]

# Preparation for inference: build the chat prompt and collect vision inputs.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Inference: generate the output and strip the prompt tokens before decoding.
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
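Because the model shares the Qwen2.5-VL-7B-Instruct architecture, it can also be served with vLLM. The snippet below is a minimal sketch using vLLM's offline `LLM.chat` interface with OpenAI-style multimodal messages; the sampling settings are illustrative, and Qwen2.5-VL support depends on your installed vLLM version, so please check the vLLM documentation and our GitHub repository for more details.
```python
from vllm import LLM, SamplingParams

SYSTEM_PROMPT = (
    "You are a helpful assistant. When the user asks a question, your response must include two parts: "
    "first, the reasoning process enclosed in ... tags, then the final answer enclosed in ... tags."
    "Please provide a clear, concise response within tags that directly addresses the question."
)

# A minimal sketch: assumes a vLLM version with Qwen2.5-VL support.
llm = LLM(model="OpenMMReasoner/OpenMMReasoner-ColdStart")

# Sampling settings here are illustrative, not an official decoding configuration.
sampling_params = SamplingParams(temperature=0.0, max_tokens=2048)

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {
        "role": "user",
        "content": [
            {
                "type": "image_url",
                "image_url": {"url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg"},
            },
            {"type": "text", "text": "Describe this image."},
        ],
    },
]

# LLM.chat accepts OpenAI-style chat messages, including image_url content parts.
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```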
## Evaluation Results
Our **OpenMMReasoner-7B (OMR-7B)** model demonstrates strong performance across a comprehensive suite of multimodal reasoning benchmarks. With only 874K SFT samples and 74K RL samples—significantly less data than many competing methods—our model achieves state-of-the-art or highly competitive results on 9 out of 14 benchmark tasks. Notably, OMR-7B achieves **79.5%** on MathVista testmini (best among all models), **63.8%** on MathVerse testmini (best), and **79.0%** on WeMath loose (best), demonstrating the effectiveness of our transparent two-stage training recipe. This performance validates our emphasis on data quality and rigorous training design over simply scaling dataset size.
| Model | SFT Data | RL Data | MathVista<br>testmini | MathVision<br>test | MathVision<br>testmini | MathVerse<br>testmini | DynaMath<br>worst | WeMath<br>loose | LogicVista<br>test | MMMU<br>val | MMMU-Pro<br>standard | MMMU-Pro<br>vision | CharXiv<br>reas. | CharXiv<br>desc. |
|-------|----------|---------|-----------------------|--------------------|------------------------|-----------------------|-------------------|-----------------|--------------------|-------------|----------------------|--------------------|------------------|------------------|
| VLAA-Thinker-Qwen2.5-7B | 126k | 25k | 68.0 | 26.4 | - | 48.2 | 22.4 | - | 48.5 | - | - | - | - | - |
| ThinkLite-7B-VL | - | 11k | 71.6 | 24.6 | - | 42.9 | 16.5 | - | 42.7 | - | - | - | - | - |
| VL-Rethinker-7B | - | 39k | 73.7 | 28.4 | - | 46.4 | 17.8 | - | 42.7 | - | 41.7 | - | - | - |
| M2-Reasoning | 6.2M | 102k | 75.0 | 42.1 | - | 40.4 | - | - | 50.6 | - | - | - | - | - |
| MMR1 | 1.6M | 15k | 72.0 | 31.8 | 29.0† | 55.4 | 27.9† | 68.0† | 48.9 | 52.4† | 41.1† | 37.1† | 43.5† | 71.1† |
| OpenVLThinker-7B | 3.3k | 9.6k | 65.3 | 23.0 | 26.9† | 38.1 | 16.8 | 61.9† | 44.5 | 55.1† | 39.7† | 38.4† | 41.0† | 69.2† |
| MM-Eureka-Qwen-7B | - | 15.6k | 72.6 | 28.1 | 32.1† | 45.4 | 23.0 | 59.8† | 46.3 | 54.4† | 40.1† | 37.1† | 42.4† | 74.1† |
| OVR-7B | 2M | 300k | 72.1 | **51.8** | 38.2† | 54.6 | 33.5 | 64.8 | **54.8** | 51.8† | **50.2** | 29.1† | 44.5 | 73.6 |
| **OMR-7B (ours)** | **874k** | **74k** | **79.5** | 43.6 | **38.8** | **63.8** | **34.9** | **79.0** | 50.0 | **57.8** | 44.1 | **40.6** | **46.1** | 73.5 |
**Note:** Bold numbers indicate the best performance, and † indicates results reproduced using the authors' checkpoints.
## Citation
If you find OpenMMReasoner useful for your research and applications, please cite using this BibTeX:
```bibtex
@misc{zhang2025openmmreasonerpushingfrontiersmultimodal,
title={OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe},
author={Kaichen Zhang and Keming Wu and Zuhao Yang and Kairui Hu and Bin Wang and Ziwei Liu and Xingxuan Li and Lidong Bing},
year={2025},
eprint={2511.16334},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2511.16334},
}
```
## Acknowledgements
We gratefully acknowledge the following open-source projects that made this work possible:
- [**lmms-eval**](https://github.com/EvolvingLMMs-Lab/lmms-eval) for providing the comprehensive evaluation framework for large multimodal models.
- [**lmms-engine**](https://github.com/EvolvingLMMs-Lab/lmms-engine) for the SFT training infrastructure and tools.
- [**verl**](https://github.com/volcengine/verl) for the reinforcement learning training framework.
We thank the developers and contributors of these projects for their excellent work and for making their code publicly available.