
Improve model card with metadata, description, links, and usage example

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +100 -4
README.md CHANGED
@@ -1,8 +1,104 @@
  ---
- license: apache-2.0
  datasets:
  - Senqiao/VisionThink-Smart-Train
  - Senqiao/VisionThink-Smart-Val
- base_model:
- - Qwen/Qwen2.5-VL-7B-Instruct
- ---
  ---
+ base_model:
+ - Qwen/Qwen2.5-VL-7B-Instruct
  datasets:
  - Senqiao/VisionThink-Smart-Train
  - Senqiao/VisionThink-Smart-Val
+ license: apache-2.0
+ pipeline_tag: image-text-to-text
+ library_name: transformers
+ ---
+
+ # VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning
+
+ This repository contains the official model for **VisionThink**, a vision-language model (VLM) that dynamically processes images at different resolutions to optimize efficiency without sacrificing performance. It decides on its own whether a downsampled image is sufficient to solve the task at hand and requests the higher-resolution image only when necessary, which sets it apart from existing efficient VLM methods that rely on fixed compression ratios or thresholds.
+
+ VisionThink shows strong fine-grained visual understanding on OCR-related tasks while saving a substantial number of visual tokens on simpler tasks. This is achieved through reinforcement learning with an LLM-as-Judge strategy for general VQA tasks, together with a carefully designed reward function and penalty mechanism that keep the image-resize call ratio stable and reasonable.
+
+ **Paper:** [VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning](https://huggingface.co/papers/2507.13348)
+ **Code:** [dvlab-research/VisionThink](https://github.com/dvlab-research/VisionThink)
+
+ <p align="center" width="100%">
+ <img src="https://raw.githubusercontent.com/dvlab-research/VisionThink/main/files/VisionThink.jpg" alt="VisionThink Overview" style="width: 100%; min-width: 300px; display: block; margin: auto;">
+ </p>
+
+ ## ✨ Highlights
+ <p align="center" width="80%">
+ <img src="https://raw.githubusercontent.com/dvlab-research/VisionThink/main/files/Framework.jpg" alt="VisionThink Framework" style="width: 80%; min-width: 300px; display: block; margin: auto;">
+ </p>
+
+ 1. Our VisionThink leverages reinforcement learning to **autonomously** learn whether to reduce visual tokens. Compared to traditional efficient VLM approaches, our method achieves significant improvements on **fine-grained** benchmarks, such as those involving OCR-related tasks.
+ 2. VisionThink improves performance on **General VQA** tasks while reducing visual tokens by **50%**, achieving **102%** of the original model’s performance across nine benchmarks.
+ 3. VisionThink achieves strong performance and efficiency by simply resizing input images to reduce visual tokens. We hope this inspires further research into **Efficient Reasoning Vision Language Models**.
+
+ ## 🚀 Usage
+
+ You can use VisionThink with the Hugging Face `transformers` library. This model (`Senqiao/VisionThink-Efficient`) is based on `Qwen2.5-VL-7B-Instruct`.
+
+ First, make sure `transformers`, `torch`, `accelerate` (needed for `device_map="auto"`), and `Pillow` are installed:
+ ```bash
+ pip install transformers torch accelerate Pillow requests
+ ```
+
+ Here's an example of how to use the model for inference:
+
+ ```python
+ from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
+ from PIL import Image
+ import requests
+
+ # Load the model and processor.
+ # This repository corresponds to "Senqiao/VisionThink-Efficient";
+ # you might also find "Senqiao/VisionThink-General" on the Hub.
+ model_id = "Senqiao/VisionThink-Efficient"
+ processor = AutoProcessor.from_pretrained(model_id)
+ model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+     model_id, torch_dtype="auto", device_map="auto"
+ )
+
+ # Load an example image (using an image from the project's GitHub for consistency)
+ image_url = "https://raw.githubusercontent.com/dvlab-research/VisionThink/main/files/VisionThink.jpg"
+ image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
+
+ # Define your text prompt
+ text_input = "Describe the image in detail. What is the title?"
+
+ # Prepare messages in chat format.
+ # VisionThink can dynamically request higher resolution, but for basic usage
+ # you interact with it like a standard VLM.
+ messages = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "image", "image": image},
+             {"type": "text", "text": text_input},
+         ],
+     }
+ ]
+
+ # Apply the chat template and preprocess the inputs
+ text = processor.apply_chat_template(
+     messages, tokenize=False, add_generation_prompt=True
+ )
+ inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
+
+ # Generate a response
+ generated_ids = model.generate(**inputs, max_new_tokens=512)
+
+ # Decode only the newly generated tokens
+ response = processor.batch_decode(
+     generated_ids[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
+ )[0]
+ print(response)
+ ```
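+
+ Alternatively, the optional `qwen-vl-utils` package (`pip install qwen-vl-utils`), used on the Qwen2.5-VL model card, can resolve the image entries in `messages` (URLs, local paths, or PIL images) before calling the processor. This is a sketch of that alternative path, not a requirement of this repository:
+
+ ```python
+ # Optional alternative preprocessing path, mirroring the Qwen2.5-VL model card.
+ from qwen_vl_utils import process_vision_info
+
+ text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ image_inputs, video_inputs = process_vision_info(messages)
+ inputs = processor(
+     text=[text],
+     images=image_inputs,
+     videos=video_inputs,
+     padding=True,
+     return_tensors="pt",
+ ).to(model.device)
+ generated_ids = model.generate(**inputs, max_new_tokens=512)
+ ```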
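+
+ The efficiency gains described in the highlights come from the fact that Qwen2.5-VL produces a number of visual tokens that scales with image resolution. As an optional sanity check, you can reuse `processor` and `image` from the example above to see how resizing changes the visual token count. This is a sketch only; the `num_visual_tokens` helper is illustrative and not part of this repository:
+
+ ```python
+ # Illustrative helper (not part of this repository): count the visual tokens
+ # the Qwen2.5-VL processor produces for a given image.
+ def num_visual_tokens(img):
+     feats = processor.image_processor(images=[img], return_tensors="pt")
+     grid = feats["image_grid_thw"][0]              # (temporal, height, width) patch grid
+     merge = processor.image_processor.merge_size   # patches are merged merge x merge (2x2)
+     return int(grid.prod().item()) // (merge ** 2)
+
+ half = image.resize((image.width // 2, image.height // 2))
+ print("full resolution:", num_visual_tokens(image), "visual tokens")
+ print("1/2 resolution: ", num_visual_tokens(half), "visual tokens")
+ ```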
+
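+ The basic example runs VisionThink like a standard VLM on the full-resolution image. The dynamic behaviour described above, answering from a downsampled image and requesting the original resolution only when necessary, is driven by the model's own output in an agentic loop; the official implementation of that loop is in the GitHub repository linked above. The snippet below is only a conceptual sketch, and the helpers `build_inputs` and `asks_for_higher_resolution` are illustrative placeholders, not part of this repository's API:
+
+ ```python
+ # Conceptual sketch only -- NOT the official VisionThink serving code.
+ # `build_inputs` and `asks_for_higher_resolution` are illustrative placeholders.
+ def build_inputs(img, question):
+     msgs = [{"role": "user", "content": [
+         {"type": "image", "image": img},
+         {"type": "text", "text": question},
+     ]}]
+     prompt = processor.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
+     return processor(text=[prompt], images=[img], return_tensors="pt").to(model.device)
+
+ def asks_for_higher_resolution(reply: str) -> bool:
+     # Placeholder check: the real model signals a resize/tool request in its output.
+     return "resize" in reply.lower()
+
+ def answer(img, question):
+     inputs = build_inputs(img, question)
+     out = model.generate(**inputs, max_new_tokens=512)
+     return processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0]
+
+ question = "What is written in the small print?"
+ downsampled = image.resize((image.width // 2, image.height // 2))
+
+ reply = answer(downsampled, question)   # first pass on the cheaper, downsampled image
+ if asks_for_higher_resolution(reply):
+     reply = answer(image, question)     # second pass at full resolution
+ print(reply)
+ ```
+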
+ ## 📝 Citation
+
+ If you find this project useful in your research, please consider citing:
+
+ ```bibtex
+ @article{yang2025visionthink,
+   author={Yang, Senqiao and Li, Junyi and Lai, Xin and Yu, Bei and Zhao, Hengshuang and Jia, Jiaya},
+   title={VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning},
+   journal={arXiv preprint arXiv:2507.13348},
+   year={2025}
+ }
+ ```