---
license: apache-2.0
datasets:
- GUIAgent/Magic-RICH
language:
- en
base_model:
- Qwen/Qwen2-VL-7B-Instruct
---

## News

* [2025-07-20] 📄📄📄 We have released the **technical report** of MagicGUI! Check it out [here](https://arxiv.org/abs/2508.03700).
* [2025-07-20] 🚀🚀🚀 We have open-sourced **MagicGUI**, an on-device GUI agent capable of operating Chinese and English apps and equipped with RFT-enhanced reasoning abilities.

## Overview

MagicGUI is an open-source GUI agent model developed by Honor, built on Qwen2-VL with 7 billion parameters. It demonstrates strong capabilities in visual grounding, screen question answering, and action sequence planning and execution, enabling multimodal perception, understanding, and automated execution of user tasks on mobile devices.

**Data Collection Framework**: a scalable and modular framework for GUI data collection that efficiently gathers high-quality data on mobile devices.

**Powerful Perception and Grounding Capabilities**: perception and grounding on mobile screens are strengthened by integrating large-scale knowledge through tasks such as element referring, element grounding, and screen captioning.

**Unified Action Space**: a comprehensive, unified action space across mobile platforms covers fundamental operations such as Tap, Text Input, and Scroll, and also supports more complex actions such as Wait, Drag, and Takeover.

**Planning-Oriented Reasoning**: a planning-oriented reasoning mechanism improves the stability of task execution and the accuracy of action decisions in dynamic environments.

**Two-Stage Training Paradigm**: Continued Pre-training (CPT) builds core perception, localization, and navigation capabilities, while Reinforcement Fine-tuning (RFT) improves robustness and generalization.

## Framework

The overall training framework of MagicGUI consists of two stages:

**Stage I**: Continued Pre-training (CPT), which trains a foundation model on a large and diverse dataset, followed by an annealing phase on a balanced, high-quality dataset.

**Stage II**: Reinforcement Fine-tuning (RFT), which further enhances the model's robustness and generalization capabilities.

## Quick Start

### Install dependencies

```bash
git clone https://github.com/MagicAgent-GUI
cd MagicGUI
conda create -n gui_agent python=3.11
conda activate gui_agent
pip install -r requirements.txt
```

### Download the model

Download [MagicGUI-RFT](https://huggingface.co/GUIAgent/MagicGUI_RFT) and [MagicGUI-CPT](https://huggingface.co/GUIAgent/MagicGUI_CPT).

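For instance, the checkpoint can be fetched with `huggingface_hub` so that it lands in the `./models/RFT` path used by the inference example below (a sketch; any local directory works as long as `model_path` matches):

```python
from huggingface_hub import snapshot_download

# Download the RFT checkpoint to the path expected by the inference example
snapshot_download("GUIAgent/MagicGUI_RFT", local_dir="./models/RFT")
```
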
#### Hugging Face Inference

```python
import torch
from utils.model import Qwen2VLChat

# 1. Load the model and tokenizer
model_path = "./models/RFT"  # local path to the downloaded checkpoint
model = Qwen2VLChat.from_pretrained(model_path, min_pixels=4*28*28, max_pixels=768*28*28)
model = model.to("cuda:0")

# 2. Build the input: a single-step navigation prompt (in Chinese) that defines the
# action space and asks the agent to output exactly one function call for the user
# instruction "查看会员信息" ("view membership information")
instruction = """你是一个训练有素的手机智能体,能够帮助用户进行单步导航任务。已知当前智能手机的截图<image>,和用户指令"查看会员信息"请输出正确的函数调用以实现用户指令。除了函数调用之外,你不能输出任何其他内容。你可以调用以下函数来控制智能手机:- UI基础操作:1. tap(x: float,y: float) 该函数用于在智能手机屏幕上点击特定点。坐标 x 和 y 表示待点击控件的中心位置。2. scroll(x: float,y: float,direction: str) 该函数用于从起始坐标 (x,y) 开始在智能手机屏幕上滑动操作,方向为手指滑动的方向。坐标 x 和 y 表示屏幕上待滑动控件的中心位置。方向可以是 "up"、"down"、"left" 或 "right"。3. text(x: float,y: float,text_input: str) 该函数用于在智能手机屏幕上输入指定的text。坐标 x 和 y 表示待点击控件的中心位置。- 手机按键操作:4. navigate_back() 该函数用于返回智能手机的上一个屏幕。5. navigate_home() 该函数用于返回手机的home screen或关闭当前应用。- 其他操作:6. long_press(x: float,y: float) 该函数用于在智能手机屏幕上的特定点执行长按操作。坐标 x 和 y 表示待点击控件的中心位置。7. wait() 该函数表示在当前页面等候。8. enter() 该函数表示按下enter键。9. take_over(text_input: str) 该函数用于提示用户接管智能手机,其中 text_input 是提示用户接管手机的原因。如果原因不确定,请填写“请您接管当前界面”。10. drag(x1: float,y1: float,x2: float,y2: float) 该函数执行一个对起始和终点敏感的拖动操作,表示手指从点1拖到点2。常见的场景包括滑块拖动、滚动选择器拖动和图片裁剪。11. screen_shot() 该函数用于截图。12. long_screen_shot() 该函数执行长截图。13. call_api(api_name: str,params: str) 调用指定的API并传入给定的参数。api_name是API的名称。params包含API所需的输入参数。例如,call_api(Amazon, open)意味着打开亚马逊APP。如果你发现当前指令无法在当前页面上执行,你需要输出no_answer。如果你发现当前指令已完成,你需要输出action_completed。"""

image_path = "./assets/test_action.png"

# 3. Build the message format: one image entry and one text entry
messages = [
    {"type": "image", "value": image_path},
    {"type": "text", "value": instruction},
]

# 4. Inference
response = model.generate(message=messages)

print(response)
```

Expected output:

```json
{"tap(700,964)"}
```

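The function-call string in the output (here `tap(700,964)`) is straightforward to post-process. The helper below is a minimal illustrative sketch, not part of the released codebase; the naive comma split is sufficient for coordinate-style actions but would break on text arguments that themselves contain commas:

```python
import re

def parse_action(action: str):
    """Split e.g. 'tap(700,964)' into ('tap', ['700', '964'])."""
    match = re.fullmatch(r"(\w+)\((.*)\)", action.strip())
    if match is None:
        # Bare outputs such as no_answer / action_completed carry no arguments
        return action.strip(), []
    name, arg_str = match.groups()
    args = [arg.strip() for arg in arg_str.split(",")] if arg_str else []
    return name, args

print(parse_action("tap(700,964)"))        # ('tap', ['700', '964'])
print(parse_action("scroll(360,800,up)"))  # ('scroll', ['360', '800', 'up'])
```
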
### Action Space

At each step, the agent outputs a single JSON object that contains:
- One (and only one) primitive action, chosen from the list below;
- Optional modifiers (`duration`, `thought`) and/or a task-level flag (`STATUS`).

Note that all keywords are **case-sensitive**, and we use **compact JSON** (i.e., no extra whitespace), which affects the tokenizer's behavior.

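For reference, compact JSON is what Python's `json.dumps` emits when given explicit separators; the field names below are purely illustrative, not the model's actual schema:

```python
import json

# Hypothetical step object, used only to demonstrate compact serialization
step = {"thought": "open the membership page", "action": "tap(700,964)"}

print(json.dumps(step, separators=(",", ":")))
# {"thought":"open the membership page","action":"tap(700,964)"}
```
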
<table>
<thead>
<tr>
<th>Action</th>
<th>Description</th>
<th>Conditions for R<sub>acc</sub> = +2</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td><b>Tap</b></td>
<td>Click at coordinate (x, y)</td>
<td>dist([x, y], [x<sub>c</sub>, y<sub>c</sub>]) ≤ 14%</td>
<td><code>tap(x,y)</code></td>
</tr>
<tr>
<td><b>Scroll</b></td>
<td>Scroll at coordinate (x, y) with<br>direction up / down / left / right</td>
<td>dist([x, y], [x<sub>c</sub>, y<sub>c</sub>]) ≤ 14%<br>and direction = gt[direction]</td>
<td><code>scroll(x,y,direction)</code></td>
</tr>
<tr>
<td><b>Text Input</b></td>
<td>Type <i>text</i> at coordinate (x, y)</td>
<td>dist([x, y], [x<sub>c</sub>, y<sub>c</sub>]) ≤ 14%<br>and F1(text, gt[text]) > 0.5</td>
<td><code>text(x,y,text_input)</code></td>
</tr>
<tr>
<td><b>Navigate Back</b></td>
<td>ADB command to go back to the previous page</td>
<td>–</td>
<td><code>navigate_back()</code></td>
</tr>
<tr>
<td><b>Navigate Home</b></td>
<td>ADB command to go to the home screen of the device</td>
<td>–</td>
<td><code>navigate_home()</code></td>
</tr>
<tr>
<td><b>Long Press</b></td>
<td>Long press at coordinate (x, y)</td>
<td>dist([x, y], [x<sub>c</sub>, y<sub>c</sub>]) ≤ 14%</td>
<td><code>long_press(x,y)</code></td>
</tr>
<tr>
<td><b>Finish</b></td>
<td>Indicate that the navigation task has been completed</td>
<td>–</td>
<td><code>finish()</code></td>
</tr>
<tr>
<td><b>Wait</b></td>
<td>Wait for several seconds</td>
<td>–</td>
<td><code>wait()</code></td>
</tr>
<tr>
<td><b>Enter</b></td>
<td>ADB command to press enter</td>
<td>–</td>
<td><code>enter()</code></td>
</tr>
<tr>
<td><b>Takeover</b></td>
<td>Request user takeover</td>
<td>–</td>
<td><code>take_over(message)</code></td>
</tr>
<tr>
<td><b>Drag</b></td>
<td>Drag from coordinate (x₁, y₁) to (x₂, y₂)</td>
<td>
dist([x₁, y₁], [x<sub>1c</sub>, y<sub>1c</sub>]) ≤ 7.5%<br>
and dist([x₂, y₂], [x<sub>2c</sub>, y<sub>2c</sub>]) ≤ 7.5%
</td>
<td><code>drag(x1,y1,x2,y2)</code></td>
</tr>
<tr>
<td><b>Call API</b></td>
<td>ADB command to <i>open</i> or <i>kill</i> an app</td>
<td>app = gt[app]<br>and open/kill = gt[operation]</td>
<td><code>call_api(api_name,operation)</code></td>
</tr>
<tr>
<td><b>Screenshot</b></td>
<td>ADB command to take a screenshot</td>
<td>–</td>
<td><code>screen_shot()</code></td>
</tr>
<tr>
<td><b>Long Screenshot</b></td>
<td>ADB command to take a long screenshot</td>
<td>–</td>
<td><code>long_screen_shot()</code></td>
</tr>
</tbody>
</table>

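A thin executor can then map parsed actions onto a device. The sketch below is illustrative only: it assumes the model emits coordinates on a 0-1000 normalized grid (verify this for your setup) and dispatches two of the primitives through standard `adb shell input` commands:

```python
import subprocess

SCREEN_W, SCREEN_H = 1080, 2340  # assumed device resolution

def to_pixels(x: float, y: float) -> tuple[int, int]:
    # Assumption: model coordinates are normalized to a 0-1000 grid
    return int(x / 1000 * SCREEN_W), int(y / 1000 * SCREEN_H)

def execute(name: str, args: list[str]) -> None:
    if name == "tap":
        px, py = to_pixels(float(args[0]), float(args[1]))
        subprocess.run(["adb", "shell", "input", "tap", str(px), str(py)], check=True)
    elif name == "navigate_back":
        subprocess.run(["adb", "shell", "input", "keyevent", "KEYCODE_BACK"], check=True)
    else:
        raise NotImplementedError(f"no handler for action {name!r}")

execute("tap", ["700", "964"])  # taps pixel (756, 2255) on the assumed screen
```
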
## Evaluation

### 1. Data preparation

Please download the four compressed files from the [Magic-RICH dataset](https://huggingface.co/datasets/GUIAgent/Magic-RICH) and extract them into the `datasets/` directory:

- `assets/`
- `datasets/`
  - `Routine`
  - `Instruction`
  - `Complex`
  - `Handling_Exception`
- `utils/`

For the preparation of other open-source datasets, please refer to [Other datasets preparation](datasets/eval_data_process/readme.md).

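The archives can also be fetched programmatically (a sketch using `huggingface_hub`; you still need to extract the downloaded archives, and file names inside the repository may differ):

```python
from huggingface_hub import snapshot_download

# Download the Magic-RICH evaluation data; note repo_type="dataset"
snapshot_download("GUIAgent/Magic-RICH", repo_type="dataset", local_dir="datasets")
```
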
### 2. Parameters

We use `run_eval.py` for evaluation:

- `--data`: name of the evaluation dataset
- `--model`: path to the model
- `--work-dir (str, default: '.')`: directory in which to save evaluation results
- `--mode (str, default: 'all', choices: ['all', 'infer'])`: if set to 'all', the script performs both inference and evaluation; if set to 'infer', it performs inference only
- `--eval_model_path (str, default: None)`: path to the eval model (required if `--mode` is 'all' and `--data` is 'ScreenQA-short')

### 3. Run

```bash
# Referring benchmarks
python run_eval.py --data ScreenQA-short --model MagicGUI_Path --mode all --eval_model_path Eval_Model_Path
python run_eval.py --data ScreenSpot_v2_mobile --model MagicGUI_Path --mode all
python run_eval.py --data Os-Atlas-mobile --model MagicGUI_Path --mode all

# Magic-RICH dataset
python run_eval.py --data Routine --model MagicGUI_Path --mode all
python run_eval.py --data Complex --model MagicGUI_Path --mode all
python run_eval.py --data Instruction --model MagicGUI_Path --mode all
python run_eval.py --data Handling_Exception --model MagicGUI_Path --mode all

# Open-source AndroidControl and GUI-Odyssey
python run_eval.py --data AC-Low --model MagicGUI_Path --mode all
python run_eval.py --data AC-High --model MagicGUI_Path --mode all
python run_eval.py --data GUI-Odyssey --model MagicGUI_Path --mode all
```

## Performance Evaluation

### Performance comparison on the Referring Benchmark

<table>
<thead>
<tr>
<th>Agent Models</th>
<th>ScreenQA-short</th>
<th>ScreenSpot v2 mobile</th>
<th>Os-Atlas-mobile</th>
</tr>
</thead>
<tbody>
<!-- Closed-source Models -->
<tr><td colspan="4"><em>Closed-source Models</em></td></tr>
<tr>
<td>GPT-4o (Hurst et al., 2024)</td>
<td>90.3</td><td>10.6</td><td>4.6</td>
</tr>
<tr>
<td>Gemini 2.0 (Pichai et al., 2024)</td>
<td>90.4</td><td>10.6</td><td>5.8</td>
</tr>
<!-- Open-source Models -->
<tr><td colspan="4"><em>Open-source Models</em></td></tr>
<tr>
<td>InternVL-2-8B (Chen et al., 2024)</td>
<td>88.4</td><td>4.2</td><td>2.4</td>
</tr>
<tr>
<td>Qwen2-VL-7B (Wang et al., 2024)</td>
<td>92.6</td><td>70.7</td><td>27.2</td>
</tr>
<tr>
<td>Qwen2.5-VL-7B (Bai et al., 2025)</td>
<td>92.1</td><td>56.1</td><td>26.6</td>
</tr>
<tr>
<td>UI-TARS-7B (Qin et al., 2025)</td>
<td><b>95.4</b></td><td>88.6</td><td>82.5</td>
</tr>
<tr>
<td>UI-TARS-1.5-7B (Seed, 2025)</td>
<td>93.0</td><td>85.8</td><td>79.3</td>
</tr>
<!-- MagicGUI -->
<tr style="background-color:#e8eafc;">
<td>MagicGUI-CPT</td>
<td>94.6</td><td><b>90.2</b></td><td><b>95.2</b></td>
</tr>
</tbody>
</table>

### Performance comparison on the Magic-RICH dataset

In the tables below, *Type*, *Grd*, and *SR* denote action-type accuracy, grounding accuracy, and success rate, respectively.

<table>
<thead>
<tr>
<th rowspan="2">Agent Models</th>
<th colspan="3">Routine</th>
<th colspan="3">Instruction</th>
<th colspan="3">Complex</th>
<th rowspan="2">Handling Exception</th>
</tr>
<tr>
<th>Type</th><th>Grd</th><th>SR</th>
<th>Type</th><th>Grd</th><th>SR</th>
<th>Type</th><th>Grd</th><th>SR</th>
</tr>
</thead>
<tbody>
<!-- Closed-source Models -->
<tr><td colspan="11"><em>Closed-source Models</em></td></tr>
<tr>
<td>GPT-4o (Hurst et al., 2024)</td>
<td>49.3</td><td>16.7</td><td>4.6</td>
<td>56.6</td><td>13.5</td><td>19.8</td>
<td>49.0</td><td>14.6</td><td>7.4</td>
<td>85.1</td>
</tr>
<tr>
<td>Gemini 2.0 (Pichai et al., 2024)</td>
<td>89.2</td><td>49.4</td><td>34.7</td>
<td>84.1</td><td>54.2</td><td>51.4</td>
<td>83.3</td><td>50.3</td><td>42.0</td>
<td>73.7</td>
</tr>
<!-- Open-source Models -->
<tr><td colspan="11"><em>Open-source Models</em></td></tr>
<tr>
<td>InternVL-2-8B (Chen et al., 2024)</td>
<td>30.1</td><td>2.8</td><td>1.3</td>
<td>37.1</td><td>4.0</td><td>15.8</td>
<td>17.1</td><td>6.0</td><td>1.3</td>
<td>70.8</td>
</tr>
<tr>
<td>Qwen2-VL-7B (Wang et al., 2024)</td>
<td>71.7</td><td>41.0</td><td>28.1</td>
<td>73.6</td><td>43.9</td><td>41.5</td>
<td>65.6</td><td>28.7</td><td>21.2</td>
<td>68.3</td>
</tr>
<tr>
<td>Qwen2.5-VL-7B (Bai et al., 2025)</td>
<td>94.3</td><td>92.6</td><td>76.3</td>
<td>89.3</td><td><u>95.7</u></td><td>83.6</td>
<td>86.6</td><td>69.6</td><td>60.0</td>
<td>67.0</td>
</tr>
<tr>
<td>UI-TARS-7B (Qin et al., 2025)</td>
<td>83.5</td><td>84.9</td><td>73.3</td>
<td>76.6</td><td>85.6</td><td>69.8</td>
<td>91.4</td><td>69.1</td><td>67.0</td>
<td>3.6</td>
</tr>
<tr>
<td>UI-TARS-1.5-7B (Seed, 2025)</td>
<td>85.6</td><td>96.2</td><td>81.5</td>
<td>78.6</td><td>92.1</td><td>72.2</td>
<td><b>94.7</b></td><td>74.3</td><td>71.1</td>
<td>1.0</td>
</tr>
<tr>
<td>MiMo-VL-7B-SFT (Xiaomi, 2025)</td>
<td>93.0</td><td>77.9</td><td>65.3</td>
<td>89.7</td><td>85.7</td><td>75.4</td>
<td>89.1</td><td>80.1</td><td>71.0</td>
<td>57.0</td>
</tr>
<tr>
<td>AgentCPM-GUI (Zhang et al., 2025)</td>
<td>84.3</td><td>92.2</td><td>75.1</td>
<td>70.4</td><td>80.7</td><td>56.0</td>
<td>72.3</td><td>54.6</td><td>39.4</td>
<td>2.4</td>
</tr>
<!-- MagicGUI -->
<tr style="background-color:#e8eafc;">
<td>MagicGUI-CPT</td>
<td><b>98.5</b></td><td><b>98.5</b></td><td><b>97.2</b></td>
<td><b>95.5</b></td><td><b>96.3</b></td><td><b>92.9</b></td>
<td>88.5</td><td><b>82.3</b></td><td><b>72.9</b></td>
<td><b>93.2</b></td>
</tr>
<tr style="background-color:#e8eafc;">
<td>MagicGUI-RFT</td>
<td><b>99.7</b></td><td>97.5</td><td><b>97.5</b></td>
<td><b>97.2</b></td><td>95.6</td><td><b>94.0</b></td>
<td>92.1</td><td>80.4</td><td><b>74.1</b></td>
<td>92.1</td>
</tr>
</tbody>
</table>

### Performance comparison on open-source AndroidControl and GUI-Odyssey datasets

<table>
<thead>
<tr>
<th rowspan="2">Agent Models</th>
<th colspan="2">AC-Low</th>
<th colspan="2">AC-High</th>
<th colspan="2">GUI-Odyssey</th>
</tr>
<tr>
<th>Type</th><th>SR</th>
<th>Type</th><th>SR</th>
<th>Type</th><th>SR</th>
</tr>
</thead>
<tbody>
<!-- Closed-source Models -->
<tr><td colspan="7"><em>Closed-source Models</em></td></tr>
<tr>
<td>GPT-4o (Hurst et al., 2024)</td>
<td>-</td><td>19.5</td>
<td>-</td><td>20.8</td>
<td>-</td><td>20.4</td>
</tr>
<tr>
<td>Gemini 2.0 (Pichai et al., 2024)</td>
<td>-</td><td>28.5</td>
<td>-</td><td>60.2</td>
<td>-</td><td>3.3</td>
</tr>
<tr>
<td>Claude 2.0 (Anthropic, 2024)</td>
<td>-</td><td>28.5</td>
<td>-</td><td>12.5</td>
<td>60.9</td><td>-</td>
</tr>
<!-- Open-source Models -->
<tr><td colspan="7"><em>Open-source Models</em></td></tr>
<tr>
<td>Qwen2-VL-7B (Wang et al., 2024)</td>
<td>55.7</td><td>36.2</td>
<td>45.8</td><td>21.2</td>
<td>58.6</td><td>13.3</td>
</tr>
<tr>
<td>Qwen2.5-VL-7B (Bai et al., 2025)</td>
<td>94.1</td><td>85.0</td>
<td>75.1</td><td>62.9</td>
<td>59.5</td><td>46.3</td>
</tr>
<tr>
<td>Aguvis-7B (Xu et al., 2024)</td>
<td>93.9</td><td>89.4</td>
<td>65.6</td><td>54.2</td>
<td>26.7</td><td>13.5</td>
</tr>
<tr>
<td>OS-Atlas-7B (Wu et al., 2024)</td>
<td>73.0</td><td>67.3</td>
<td>70.4</td><td>56.5</td>
<td>91.8*</td><td>76.8*</td>
</tr>
<tr>
<td>UI-TARS-7B (Qin et al., 2025)</td>
<td>95.2</td><td>91.8</td>
<td>81.6</td><td>74.4</td>
<td>86.1</td><td>67.9</td>
</tr>
<tr>
<td>AgentCPM-GUI (Zhang et al., 2025)</td>
<td>94.4</td><td>90.2</td>
<td>77.7</td><td>69.2</td>
<td><b>90.9</b></td><td><b>75.0</b></td>
</tr>
<!-- MagicGUI -->
<tr style="background-color:#e8eafc;">
<td>MagicGUI-CPT</td>
<td>94.5</td><td>86.7</td>
<td>84.6</td><td>73.1</td>
<td><b>90.4</b></td><td>73.5</td>
</tr>
<tr style="background-color:#e8eafc;">
<td>MagicGUI-RFT</td>
<td><b>97.2</b></td><td><b>93.5</b></td>
<td><b>84.7</b></td><td><b>76.3</b></td>
<td>89.7</td><td><b>74.3</b></td>
</tr>
</tbody>
</table>

## License

* This project is licensed under the [Apache-2.0](./LICENSE) license. The model weights are fully open for academic research; for a commercial use license, please contact [email protected]. This project uses the pre-trained Qwen2-VL-7B-Instruct model for initialization, which is likewise licensed under Apache-2.0.

## Citation

If **MagicGUI** is useful for your research, please cite:

```bibtex
@misc{tang2025magicguifoundationalmobilegui,
      title={MagicGUI: A Foundational Mobile GUI Agent with Scalable Data Pipeline and Reinforcement Fine-tuning},
      author={Liujian Tang and Shaokang Dong and Yijia Huang and Minqi Xiang and Hongtao Ruan and Bin Wang and Shuo Li and Zhiheng Xi and Zhihui Cao and Hailiang Pang and Heng Kong and He Yang and Mingxu Chai and Zhilin Gao and Xingyu Liu and Yingnan Fu and Jiaming Liu and Xuanjing Huang and Yu-Gang Jiang and Tao Gui and Qi Zhang and Kang Wang and Yunke Zhang and Yuran Wang},
      year={2025},
      eprint={2508.03700},
      archivePrefix={arXiv},
      primaryClass={cs.HC},
      url={https://arxiv.org/abs/2508.03700},
}
```