nielsr (HF Staff) committed
Commit 7078d7f · verified · 1 parent: bb5c501

Improve model card: Update paper link and add paper introduction


This PR enhances the model card for `OpenGVLab/ScaleCUA-3B` by:
- Adding an explicit introductory sentence that links directly to the paper ([ScaleCUA: Scaling Open-Source Computer Use Agents with Cross-Platform Data](https://huggingface.co/papers/2509.15221)) for improved discoverability.
- Correcting the "Paper" link in the quick-links block to point to the official Hugging Face paper page: `https://huggingface.co/papers/2509.15221`.

All existing usage examples, the GitHub link, and the license information are intended to remain unchanged.

Files changed (1): README.md (+19 −15)
README.md CHANGED
@@ -1,28 +1,30 @@
 ---
-license: apache-2.0
+base_model:
+- Qwen/Qwen2.5-VL-3B-Instruct
 datasets:
 - OpenGVLab/ScaleCUA-Data
 language:
 - en
+library_name: transformers
+license: apache-2.0
 metrics:
 - accuracy
-base_model:
-- Qwen/Qwen2.5-VL-3B-Instruct
 pipeline_tag: image-text-to-text
-library_name: transformers
 tags:
 - agent
 ---
 
 # SCALECUA: SCALING UP COMPUTER USE AGENTS WITH CROSS-PLATFORM DATA
 
-[\[📂 GitHub\]](https://github.com/OpenGVLab/ScaleCUA) [\[📜 Paper\]](https://github.com/OpenGVLab/ScaleCUA) [\[🚀 Quick Start\]](#model-loading)
+This model is part of the work presented in the paper [ScaleCUA: Scaling Open-Source Computer Use Agents with Cross-Platform Data](https://huggingface.co/papers/2509.15221).
+
+[\[📂 GitHub\]](https://github.com/OpenGVLab/ScaleCUA) [\[📜 Paper\]](https://huggingface.co/papers/2509.15221) [\[🚀 Quick Start\]](#model-loading)
 
 
 
 ## Introduction
 
-Recent advances in Vision-Language Models have enabled the development of agents capable of automating interactions with graphical user interfaces. Some computer use agents demonstrate strong performance, but they are typically built on closed-source models or inaccessible proprietary datasets. Moreover, existing open-source datasets remain insufficient for developing cross-platform, general-purpose computer use agents. To bridge this gap, we scale up the computer use dataset, constructed via a novel dual-loop interactive pipeline that combines an automated agent and a human expert for data collection. It spans **6 operating systems** and **3 task domains**, offering a large-scale and diverse corpus for training computer use agents.
+Recent advances in Vision-Language Models (VLMs) have enabled the development of agents capable of automating interactions with graphical user interfaces. Some computer use agents demonstrate strong performance, but they are typically built on closed-source models or inaccessible proprietary datasets. Moreover, existing open-source datasets remain insufficient for developing cross-platform, general-purpose computer use agents. To bridge this gap, we scale up the computer use dataset, constructed via a novel dual-loop interactive pipeline that combines an automated agent and a human expert for data collection. It spans **6 operating systems** and **3 task domains**, offering a large-scale and diverse corpus for training computer use agents.
 Building on this corpus, we develop **ScaleCUA**, capable of seamless operation across heterogeneous platforms. Trained on our dataset, it delivers consistent gains on several benchmarks, improving absolute success rates by **+26.6 points** on WebArena-Lite-v2 and **+10.7 points** on ScreenSpot-Pro compared to the baseline. Moreover, our ScaleCUA family achieves state-of-the-art performance across multiple benchmarks, e.g., **94.4%** on MMBench-GUI L1-Hard, **60.6%** on OSWorld-G, and **47.4%** on WebArena-Lite-v2. These results highlight the effectiveness of our data-centric methodology in scaling GUI understanding, grounding, and cross-platform task completion. We make our data, models, and code publicly available to facilitate future research: https://github.com/OpenGVLab/ScaleCUA.
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6502f241b1792803da7e8def/YdK0I790ehLAKpR1vGkX1.png)
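
Note on the metadata above: given `library_name: transformers`, `pipeline_tag: image-text-to-text`, and the `Qwen/Qwen2.5-VL-3B-Instruct` base model, a minimal loading sketch would plausibly look like the following. The auto-class choice and arguments are assumptions inferred from the metadata, not taken from this card.

```python
# Hypothetical loading sketch inferred from the card metadata; assumes the
# checkpoint follows its Qwen2.5-VL base architecture and is covered by the
# image-text-to-text auto-class in recent transformers releases.
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "OpenGVLab/ScaleCUA-3B"
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native dtype
    device_map="auto",    # requires `accelerate` to be installed
)
processor = AutoProcessor.from_pretrained(model_id)
```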
@@ -315,7 +317,8 @@ Previous operations:
 def format_history(history):
     if len(history) > 0:
         actions_history = [f"Step {i+1}: {low_level}" for i, low_level in enumerate(history)]
-        return "\n".join(actions_history)
+        return "
+".join(actions_history)
     else:
         return None
 
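As a sanity check on this hunk: the `-` side joins the numbered steps with an escaped `"\n"`, while the `+` side appears to carry a literal line break inside a single-line string literal, which would not parse as Python. A standalone sketch of the `-` version, with invented action strings:

```python
# Self-contained sketch of format_history as it reads on the "-" side of the
# hunk; the action strings below are invented examples, not real agent output.
def format_history(history):
    if len(history) > 0:
        actions_history = [f"Step {i+1}: {low_level}" for i, low_level in enumerate(history)]
        return "\n".join(actions_history)
    else:
        return None

print(format_history(["click(x=0.42, y=0.13)", "write(message='ls -la')"]))
# Step 1: click(x=0.42, y=0.13)
# Step 2: write(message='ls -la')
```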
@@ -382,7 +385,8 @@ def parse_response(response: str) -> Dict:
     if action_matches:
         for match in action_matches:
             # Split each match by newline and strip whitespace from each line
-            lines = [line.strip() for line in match.split('\n') if line.strip()]
+            lines = [line.strip() for line in match.split('
+') if line.strip()]
             actions.extend(lines)
     operation_match = re.search(r'<operation>\s*(.*?)\s*</operation>', response, re.DOTALL)
     operation = operation_match.group(1).strip() if operation_match else None
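The `<operation>` extraction in this hunk can be exercised on its own; the sample response string below is fabricated for illustration:

```python
import re

# Isolated sketch of the <operation> extraction from the hunk above;
# the response string is a made-up example of the expected tag format.
response = "<operation>Click the search box, then type the query.</operation>"
operation_match = re.search(r'<operation>\s*(.*?)\s*</operation>', response, re.DOTALL)
operation = operation_match.group(1).strip() if operation_match else None
print(operation)  # Click the search box, then type the query.
```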
@@ -419,7 +423,7 @@ def parse_actions(self, actions):
     keys_keyword_match = re.search(r"keys\s*=\s*(.*)", args_str, re.DOTALL)
     if keys_keyword_match:
         keys_str = keys_keyword_match.group(1).strip()
-        if (keys_str.startswith("'") and keys_str.endswith("'")) or \
+        if (keys_str.startswith("'") and keys_str.endswith("'")) or \\
            (keys_str.startswith('"') and keys_str.endswith('"')):
             keys_str = keys_str[1:-1]
         elif keys_str.startswith("[") and keys_str.endswith("]"):
@@ -428,7 +432,7 @@ def parse_actions(self, actions):
         keys = keys_str
     elif args_str:
         keys_str = args_str.strip()
-        if (keys_str.startswith("'") and keys_str.endswith("'")) or \
+        if (keys_str.startswith("'") and keys_str.endswith("'")) or \\
           (keys_str.startswith('"') and keys_str.endswith('"')):
             keys_str = keys_str[1:-1]
         keys = keys_str
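This hunk and the previous one change only the backslash continuation after `or`. The quote-stripping logic itself, run in isolation with invented `keys_str` samples:

```python
# The quote-stripping branch from the two hunks above, in isolation;
# the keys_str samples are invented.
for keys_str in ("'enter'", '"ctrl"', "enter"):
    if (keys_str.startswith("'") and keys_str.endswith("'")) or \
       (keys_str.startswith('"') and keys_str.endswith('"')):
        keys_str = keys_str[1:-1]
    print(keys_str)
# enter
# ctrl
# enter
```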
@@ -456,15 +460,15 @@ def parse_actions(self, actions):
 
     else:
         if "=" in args_str:
-            for arg in re.finditer(r"(\w+)=\[([^\]]+)\]", args_str):
+            for arg in re.finditer(r"(\w+)=\\[([^\\]]+)\\]", args_str):
                 param = arg.group(1)
                 list_str = arg.group(2)
 
                 list_items = []
-                for item in re.finditer(r"'([^']*)'|\"([^\"]*)\"|([^,\]]+)", list_str):
+                for item in re.finditer(r"'([^']*)'|\"([^\"]*)\"|([^,\\]]+)", list_str):
                     val = (item.group(1) or item.group(2) or item.group(3)).strip()
                     if val:
-                        list_items.append(val.strip('"\''))
+                        list_items.append(val.strip('\"\''))
 
                 args[param] = list_items
 
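The keyword list-argument parsing in this hunk can be checked standalone with the original single-backslash patterns; the `args_str` value is a made-up example:

```python
import re

# Standalone sketch of the list-argument parsing from the hunk above,
# using the "-" side patterns; args_str is an invented sample.
args_str = "keys=['ctrl', 'c']"
args = {}
for arg in re.finditer(r"(\w+)=\[([^\]]+)\]", args_str):
    param, list_str = arg.group(1), arg.group(2)
    list_items = []
    for item in re.finditer(r"'([^']*)'|\"([^\"]*)\"|([^,\]]+)", list_str):
        val = (item.group(1) or item.group(2) or item.group(3)).strip()
        if val:
            list_items.append(val.strip('"\''))
    args[param] = list_items
print(args)  # {'keys': ['ctrl', 'c']}
```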
@@ -483,7 +487,7 @@ def parse_actions(self, actions):
     elif value_str.lower() in ("true", "false"):
         value = value_str.lower() == "true"
     else:
-        value = value_str.strip('"\'')
+        value = value_str.strip('\"\'')
 
     args[param] = value
 
@@ -493,7 +497,7 @@ def parse_actions(self, actions):
     for arg in re.finditer(r"'([^']*)'|\"([^\"]*)\"|([^,]+)", args_str):
         val = (arg.group(1) or arg.group(2) or arg.group(3)).strip()
         if val:
-            args_list.append(val.strip('"\''))
+            args_list.append(val.strip('\"\''))
 
     if args_list:
         args["args"] = args_list
 