---
license: mit
task_categories:
  - text-generation
language:
  - en
tags:
  - agent
---

Supported Tasks

  • Natural Language → GUI Action Grounding: convert user instructions into structured JSON action objects.
  • Instruction Following: models learn to interpret varied natural-language phrasings (e.g., “press submit” vs. “click the submit button”).
  • Multi-step UI Automation: some samples involve sequences of actions (e.g., open site → type → press Enter → screenshot).

Languages

  • English (en)
  • Instructions are generated with simple phrasing variations (synonyms, alternative wordings).

Dataset Structure

Data Format

Each entry is a JSON object with:

{
  "instruction": "Search Google for Python Playwright.",
  "actions": [
    {"action": "type", "target": "textarea[name=q]", "value": "Python Playwright"},
    {"action": "keypress", "options": {"key": "Enter"}}
  ]
}
  • instruction: the natural-language input.
  • actions: a list of structured GUI actions that follow the schema below (a minimal loading sketch comes right after this list).
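The snippet below is a minimal sketch of how such records could be read with the Hugging Face datasets library. The split name ("train") and the storage type of the actions field (list of dicts vs. JSON string) are assumptions, not guarantees of this release.

import json
from datasets import load_dataset

# Load the dataset from the Hub; the split name "train" is an assumption.
ds = load_dataset("ArunKr/gui_grounding_dataset-1k", split="train")

example = ds[0]
print(example["instruction"])

# "actions" may arrive as a JSON string or as a list of dicts depending on
# how the files were uploaded, so handle both cases defensively.
actions = example["actions"]
if isinstance(actions, str):
    actions = json.loads(actions)

for step in actions:
    print(step["action"], step.get("target"), step.get("value"))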

JSON Schema for Actions

{
  "action": "string",
  "target": "string (CSS selector, text, XPath, etc.)",
  "value": "string (optional, e.g., input text, file path)",
  "options": { "key": "Enter", "button": "left", "count": 2, "direction": "down" }
}
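Purely as an illustration, this schema can be mirrored in Python for lightweight validation. The field names come from the schema above; the GuiAction class and parse_action helper are hypothetical and not shipped with the dataset.

from dataclasses import dataclass, field
from typing import Any, Dict, Optional

@dataclass
class GuiAction:
    # Mirrors the action schema above; only "action" is required.
    action: str
    target: Optional[str] = None   # CSS selector, text=..., XPath, etc.
    value: Optional[str] = None    # input text, file path, URL, ...
    options: Dict[str, Any] = field(default_factory=dict)  # key, button, count, direction

def parse_action(raw: dict) -> GuiAction:
    # Hypothetical helper: raises KeyError if the required "action" field is missing.
    return GuiAction(
        action=raw["action"],
        target=raw.get("target"),
        value=raw.get("value"),
        options=raw.get("options", {}),
    )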

Dataset Statistics

  • Size: 1,000 examples
  • Average actions per instruction: 1.7
  • Action types covered (a minimal Playwright dispatch sketch follows this list):

    • Click, double-click, right-click
    • Type, clear, keypress
    • Scroll, hover, drag-drop
    • Checkbox, radio, dropdown selection
    • Upload, download
    • Dialog handling (accept/dismiss)
    • Screenshot, highlight
    • Navigation (open_url, back, forward, refresh, switch_tab, resize)
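Because the selectors follow Playwright-style conventions (textarea[name=q], text=Login), one natural way to consume the data is to map action types onto Playwright calls. The dispatcher below is a sketch for illustration only: it is not part of the dataset, it covers just a few of the listed types, and the exact action strings beyond those shown in the example record ("type", "keypress", "open_url") should be checked against the data.

from playwright.sync_api import sync_playwright

def run_actions(page, actions):
    # Dispatch a subset of the dataset's action types onto Playwright calls.
    for a in actions:
        kind, target = a["action"], a.get("target")
        value, opts = a.get("value"), a.get("options", {})
        if kind == "open_url":
            page.goto(value or target)
        elif kind == "click":
            page.click(target)
        elif kind == "type":
            page.fill(target, value)
        elif kind == "keypress":
            page.keyboard.press(opts.get("key", "Enter"))
        elif kind == "hover":
            page.hover(target)
        elif kind == "scroll":
            # Scroll via the mouse wheel; the direction string is an assumption.
            page.mouse.wheel(0, 500 if opts.get("direction", "down") == "down" else -500)
        elif kind == "screenshot":
            page.screenshot(path=value or "shot.png")
        elif kind == "back":
            page.go_back()
        elif kind == "refresh":
            page.reload()
        else:
            raise NotImplementedError(f"action type not handled in this sketch: {kind}")

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    run_actions(page, [
        {"action": "open_url", "value": "https://www.google.com"},
        {"action": "type", "target": "textarea[name=q]", "value": "Python Playwright"},
        {"action": "keypress", "options": {"key": "Enter"}},
    ])
    browser.close()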

Use Cases

  • Training GUI grounding agents (LLM-based or hybrid).
  • Creating instruction-tuned models for web automation.
  • Benchmarking natural language → structured action translation (a simple exact-match metric is sketched after this list).
  • Bootstrapping RPA (Robotic Process Automation) agents with LLMs.
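For the benchmarking use case, a strict exact-match score over predicted action sequences is one obvious baseline. The helper below is illustrative only and not an official evaluation script for this dataset.

def exact_match(pred_actions, gold_actions):
    # 1.0 only if the prediction reproduces the reference exactly
    # (same length, same action/target/value/options per step), else 0.0.
    if len(pred_actions) != len(gold_actions):
        return 0.0
    for p, g in zip(pred_actions, gold_actions):
        for key in ("action", "target", "value"):
            if p.get(key) != g.get(key):
                return 0.0
        if p.get("options", {}) != g.get("options", {}):
            return 0.0
    return 1.0

gold = [
    {"action": "type", "target": "textarea[name=q]", "value": "Python Playwright"},
    {"action": "keypress", "options": {"key": "Enter"}},
]
pred = list(gold)  # a perfect prediction, for demonstration
print(exact_match(pred, gold))  # 1.0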

Generation Process

  • Synthetic data generated with templates + variations (a toy illustration follows this list).
  • Actions derived from common web automation tasks (Google, YouTube, Gmail, Amazon, GitHub, Slack, Notion, Trello, etc.).
  • Covers both single-step (click, type) and multi-step (search + click + screenshot) workflows.
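To make the template-plus-variation idea concrete, a toy generator might look like the sketch below. The templates, synonym lists, and queries are invented for illustration and do not reproduce the actual generation script.

import random

VERBS = ["Search", "Look up", "Find"]        # invented synonym list
QUERIES = ["Python Playwright", "weather in Paris", "best laptops 2025"]

def make_example():
    # Fill a single template with randomly chosen variations.
    verb, query = random.choice(VERBS), random.choice(QUERIES)
    return {
        "instruction": f"{verb} Google for {query}.",
        "actions": [
            {"action": "open_url", "value": "https://www.google.com"},
            {"action": "type", "target": "textarea[name=q]", "value": query},
            {"action": "keypress", "options": {"key": "Enter"}},
        ],
    }

samples = [make_example() for _ in range(5)]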

Limitations

  • Synthetic: No real human annotations.
  • Web-centric: mostly web-app actions; desktop/native-app actions are rare.
  • Surface-level grounding: uses simple selectors (text=Login, input#username) rather than pixel-level coordinates or accessibility trees.

Licensing

  • MIT License for dataset release.
  • Free for research & commercial use.
  • Attribution appreciated: “GUI Grounding Dataset (ArunKr, 2025)”.

Citation

If you use this dataset in your work, please cite:

@dataset{gui_grounding_2025,
  author    = {ArunKr},
  title     = {GUI Grounding Dataset},
  year      = {2025},
  url       = {https://huggingface.co/datasets/ArunKr/gui_grounding_dataset-1k},
  license   = {MIT}
}