nielsr HF Staff committed on
Commit
53064a7
·
verified ·
1 Parent(s): 8e001a4

Enhance dataset card with detailed description, tags, and usage instructions

This PR significantly expands the dataset card for the SpaCE-10 dataset to provide a comprehensive and user-friendly resource on the Hugging Face Hub.

Key updates include:
- **Expanded Description**: Detailed introduction to SpaCE-10, outlining its purpose as a benchmark for MLLMs in compositional spatial intelligence, its key features, and scope.
- **GitHub Repository Link**: Added a direct link to the official GitHub repository for code and further resources.
- **Visual Enhancements**: Integrated logo, teaser image, and performance leaderboards (including "Single-Choice vs. Double-Choice" and "Capability Score Ranking") directly from the GitHub README, with corrected image paths for display on the Hub.
- **Practical Guides**: Included sections for "News," "Environment" setup, and "Evaluation" instructions to help users get started efficiently.
- **Sample Usage**: Added a Python code snippet demonstrating how to load the dataset using the `datasets` library, facilitating quick adoption.
- **Metadata Tags**: Added `multimodal`, `benchmark`, `spatial-reasoning`, and `indoor-scenes` tags to the YAML metadata for improved discoverability and categorization within the Hugging Face ecosystem.
- **Citation**: Ensured the academic citation is included for proper attribution.

These additions aim to make the SpaCE-10 dataset card a complete reference for researchers and practitioners.

Files changed (1):
  README.md (+111 -1)

README.md CHANGED
@@ -2,6 +2,116 @@
  license: mit
  task_categories:
  - image-text-to-text
  ---
 
- This repository contains the dataset for the paper [SpaCE-10: A Comprehensive Benchmark for Multimodal Large Language Models in Compositional Spatial Intelligence](https://huggingface.co/papers/2506.07966).
  license: mit
  task_categories:
  - image-text-to-text
+ tags:
+ - multimodal
+ - benchmark
+ - spatial-reasoning
+ - indoor-scenes
  ---
 
+ This repository contains the dataset for the paper [SpaCE-10: A Comprehensive Benchmark for Multimodal Large Language Models in Compositional Spatial Intelligence](https://huggingface.co/papers/2506.07966).
+
+ <div align="center">
+ <h1><img src="https://raw.githubusercontent.com/Cuzyoung/SpaCE-10/main/assets/space-10-logo.png" width="8%"> SpaCE-10: A Comprehensive Benchmark for Multimodal Large Language Models in Compositional Spatial Intelligence</h1>
+ </div>
+
+ **GitHub Repository:** [https://github.com/Cuzyoung/SpaCE-10](https://github.com/Cuzyoung/SpaCE-10)
+
+ ---
+ # 🧠 What is SpaCE-10?
+
+ **SpaCE-10** is a **compositional spatial intelligence benchmark** for evaluating **Multimodal Large Language Models (MLLMs)** in indoor environments. Our contributions are as follows:
+
+ - 🧬 We define an **Atomic Capability Pool** of 10 **atomic spatial capabilities**.
+ - 🔗 By composing different atomic capabilities, we design **8 compositional QA types**.
+ - 📈 The SpaCE-10 benchmark contains 5,000+ QA pairs.
+ - 🏠 All QA pairs come from 811 indoor scenes (ScanNet++, ScanNet, 3RScan, ARKitScenes).
+ - 🌍 SpaCE-10 spans both 2D and 3D MLLM evaluation and can be seamlessly adapted to MLLMs that accept 3D scan input.
+
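To make the QA format concrete, here is a purely hypothetical sketch of a single-choice record. The field names and values below are invented for illustration only and are not the actual SpaCE-10 schema; inspect the dataset itself for the real structure:

```python
# Hypothetical single-choice QA record -- NOT the actual SpaCE-10 schema.
qa = {
    "scene_id": "scannetpp_0001",              # one of the 811 indoor scenes
    "capabilities": ["counting", "distance"],  # atomic capabilities composed by this QA type
    "question": "How many chairs are closer to the table than the sofa is?",
    "choices": {"A": "1", "B": "2", "C": "3", "D": "4"},
    "answer": "B",
}

# A well-formed record's gold answer must be one of its choice keys.
assert qa["answer"] in qa["choices"]
```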
+ <div align="center">
+ <br><br>
+ <img src="https://raw.githubusercontent.com/Cuzyoung/SpaCE-10/main/assets/space-10-teaser.png" width="100%">
+ <br><br>
+ </div>
+
+ ---
+ # 🔥🔥🔥 News
+ - [2025/07/12] Adjusted some QAs of SpaCE-10 and added the RemyxAI models' performance to the leaderboard.
+ - [2025/06/11] Scans for 3D MLLMs and our manually collected 3D snapshots are coming soon.
+ - [2025/06/10] The evaluation code has been released (see the Evaluation section below).
+ - [2025/06/09] We have released the benchmark for 2D MLLMs on [Hugging Face](https://huggingface.co/datasets/Cusyoung/SpaCE-10).
+ - [2025/06/09] The SpaCE-10 paper is released on [arXiv](https://arxiv.org/abs/2506.07966v1)!
+ ---
+
+ # Performance Leaderboard - Single-Choice
+ 🎉 LLaVA-OneVision-72B ranks first among all tested models.
+
+ 🎉 GPT-4o achieves the best score among the tested closed-source models.
+
+ A large gap still exists between humans and models in compositional spatial intelligence.
+
+ <div align="center">
+ <img src="https://raw.githubusercontent.com/Cuzyoung/SpaCE-10/main/assets/Perfomance_Leader_Board.png" width="100%">
+ <br>
+ </div>
+
+ # Single-Choice vs. Double-Choice
+ <div align="center">
+ <img src="https://raw.githubusercontent.com/Cuzyoung/SpaCE-10/main/assets/single-double.png" width="100%">
+ <br>
+ </div>
+
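The gap between the two settings can be understood with a simple scoring sketch: in a double-choice question a model must recover both correct options, so partial knowledge earns no credit under a strict exact-match rule. The functions below are a minimal illustration of that idea, not the official scoring code from the repository:

```python
def single_choice_score(pred: str, answer: str) -> float:
    """1.0 if the single predicted option matches the gold option, else 0.0."""
    return 1.0 if pred == answer else 0.0


def double_choice_score(preds: set, answers: set) -> float:
    """1.0 only if the predicted option pair matches the gold pair exactly (order-free)."""
    return 1.0 if set(preds) == set(answers) else 0.0


# A model that finds only one of the two correct options gets no credit
# under this strict rule:
print(double_choice_score({"A"}, {"A", "C"}))  # 0.0
```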
+ # Capability Score Ranking - Single-Choice
+ <div align="center">
+ <img src="https://raw.githubusercontent.com/Cuzyoung/SpaCE-10/main/assets/Capability_Score_Matrix.png" width="100%">
+ <br>
+ </div>
+
+ # Environment
+ The evaluation of SpaCE-10 is based on lmms-eval, so we follow the environment setup of lmms-eval:
+ ```bash
+ git clone https://github.com/Cuzyoung/SpaCE-10.git
+ cd SpaCE-10
+ uv venv dev --python=3.10
+ source dev/bin/activate
+ uv pip install -e .
+ ```
+
+ # Evaluation
+ Take InternVL2.5-8B as an example:
+ ```bash
+ cd lmms-eval/run_bash
+ bash internvl2.5-8b.sh
+ ```
+ Note that each new model you test requires installing that model's own environment first.
+
+ ---
+ # Sample Usage
+
+ You can load the dataset using the Hugging Face `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("Cusyoung/SpaCE-10")
+
+ # To explore the dataset splits:
+ print(dataset)
+
+ # Example of accessing a split (assuming a 'train' split exists):
+ # train_split = dataset["train"]
+ # print(train_split[0])
+ ```
+
+ ---
+ # Citation
+ If you use this dataset, please cite the original paper:
+
+ ```bibtex
+ @article{gong2025space10,
+   title={SpaCE-10: A Comprehensive Benchmark for Multimodal Large Language Models in Compositional Spatial Intelligence},
+   author={Ziyang Gong and Wenhao Li and Oliver Ma and Songyuan Li and Jiayi Ji and Xue Yang and Gen Luo and Junchi Yan and Rongrong Ji},
+   journal={arXiv preprint arXiv:2506.07966},
+   year={2025}
+ }
+ ```