---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- en
tags:
- Image
- Text
- Reasoning
size_categories:
- 1K<n<10K
---
# Infinite Bench (Inf-Bench)

This repository contains Infinite Bench (Inf-Bench), a novel evaluation framework for assessing the performance of Vision-Language Models (VLMs) on spatial deformation reasoning tasks.

## Files

The dataset is available as a Parquet file on Hugging Face, ready for processing with HF Datasets. It can be loaded with the following code:

```python
from datasets import load_dataset

inf_bench = load_dataset("Chrishuanhuan/Inf-Bench")
```

## Dataset Description

Inf-Bench quantitatively evaluates the spatial deformation reasoning capabilities of Vision-Language Models (VLMs). Unlike existing spatial reasoning benchmarks, which focus on fixed-difficulty tasks such as "which is higher" or "find the shortest path", Inf-Bench introduces questions that require forward and reverse spatial deformation reasoning across 2D to 3D contexts.

Our benchmark is designed for spatial deformation reasoning from 2D to 3D. Leveraging our data engine, we can generate an unlimited number of evaluation problem pairs with arbitrarily many deformation steps and without any data leakage. This ensures the benchmark remains challenging as models improve.

The dataset contains the following fields:

| Field Name    | Description                                       |
|---------------|---------------------------------------------------|
| id            | Global index of the entry in the dataset          |
| steps_number  | Number of deformation steps in the problem        |
| system_prompt | System prompt providing instructions for the task |
| user_prompt   | Task input, including text and images             |
| ground_truth  | Ground-truth answer for the question              |
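
For example, a single entry can be inspected as follows. This is a minimal sketch: the `train` split name is an assumption, so check the loaded `DatasetDict` for the splits the dataset actually exposes.

```python
from datasets import load_dataset

inf_bench = load_dataset("Chrishuanhuan/Inf-Bench")

# "train" is an assumed split name; print(inf_bench) to see the
# splits this dataset actually provides.
example = inf_bench["train"][0]

print(example["id"])            # global index of the entry
print(example["steps_number"])  # number of deformation steps
print(example["ground_truth"])  # the expected answer
```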

### Spatial Dimensions

Inf-Bench includes three levels of spatial reasoning tasks:

1. **2D Tasks**: Inspired by single-plane deformation in Shapez, these tasks evaluate a model's ability to recognize objects and perform shape transformations in a two-dimensional plane. Shapes are single-layer figures divided into four quadrants, each containing one of four predefined patterns or an "empty" state. Operations include cutting, rotating, and coloring (a toy encoding of these states is sketched after this list).

2. **2.5D Tasks**: Extending the 2D tasks, this level adds a vertical stacking dimension, creating a multi-layer deformation challenge. Each planar figure is expanded into a stacked structure with up to four layers. Models must perform intra-layer transformations and reason about alignment across layers.

3. **3D Tasks**: Based on the Rubik's Cube, these tasks test the ability to manipulate and reason spatially in three dimensions. The cube has 27 units, forming 54 visible faces that change position and orientation through rotations.
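
As an illustration of how these state spaces are structured, here is a hypothetical toy encoding of the 2D and 2.5D states. Pattern names, colors, and the encoding itself are placeholders, not the benchmark's actual data format:

```python
# Hypothetical toy encoding of Inf-Bench states; pattern and color
# names are placeholders, not the benchmark's actual format.

# 2D: a single layer of four quadrants, each a (pattern, color) pair
# or None for the "empty" state.
shape_2d = (("circle", "red"), None, ("star", "blue"), ("square", "red"))

# 2.5D: up to four such layers stacked vertically; deformations act
# within a layer, and questions may require reasoning about how
# quadrants align across layers.
shape_25d = [
    shape_2d,                                 # bottom layer
    (None, ("circle", "green"), None, None),  # second layer
]

def rotate_layer(layer):
    """Rotate one layer 90 degrees clockwise: quadrants shift by one."""
    return (layer[3], layer[0], layer[1], layer[2])

# An intra-layer transformation applied to the stacked structure.
shape_25d[1] = rotate_layer(shape_25d[1])
```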

### Task Types

For each spatial dimension, Inf-Bench includes two types of reasoning tasks:

1. **Forward Reasoning**: Given the operations, find the final state.

2. **Reverse Reasoning**: Given the final state, determine the operations needed to achieve it.
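
To make the two task types concrete, here is a hypothetical sketch on a toy 2D state. The state encoding, operation names, and brute-force search are all illustrative: the benchmark poses these problems as visual questions, not as programs.

```python
from itertools import product

# Toy state: a 4-tuple of quadrant contents. Operations are pure functions.
def rotate(shape):     # shift quadrants one position clockwise
    return (shape[3], shape[0], shape[1], shape[2])

def clear_top(shape):  # a toy "cut": empty the first quadrant
    return (None,) + shape[1:]

OPS = {"rotate": rotate, "clear_top": clear_top}

def forward(shape, op_names):
    """Forward reasoning: apply the given operations, return the final state."""
    for name in op_names:
        shape = OPS[name](shape)
    return shape

def reverse(start, target, max_steps=3):
    """Reverse reasoning: search for an operation sequence that
    transforms start into target."""
    for n in range(max_steps + 1):
        for seq in product(OPS, repeat=n):
            if forward(start, seq) == target:
                return list(seq)
    return None

start = ("A", "B", "C", "D")
target = forward(start, ["rotate", "clear_top"])
print(reverse(start, target))  # e.g. ['rotate', 'clear_top']
```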

## Ladder Competition Framework

Inf-Bench adopts a ladder competition format to evaluate model performance systematically. The key features are:

- Models begin at the lowest difficulty level (R = 1), with a single deformation step
- Each level consists of five questions
- If a model answers at least 3 of the 5 questions correctly, it advances to the next difficulty level
- If it answers fewer than 3 correctly, it remains at the same level and its failure count increases
- If a model fails twice at the same level, the competition ends

The final value of R (reasoning depth) represents the model's spatial deformation reasoning capability and serves as the Inf-Bench metric.
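
A minimal sketch of this ladder protocol follows, assuming a hypothetical `answer_correctly(level)` callable that stands in for the model answering one question at the given difficulty level (this callable is not part of the released dataset):

```python
import random

def run_ladder(answer_correctly, max_level=100):
    """Simulate the Inf-Bench ladder protocol (one reading of the rules)."""
    level = 1      # R = 1: problems with a single deformation step
    failures = 0   # failures accumulated at the current level
    while level <= max_level:
        # Each level consists of five questions.
        correct = sum(answer_correctly(level) for _ in range(5))
        if correct >= 3:
            level += 1     # advance to the next difficulty level
            failures = 0
        else:
            failures += 1  # stay at the same level
            if failures == 2:
                break      # a second failure at this level ends the run
    return level           # the final R, Inf-Bench's metric

# Toy stand-in model whose accuracy decays with depth (illustrative only).
print(run_ladder(lambda level: random.random() < 1.0 / level))
```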

## Evaluation

Our benchmark evaluation reveals that almost no current model demonstrates true spatial deformation reasoning ability. Even after targeted classical training and mainstream reasoning-enhancement methods are applied, models remain unable to perform effective 3D spatial deformation reasoning.

We provide evaluation code in our GitHub repository, including the implementation of the metrics used in our framework.