---
license: apache-2.0
task_categories:
- visual-question-answering
language:
- en
tags:
- Image
- Text
- Reasoning
size_categories:
- 1K<n<10K
---
# Infinite Bench (Inf-Bench)

This repository contains Infinite Bench (Inf-Bench), a novel evaluation framework for assessing the performance of Vision-Language Models (VLMs) on spatial deformation reasoning tasks.

## Files

The dataset is available as a Parquet file on Hugging Face and is ready to use with the `datasets` library:

```python
from datasets import load_dataset

inf_bench = load_dataset("Chrishuanhuan/Inf-Bench")
```

## Dataset Description

Inf-Bench quantitatively evaluates the spatial deformation reasoning capabilities of Vision-Language Models (VLMs). Unlike existing spatial reasoning benchmarks, which focus on fixed-difficulty tasks such as "which is higher" or "find the shortest path", Inf-Bench poses questions that require forward and reverse spatial deformation reasoning across 2D, 2.5D, and 3D contexts.

Our benchmark targets spatial deformation reasoning from 2D to 3D. Leveraging our data engine, we can generate an unlimited number of evaluation problem pairs with arbitrarily many deformation steps and without any data leakage, so the benchmark remains challenging as models improve.

Each entry in the dataset contains the following fields:

| Field Name    | Description                                       |
|---------------|---------------------------------------------------|
| id            | Global index of the entry in the dataset          |
| steps_number  | Number of deformation steps in the problem        |
| system_prompt | System prompt providing instructions for the task |
| user_prompt   | Task input, including text and images             |
| ground_truth  | Ground-truth answer for the question              |
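
A minimal sketch of inspecting one entry; note that the `"train"` split name here is an assumption, so check `print(inf_bench)` for the actual split names:

```python
from datasets import load_dataset

inf_bench = load_dataset("Chrishuanhuan/Inf-Bench")

# NOTE: the split name "train" is an assumption; inspect `inf_bench` for the real splits.
example = inf_bench["train"][0]

print(example["id"])             # global index of the entry
print(example["steps_number"])   # number of deformation steps in the problem
print(example["system_prompt"])  # task instructions
print(example["ground_truth"])   # reference answer
# example["user_prompt"] holds the task input (text and images)
```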

### Spatial Dimensions

Inf-Bench includes three levels of spatial reasoning tasks:

1. **2D Tasks**: Inspired by single-plane deformation in the game Shapez, these tasks evaluate a model's ability to recognize objects and perform shape transformations in a two-dimensional plane. Each shape is a single-layer figure divided into four quadrants, each containing one of four predefined patterns or an "empty" state. Operations include cutting, rotating, and coloring (see the sketch after this list).

2. **2.5D Tasks**: Extending the 2D tasks, this level adds a vertical stacking dimension, creating a multi-layer deformation challenge. Each planar figure is expanded into a stacked structure with up to four layers, and models must perform intra-layer transformations while reasoning about alignment across layers.

3. **3D Tasks**: Based on the Rubik's Cube, these tasks test the ability to manipulate and reason spatially in three dimensions. The cube consists of 27 unit cubes, forming 54 visible faces whose positions and orientations change under rotations.
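
As a concrete illustration of the 2D state space, here is a minimal sketch in which a shape is four quadrants, each holding one of four patterns or `None` for empty. The pattern names, quadrant ordering, and operation semantics are illustrative assumptions, not the dataset's actual encoding, and coloring is omitted for brevity:

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical pattern vocabulary; the dataset's actual patterns may differ.
PATTERNS = ["circle", "square", "star", "windmill"]

@dataclass
class Shape2D:
    # Quadrants in clockwise order from the top-right; None means empty.
    quadrants: List[Optional[str]]

    def rotate(self) -> "Shape2D":
        """Rotate the shape 90 degrees clockwise."""
        q = self.quadrants
        return Shape2D([q[-1]] + q[:-1])

    def cut_right_half(self) -> "Shape2D":
        """Cut away the right half (top-right and bottom-right quadrants)."""
        q = self.quadrants
        return Shape2D([None, None, q[2], q[3]])

s = Shape2D(["circle", "square", None, "star"])
print(s.rotate().quadrants)          # ['star', 'circle', 'square', None]
print(s.cut_right_half().quadrants)  # [None, None, None, 'star']
```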

### Task Types

For each spatial dimension, Inf-Bench includes two types of reasoning tasks (both illustrated in the sketch after this list):

1. **Forward Reasoning**: Given the operations, find the final state.

2. **Reverse Reasoning**: Given the final state, determine the operations needed to reach it.
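
Continuing the hypothetical `Shape2D` sketch above, the two task types differ only in what is given and what is asked:

```python
import itertools

# Assumes the Shape2D class from the sketch in the previous section.

# Forward reasoning: apply a known operation sequence and predict the final state.
ops = [Shape2D.rotate, Shape2D.cut_right_half]
state = Shape2D(["circle", "square", None, "star"])
for op in ops:
    state = op(state)
print(state.quadrants)  # [None, None, 'square', None]

# Reverse reasoning: given the final state, search for an operation sequence
# that produces it from the initial state (brute force over 2-step sequences).
target = [None, None, "square", None]
for seq in itertools.product([Shape2D.rotate, Shape2D.cut_right_half], repeat=2):
    s = Shape2D(["circle", "square", None, "star"])
    for op in seq:
        s = op(s)
    if s.quadrants == target:
        print([op.__name__ for op in seq])  # ['rotate', 'cut_right_half']
        break
```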

## Ladder Competition Framework

Inf-Bench adopts a ladder competition format to evaluate models' performance systematically. The key rules are:

- Models begin at the lowest difficulty level (R = 1), where problems involve a single deformation step
- Each level consists of five questions
- If a model answers at least 3 of the 5 questions correctly, it advances to the next difficulty level
- If it answers fewer than 3 correctly, it remains at the same level and its failure count increases
- If a model fails twice at the same level, the competition ends

The final value of R (reasoning depth) represents the model's spatial deformation reasoning capability and serves as the Inf-Bench metric.
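
A minimal sketch of this ladder loop, assuming `ask_model` is a hypothetical callback that poses one R-step question to the model under evaluation and reports whether the answer is correct; it also assumes the failure count resets on advancing, which follows from failures being counted "at the same level":

```python
from typing import Callable

def run_ladder(ask_model: Callable[[int], bool],
               questions_per_level: int = 5,
               pass_threshold: int = 3,
               max_failures: int = 2) -> int:
    """Run the ladder competition and return the final reasoning depth R."""
    r = 1          # lowest difficulty: one deformation step
    failures = 0   # failures accumulated at the current level
    while failures < max_failures:
        # Pose five R-step questions at the current level.
        correct = sum(ask_model(r) for _ in range(questions_per_level))
        if correct >= pass_threshold:
            r += 1        # at least 3 correct: advance to the next level
            failures = 0  # failure count is per level, so it resets
        else:
            failures += 1  # fewer than 3 correct: stay and record a failure
    return r  # the level at which the competition ended
```

Under this protocol, a stronger model climbs further before accumulating two failures at a single level, yielding a larger final R.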

## Evaluation

Our benchmark evaluation reveals that almost no current model demonstrates true spatial deformation reasoning ability. Even after applying targeted conventional training and mainstream reasoning-enhancement methods, models remain unable to perform effective 3D spatial deformation reasoning.

We provide evaluation code, including the implementation of the metrics used in our framework, in our GitHub repository.