---
license: cc-by-nc-4.0
datasets:
  - meituan/ViC-Bench
language:
  - en
configs:
  - config_name: default
    data_files:
      - split: counting
        path: json/vicbench__counting_stage1.json
      - split: maze
        path: json/vicbench__maze_stage1.json
      - split: puzzle
        path: json/vicbench__puzzle_stage1.json
      - split: embodied
        path: json/vicbench__embodied_stage1.json
---

# ViC-Bench


## About ViC-Bench

Visual-Interleaved Chain-of-Thought (VI-CoT) enables MLLMs to continually update their understanding and decisions based on step-wise intermediate visual states (IVS), much as a human would, and has demonstrated impressive success across various tasks, spurring advances in related benchmarks. Despite this promising progress, current benchmarks provide models with relatively fixed IVS rather than free-style IVS, which may forcibly distort the original thinking trajectories and thus fail to evaluate their intrinsic reasoning capabilities. More importantly, existing benchmarks neglect to systematically explore the impact that IVS imparts on untamed reasoning performance. To tackle these gaps, we introduce a specialized benchmark termed ViC-Bench, consisting of four representative tasks: maze navigation, jigsaw puzzle, embodied long-horizon planning, and complex counting, where each task has a dedicated free-style IVS generation pipeline supporting function calls. To systematically examine VI-CoT capability, we propose a thorough evaluation suite incorporating a progressive three-stage strategy with targeted new metrics. Besides, we establish an Incremental Prompting Information Injection (IPII) strategy to ablatively explore the prompting factors for VI-CoT. We extensively evaluate 18 advanced MLLMs, revealing key insights into their VI-CoT capability.

## Data Construction

To evaluate the development of recent VI-CoT methods, various benchmarks have emerged. Despite promising advancements, few of them provide free-style IVS representations to MLLMs, as illustrated in Tab. 1. CoMT primarily provides fixed IVS, which might forcibly distort the original planning trajectories, while MageBench offers dynamic IVS but imposes the attribute constraints of action-observation memory. More importantly, existing benchmarks neglect to systematically assess the impact factors (i.e., Positive, Negative, or Null) that IVS would impart to untamed reasoning performance in MLLMs.


We adopt structured processing workflows that integrate images and intermediate visual states to support model decision-making across four tasks.

The diagram illustrates the data processing pipeline for each of the four tasks, as follows:

  1. Maze Navigation: Screens mazes that meet the criteria through preprocessing, selects from an image pool, marks target areas, and processes through three stages. Intermediate visual states are provided during each processing stage to assist in identifying the correct path.

  2. Jigsaw Puzzle: Chooses suitable puzzle pieces from an image pool, conducts preprocessing, and marks puzzle target areas for processing within stages 2 and 3. Each processing stage provides function calls and intermediate visual states to guide task completion.

  3. Embodied Long-Horizon Planning: Carries out preprocessing to ensure data quality, followed by manual checks and restructuring operations in stage 1 for data preparation. Models plan step-by-step towards provided stage goals throughout the processing stages.

  4. Complex Counting: Utilizes image pool selection and complex counting preprocessing to set data. Tasks are processed through three stages, with intermediate visual states provided at each stage to assist the model in accurately counting the number of human heads in each area.


## Evaluation

Tab. 2 shows that most MLLMs exhibit competent performance in Stage 1. Performance drops significantly in Stage 2, indicating that current MLLMs have limitations in open-ended spatial reasoning and perception. In Stage 3, with the support of free-style IVS, all models consistently achieve gains in global-level ACC and fine-grained R_o, leading to impressive ThinkGain, which indicates the effectiveness of free-style IVS in tackling deficiencies of spatial-aware cognition.


## Data Samples for Three Stages

### Stage 1

```json
{
    "instanceId": 142353922,
    "prompt": "<image_1>You are a complex counting expert. The given input image exist numerous human heads and are divided into four areas named 1, 2, 3, 4 by irregular lines. In this task, you need to correctly count the number of human heads in each area sequentially from 1 to 4 and sum them up to determine the total number of heads in the given input image. Please select the most appropriate option you think from the provided four options. \nA. 44 \nB. 39 \nC. 34 \nD. 29",
    "target": "B",
    "images": {
        "<image_1>": "ViC-Bench/images/counting/2170.png"
    },
    "extra_data": {
        "options": [
            44,
            39,
            34,
            29
        ],
        "split": "(1, 9), (2, 11), (3, 11), (4, 8)"
    }
}
```

### Stage 2

```json
{
    "instanceId": 142354430,
    "prompt": "<image_1>You are a complex counting expert. The given input image exist numerous human heads and are divided into four areas named 1, 2, 3, 4 by irregular lines. In this task, you need to correctly count the number of human heads in each area. The final answer format should be <Begin>(1, x), (2, x), (3, x), (4, x)</End>. For example, <Begin>(1, 10), (2, 14), (3, 21), (4, 23)</End>.",
    "target": "(1, 8), (2, 9), (3, 12), (4, 11)",
    "images": {
        "<image_1>": "ViC-Bench/images/counting/2882.png"
    },
    "extra_data": {
        "total": 40
    }
}
```
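For metric computation, answers in this format can be recovered with a small parsing helper. The sketch below is our own illustration (the function name and structure are not part of the official evaluation suite); it extracts the `(area, count)` pairs from a Stage 2 style reply:

```python
import re

def parse_area_counts(response):
    """Extract {area: count} from a '<Begin>(1, x), (2, x), ...</End>' reply."""
    match = re.search(r"<Begin>(.*?)</End>", response, re.DOTALL)
    if match is None:
        return None  # malformed reply: no answer tags found
    pairs = re.findall(r"\((\d+)\s*,\s*(\d+)\)", match.group(1))
    return {int(area): int(count) for area, count in pairs}

counts = parse_area_counts("<Begin>(1, 8), (2, 9), (3, 12), (4, 11)</End>")
# The per-area counts can then be checked against extra_data["total"].
```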

### Stage 3

```json
{
    "instanceId": 142354469,
    "prompt": "<image_1>You are a complex counting expert. The given input image exist numerous human heads and are divided into four areas named {1, 2, 3, 4} by irregular lines. In this task, you need to correctly count the number of human heads in each area. Before making decision for each area, you can think, plan, and even reflect step by step, and then output your final judgement. The output decision format at each step should be <Begin> (x, y),</End>, where x denotes the area name (1, 2, 3, or 4) and y refers to head number. In addition, to assist you in making the final correct judgement, we will provide the intermediate visual state image after you make each decision. In the provided intermediate visual state image, the faces within specific areas are correctly removed by bounding box masks, which can help you verify the correctness of your previous judgment as well as offer a foundation for executing subsequent judgments. Note that you must make the final judgment only after we input at least one intermedicate visual state image. The final output format should be <Begin> (1, x), (2, x), (3, x), (4, x) </End>. For example, <Begin> (1, 10), (2, 14), (3, 21), (4, 23)  </End>.",
    "target": "(1, 7), (2, 6), (3, 9), (4, 6)",
    "images": {
        "<image_1>": "ViC-Bench/images/counting/1631.png"
    },
    "extra_data": {
        "step_images": [
            "ViC-Bench/images/counting/1631-mask-1.png",
            "ViC-Bench/images/counting/1631-mask-2.png",
            "ViC-Bench/images/counting/1631-mask-3.png",
            "ViC-Bench/images/counting/1631-mask-4.png"
        ],
        "total": 28
    }
}
```
- instanceId: A unique identifier for this particular task instance.
- prompt: The input prompt for the model, with `<image_1>` serving as a placeholder for the image.
- target: The correct answer or expected result.
- images: A mapping from image placeholders to the image files to be analyzed.
- extra_data: Supplementary data related to the task that can be used for metric calculations.
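As a sketch of how these fields fit together, the snippet below rebuilds the Stage 1 sample above as a JSON string and reads its fields with the standard library (the record is copied from the sample; only the prompt text is abbreviated here for brevity):

```python
import json

# A Stage 1 counting record with the fields described above.
record_json = """
{
    "instanceId": 142353922,
    "prompt": "<image_1>You are a complex counting expert. ...",
    "target": "B",
    "images": {"<image_1>": "ViC-Bench/images/counting/2170.png"},
    "extra_data": {"options": [44, 39, 34, 29],
                   "split": "(1, 9), (2, 11), (3, 11), (4, 8)"}
}
"""
record = json.loads(record_json)
answer = record["target"]                   # the correct option letter
image_path = record["images"]["<image_1>"]  # resolve against the dataset root
options = record["extra_data"]["options"]   # the four answer choices
```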

## Incremental Prompting Information Injection (IPII)

```python
SYS_PROMPTs = {
    "level1":"You are a maze navigation expert. "\
            "I will provide you with a 4 x 4 maze diagram, where the red lines represent maze boundaries or walls, indicating impassable areas, while the dark grey lines represent passable areas. "\
            "In this maze, you can only move once at each step, and you can only go left, right, up, or down. "\
            "Additionally, the diagram includes a starting point 'S' and an ending point 'E'. "\
            "In this task, you should carry out your own navigation planning and provide me with a final sequence of moves that can successfully reach the endpoint 'E' from the starting point 'S'. "\
            "Moreover, to assist you in making better judgments, I will provide you with the intermediate maze state diagram obtained after each move is executed. "\
            "For each step, please reply with only one specific move using the format <Begin>Go XX</End>, where XX can only be selected from Left, Right, Up, Down.",
    "level2":"You are a maze navigation expert. "\
            "I will provide you with a 4 x 4 maze diagram, where the red lines represent maze boundaries or walls, indicating impassable areas, while the dark grey lines represent passable areas. "\
            "In this maze, you can only move once at each step, and you can only go left, right, up, or down. "\
            "Additionally, the diagram includes a starting point 'S' and an ending point 'E'. "\
            "In this task, you should carry out your own navigation planning and provide me with a final sequence of moves that can successfully reach the endpoint 'E' from the starting point 'S'. "\
            "Please make sure that after executing the move at each step, you should envision your current position in the maze and update your internal intermediate visual state, rather than remaining in the initial input visual state. "\
            "Moreover, to assist you in making better judgments, I will provide you with the intermediate maze state diagram obtained after each move is executed. "\
            "For each step, please reply with only one specific move using the format <Begin>Go XX</End>, where XX can only be selected from Left, Right, Up, Down.",
    "level3":"You are a maze navigation expert. "\
            "I will provide you with a 4 x 4 maze diagram, where the red lines represent maze boundaries or walls, indicating impassable areas, while the dark grey lines represent passable areas. "\
            "In this maze, you can only move once at each step, and you can only go left, right, up, or down. "\
            "Additionally, the diagram includes a starting point 'S' and an ending point 'E'. "\
            "The coordinates of 'S' and 'E' are {origin} and {target}, where the first value represents the row index (0-3) and the second value represents the column index (0-3)."\
            "In this task, you should carry out your own navigation planning and provide me with a final sequence of moves that can successfully reach the endpoint 'E' from the starting point 'S'. "\
            "Please make sure that after executing the move at each step, you should envision your current position in the maze and update your internal intermediate visual state, rather than remaining in the initial input visual state. "\
            "Moreover, to assist you in making better judgments, I will provide you with the intermediate maze state diagram obtained after each move is executed. "\
            "For each step, please reply with only one specific move using the format <Begin>Go XX</End>, where XX can only be selected from Left, Right, Up, Down."
}
```
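The `level3` prompt is the only one with placeholders. Below is a minimal sketch, with helper names of our own choosing rather than from the benchmark code, of filling those placeholders and parsing a model's per-step move reply:

```python
import re

# Stand-in for SYS_PROMPTs["level3"]; only the placeholder-bearing sentence
# is reproduced here.
LEVEL3_COORDS = ("The coordinates of 'S' and 'E' are {origin} and {target}, "
                 "where the first value represents the row index (0-3) and "
                 "the second value represents the column index (0-3).")

prompt_fragment = LEVEL3_COORDS.format(origin="(0, 0)", target="(3, 3)")

def parse_move(reply):
    """Extract the move direction from a '<Begin>Go XX</End>' reply."""
    m = re.search(r"<Begin>\s*Go\s+(Left|Right|Up|Down)\s*</End>", reply)
    return m.group(1) if m else None
```

After each parsed move, the corresponding intermediate maze state diagram is fed back to the model as the next turn's visual input.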

## Citation

```bibtex
@misc{wu2025vicbenchbenchmarkingvisualinterleavedchainofthought,
      title={ViC-Bench: Benchmarking Visual-Interleaved Chain-of-Thought Capability in MLLMs with Free-Style Intermediate State Representations}, 
      author={Xuecheng Wu and Jiaxing Liu and Danlei Huang and Xiaoyu Li and Yifan Wang and Chen Chen and Liya Ma and Xuezhi Cao and Junxiao Xue},
      year={2025},
      eprint={2505.14404},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2505.14404}, 
}
```