Add comprehensive dataset card for MISS-QA

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +90 -0
README.md ADDED
@@ -0,0 +1,90 @@
+ ---
+ task_categories:
+ - image-text-to-text
+ language:
+ - en
+ tags:
+ - question-answering
+ - multimodal
+ - scientific-literature
+ - schematic-diagrams
+ - benchmark
+ ---
+
+ # MISS-QA: A Multimodal Scientific Information-Seeking QA Benchmark
+
+ MISS-QA (Multimodal Information-Seeking over Scientific papers – Question Answering) is the first benchmark specifically designed to evaluate the ability of multimodal foundation models to interpret schematic diagrams and answer information-seeking questions within scientific literature.
+
+ This dataset is associated with the paper: [Can Multimodal Foundation Models Understand Schematic Diagrams? An Empirical Study on Information-Seeking QA over Scientific Papers](https://huggingface.co/papers/2507.10787).
+
+ Code: [https://github.com/QDRhhhh/MISSQA](https://github.com/QDRhhhh/MISSQA)
+
+ ## Highlights
+
+ - **1,500 QA pairs** annotated by **expert researchers**.
+ - Covers **465 AI-related papers** from arXiv.
+ - Focuses on **schematic diagrams**, not just charts or tables.
+ - Evaluates **18 frontier vision-language models**, including o4-mini, Gemini-2.5-Flash, and Qwen2.5-VL.
+ - Provides an automatic evaluation protocol trained on **human-scored data**.
+
+ ## Benchmark Structure
+
+ Each example in MISS-QA includes the following components (a minimal loading sketch follows the list):
+
+ - A **schematic diagram** from a scientific paper.
+ - A **highlighted visual element** (bounding box).
+ - A **free-form information-seeking question**.
+ - The corresponding **scientific context**.
+ - A human-annotated **answer** (or a label marking the question as unanswerable).
+
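+ Assuming the dataset can be loaded with the Hugging Face `datasets` library, the sketch below shows how one might inspect a single record; the repository ID is a placeholder and the field names are assumptions, so check the dataset viewer for the actual splits and schema.
+
+ ```python
+ # Minimal sketch: load MISS-QA with the `datasets` library and inspect one record.
+ # NOTE: the repository ID below is a placeholder and the field names are assumptions;
+ # consult the dataset viewer for the actual split names and schema.
+ from datasets import load_dataset
+
+ dataset_dict = load_dataset("<this-dataset-repo-id>")  # placeholder repo ID
+ split = next(iter(dataset_dict.values()))              # take the first available split
+
+ example = split[0]
+ print(sorted(example.keys()))  # expect fields for the diagram, question, context, and answer
+ ```
+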
+ ### Information-Seeking Scenarios
+
+ - **Design Rationale**
+ - **Implementation Details**
+ - **Literature Background**
+ - **Experimental Results**
+ - **Other** (e.g., limitations, ethics)
+
+ ## Model Evaluation
+
+ MISS-QA is used to benchmark proprietary and open-source **multimodal foundation models**. Performance is automatically scored using a custom evaluation protocol aligned with human judgment.
+
+ ## How to Use
+
+ ### Step 0: Installation
+
+ ```bash
+ git clone https://github.com/QDRhhhh/MISSQA.git
+ cd MISSQA
+ conda create --name missqa python=3.10
+ conda activate missqa
+ pip install -r requirements.txt
+ ```
+
+ ### Step 1: Run Model Inference
+
+ Use the provided bash script to run inference with your multimodal model:
+
+ ```bash
+ bash scripts/vllm_large.sh
+ ```
+
+ This will generate model responses and save them to:
+
+ ```
+ ./outputs/
+ ```
+
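+ The exact layout of the generated files depends on the script and model. As a rough sketch, one could peek at the saved responses as shown below; it assumes the outputs are JSON files, which may not match the repository's actual format.
+
+ ```python
+ # Rough sketch: list generated response files and peek at one.
+ # ASSUMPTION: responses are stored as JSON files under ./outputs/;
+ # adjust to the actual format produced by scripts/vllm_large.sh.
+ import json
+ from pathlib import Path
+
+ output_files = sorted(Path("./outputs").glob("**/*.json"))
+ print(f"Found {len(output_files)} JSON file(s) under ./outputs/")
+
+ if output_files:
+     data = json.loads(output_files[0].read_text(encoding="utf-8"))
+     # Peek at the top-level structure before assuming any particular schema.
+     print(type(data).__name__, list(data)[:3] if isinstance(data, (dict, list)) else data)
+ ```
+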
+ ### Step 2: Evaluate Model Accuracy
+
+ Once inference is complete, run the accuracy evaluation script:
+
+ ```bash
+ python acc_evaluation.py
+ ```
+
+ The processed and scored outputs will be saved to:
+
+ ```
+ ./processed_outputs/
+ ```
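+
+ For a quick sanity check after scoring, a small aggregation sketch like the one below can be handy; it assumes each processed file is a JSON list of records carrying a numeric `score` field, which is a hypothetical schema, so adapt it to whatever `acc_evaluation.py` actually writes.
+
+ ```python
+ # Rough sketch: average per-example scores across processed output files.
+ # ASSUMPTION: each file in ./processed_outputs/ is a JSON list of records with a
+ # numeric "score" field. This schema is hypothetical; adapt to the real output format.
+ import json
+ from pathlib import Path
+
+ scores = []
+ for path in sorted(Path("./processed_outputs").glob("*.json")):
+     records = json.loads(path.read_text(encoding="utf-8"))
+     scores.extend(r["score"] for r in records if isinstance(r, dict) and "score" in r)
+
+ if scores:
+     print(f"{len(scores)} scored examples, mean score = {sum(scores) / len(scores):.3f}")
+ else:
+     print("No scores found; check the output schema produced by acc_evaluation.py.")
+ ```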