## Dataset Description
SCORE-Bench is a curated collection of 224 diverse, real-world documents manually annotated by experts. It is designed to benchmark document parsing systems against true production-grade challenges. Unlike traditional academic datasets, which are often composed of clean, digital-native PDFs, this benchmark specifically targets the complexity found in actual enterprise workflows.
Note on replication: This dataset is a standalone benchmark released after the publication of the original SCORE framework paper. It is not the exact dataset used in the paper's experiments. Researchers should view this as a new, more challenging evaluation set using the same methodology.
The dataset allows researchers and developers to move beyond "clean" evaluation to test how systems handle the irregularities of the real world. It includes:
* **Complex Layouts:** Financial reports with deeply nested tables, technical manuals with multi-column dense text, and articles where whitespace (rather than lines) defines structure.
* **Visual Noise & Variety:** Scanned forms with skew, photocopied documents with artifacts, and forms containing mixed printed and handwritten text.
* **Semantic Ambiguity:** Documents selected to break brittle systems, requiring parsers to distinguish between varying structural interpretations (e.g., identifying a two-column article versus a list of key-value pairs).
Every document in SCORE-Bench has been manually annotated by domain experts, not algorithmically generated from metadata.
## Dataset Coverage
**Distribution of document layout characteristics**
Each document typically exhibits more than one of these characteristics, so the counts below sum to more than the 224 documents in the dataset:
| Document characteristic | Count |
| :---- |:------|
| Scanned documents | 54 |
| Documents with noise and visual degradation | 39 |
| Multi-column layout | 98 |
| Flowing text blocks | 143 |
| Complex layout | 127 |
| Simple tables | 40 |
| Complex tables with merged cells | 48 |
| Embedded images or plots | 81 |
| Forms | 54 |
| Handwriting mixed with printed text | 33 |
| Layout with complex visual branding | 114 |
**Document content types**
The dataset captures the heterogeneity of real-world unstructured data not only across verticals, but also across document types. It includes operational and regulatory content such as government reports, financial statements, legal agreements, insurance forms, and technical manuals, alongside lower-frequency but operationally critical artifacts such as patent documents, research papers, curricula vitae, marketing collateral, schematics, and more.
This breadth ensures that evaluation can be done on content representative of real-world enterprise workflows: complex unstructured documents that span both common and occasional niche types. By incorporating this long tail of document types, the dataset reflects the diversity and functional richness encountered in actual organizational settings, providing a realistic benchmark for document parsing.
## Annotation Format
The dataset uses the following types of ground truth data:
1. **Text Content Ground Truth**: Content is structured with markers for different document elements, enabling evaluation against a clean concatenated text representation (CCT).
```
--------------------------------------------------- Unstructured Plain Text Format 1
--------------------------------------------------- Unstructured Title Begin
DOCUMENT TITLE
--------------------------------------------------- Unstructured Title End
--------------------------------------------------- Unstructured NarrativeText Begin
Document content...
--------------------------------------------------- Unstructured NarrativeText End
```
2. **Table Ground Truth**: Tables are represented as JSON with cell coordinates and content, serving as the ground truth for our format-agnostic table evaluation. A sketch showing how both ground truth formats might be loaded follows the example below.
```json
[
{
"type": "Table",
"text": [
{
"id": "cell-id",
"x": 0,
"y": 0,
"w": 1,
"h": 1,
"content": "Cell content"
},
...
]
}
]
```
## Intended Usage
This dataset is designed to serve as a standardized benchmark for evaluating modern document parsing systems. Its composition specifically addresses the limitations of traditional metrics when applied to generative models. The intended use cases include:
* **Fair Benchmarking of Generative Systems**: The dataset intentionally contains layouts with multiple valid structural interpretations. The annotations are constructed to allow the SCORE system to evaluate based on semantics, ensuring that Vision Language Models (VLMs) are not penalized for legitimate interpretive flexibility (e.g. distinct but semantically equivalent readings of a complex page).
* **Format-Agnostic Comparison**: The ground truth allows for the comparison of outputs across varying representational formats (e.g., HTML, JSON, flattened text) by validating semantic equivalence rather than rigid string-level or tree-level matching.
* **Granular Error Analysis:** The variety of noise and document types enables the SCORE framework to identify specific system behaviors, such as distinguishing between content hallucinations (spurious tokens) and content omissions (a simplified token-level sketch follows this list).
* **Complex Table Evaluation:** The data includes tables with ambiguous structures, merged cells, and irregular layouts to test extraction capabilities. This supports evaluation that separates content accuracy from index/spatial accuracy.
* **Structural Hierarchy Assessment:** The documents are selected to challenge a system's ability to maintain consistent, semantically coherent hierarchies (e.g., mapping headers and list items correctly) across long or complex pages.
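As a rough illustration of the hallucination/omission diagnostics above, the sketch below scores a prediction in the spirit of `percent_tokens_found` and `percent_tokens_added`. Whitespace tokenization, lowercasing, and multiset overlap are simplifying assumptions; the SCORE framework's actual tokenization and matching logic may differ.
```python
from collections import Counter


def token_diagnostics(ground_truth: str, prediction: str) -> dict:
    """Compare token multisets from the ground truth and a parser output.
    'found' approximates recall of ground truth tokens (1 - omissions);
    'added' approximates the share of predicted tokens with no ground
    truth counterpart (a rough proxy for hallucinated content)."""
    gt = Counter(ground_truth.lower().split())
    pred = Counter(prediction.lower().split())
    overlap = sum((gt & pred).values())
    return {
        "percent_tokens_found": overlap / max(sum(gt.values()), 1),
        "percent_tokens_added": 1 - overlap / max(sum(pred.values()), 1),
    }


# Example: one duplicated token and one spurious token in the prediction.
print(token_diagnostics("Total revenue 2023 was 1.2M",
                        "Total revenue 2023 was was 1.2M USD"))
```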
## Evaluations
Measured on Nov 24, 2025.
**Content Fidelity Metrics**
| | cct | adjusted\_cct | percent\_tokens\_found | percent\_tokens\_added |
| :---- |:----------| :---- |:-----------------------|:-----------------------|
| Snowflake Layout Mode | 0.782 | 0.792 | 0.823 | 0.102 |
| Snowflake OCR Mode | 0.705 | 0.705 | 0.900 | 0.048 |
| Databricks AI Parse Document | 0.795 | 0.809 | 0.840 | 0.053 |
| LlamaParse High Resolution OCR | 0.761 | 0.776 | 0.826 | 0.055 |
| LlamaParse VLM | 0.827 | 0.835 | 0.890 | 0.069 |
| Reducto Agentic | 0.811 | 0.812 | 0.937 | 0.124 |
| Unstructured High-Res Refined with GPT-5-mini | 0.855 | 0.857 | 0.909 | 0.069 |
| Unstructured High-Res Refined with Claude Sonnet 4 | 0.862 | 0.863 | 0.911 | 0.057 |
| Docling Default | 0.702 | 0.716 | 0.720 | 0.135 |
| Unstructured VLM Partitioner GPT-5-mini | 0.885 | 0.883 | 0.924 | 0.036 |
| Unstructured VLM Partitioner Claude Sonnet 4 | 0.857 | 0.864 | 0.914 | 0.043 |
| Unstructured OSS | 0.707 | 0.715 | 0.876 | 0.119 |
| NVIDIA Nemotron-Parse-v1.1 | 0.625 | 0.648 | 0.737 | 0.070 |
| Docling Granite VLM | 0.587 | 0.625 | 0.644 | 0.163 |
**Table Extraction Metrics**
| | detection\_f | cell\_level\_index\_acc | cell\_content\_acc | shifted\_cell\_content\_acc | page\_teds\_corrected | table\_teds | table\_teds\_corrected |
| :---- | :---- |:------------------------|:-------------------| :---- | :---- | :---- | :---- |
| Snowflake Layout Mode | 0.841 | 0.583 | 0.556 | 0.589 | 0.57 | 0.589 | 0.55 |
| Snowflake OCR Mode | 0.545 | N/A | N/A | N/A | N/A | N/A | N/A |
| Databricks AI Parse Document | 0.826 | 0.623 | 0.615 | 0.653 | 0.663 | 0.657 | 0.631 |
| LlamaParse High Resolution OCR | 0.704 | 0.409 | 0.361 | 0.422 | 0.49 | 0.452 | 0.42 |
| LlamaParse VLM | 0.802 | 0.578 | 0.522 | 0.564 | 0.64 | 0.599 | 0.567 |
| Reducto Agentic | 0.854 | 0.706 | 0.708 | 0.742 | 0.772 | 0.775 | 0.75 |
| Unstructured High-Res Refined with GPT-5-mini | 0.85 | 0.774 | 0.76 | 0.782 | 0.778 | 0.796 | 0.776 |
| Unstructured High-Res Refined with Claude Sonnet 4 | 0.855 | 0.776 | 0.773 | 0.813 | 0.782 | 0.803 | 0.779 |
| Docling Default | 0.815 | 0.659 | 0.606 | 0.628 | 0.679 | 0.67 | 0.65 |
| Unstructured VLM Partitioner GPT-5-mini | 0.837 | 0.734 | 0.69 | 0.731 | 0.757 | 0.743 | 0.722 |
| Unstructured VLM Partitioner Claude Sonnet 4 | 0.855 | 0.656 | 0.65 | 0.683 | 0.714 | 0.708 | 0.678 |
| Unstructured OSS | 0.839 | 0.498 | 0.426 | 0.475 | 0.47 | 0.492 | 0.449 |
| NVIDIA Nemotron-Parse-v1.1 | 0.715 | 0.651 | 0.559 | 0.589 | 0.583 | 0.613 | 0.567 |
| Docling Granite VLM | 0.725 | 0.716 | 0.657 | 0.694 | 0.673 | 0.72 | 0.687 |
**Structural Understanding Metrics**
| pipeline | element\_alignment |
|:-------------------------------------------------------------------| ----- |
| Snowflake Layout Mode | 0.608 |
| Snowflake OCR Mode | N/A |
| Databricks AI Parse Document | 0.417 |
| LlamaParse High Resolution OCR | 0.277 |
| LlamaParse VLM | 0.266 |
| Reducto Agentic | 0.595 |
| Unstructured High-Res Refined with GPT-5-mini | 0.58 |
| Unstructured High-Res Refined with Claude Sonnet 4 | 0.58 |
| Docling Default | 0.599 |
| Unstructured OSS | 0.534 |
| NVIDIA Nemotron-Parse-v1.1 | 0.339 |
| Docling Granite VLM | 0.558 |
| Unstructured VLM Partitioner Claude Sonnet 4 | 0.598 |
| Unstructured VLM Partitioner GPT-5-mini | 0.575 |
## Dataset Creation Date
Nov 24, 2025
## SCORE-Bench – Licensing & Attribution
This repository contains:
- Third-party PDF documents used as part of the SCORE-Bench document parsing benchmark
- Unstructured-authored annotations and metadata
The following summarizes the licensing and attribution requirements for both.
---
### License for Unstructured-authored annotations and metadata
Except where otherwise noted, **Unstructured-authored annotation files, labels, and metadata** in this repository are licensed under:
**Creative Commons Attribution 4.0 International (CC BY 4.0)**
<https://creativecommons.org/licenses/by/4.0/>
This license allows reuse, modification, and redistribution, including commercial use, provided appropriate credit is given.
A suggested attribution format is:
> "SCORE-Bench annotations © Unstructured Technologies, licensed under CC BY 4.0."
If you publish work based on this dataset, you may also wish to cite the SCORE framework / SCORE-Bench paper.
> **Important:** The CC BY 4.0 license above applies **only** to Unstructured-created content (annotations, metadata, and documentation). The third-party PDFs listed below remain under their original licenses and all disclaimers and non-endorsement statements contained within the original documents continue to apply.
---
### Third-party PDFs
For each work we list:
- **Files** – filenames used in this dataset
- **Citation** – the original work to credit and copyright notice, if applicable
- **License** – the license under which the work is shared
- **Source** – a DOI or canonical URL
Where works list many authors, we abbreviate the author list with "et al."; see the linked source for the full details.
---
#### 1. FAO, Rikolto and RUAF – *Urban and peri-urban agriculture sourcebook – From production to food systems*
**Files:**
- `cb9722en_p35-36-p001.pdf`
- `cb9722en_p35-36-p002.pdf`
**Citation:**
FAO, Rikolto and RUAF. 2022. *Urban and peri-urban agriculture sourcebook – From production to food systems*. Rome, FAO and Rikolto.
© FAO, 2022
**License:**
Creative Commons Attribution-NonCommercial-ShareAlike 3.0 IGO (CC BY-NC-SA 3.0 IGO) – <https://creativecommons.org/licenses/by-nc-sa/3.0/igo/legalcode>
**Source:**
<https://doi.org/10.4060/cb9722en>
---
#### 2. World Health Organization – *Global strategy on digital health 2020–2025*
**Files:**
- `gs4dhdStrategicObjectives-p008.pdf`
- `gs4dhdStrategicObjectives-p009.pdf`
**Citation:**
World Health Organization. 2021. *Global strategy on digital health 2020–2025*. Geneva: World Health Organization.
© World Health Organization 2021
**License:**
Creative Commons Attribution-NonCommercial-ShareAlike 3.0 IGO (CC BY-NC-SA 3.0 IGO) – <https://creativecommons.org/licenses/by-nc-sa/3.0/igo>
**Source:**
<https://www.who.int/docs/default-source/documents/gs4dhdaa2a9f352b0445bafbc79ca799dce4d.pdf>
---
#### 3. Park et al. – *Korean Power System Challenges and Opportunities, Priorities for Swift and Successful Clean Energy Deployment at Scale*
**Files:**
- `korean_power_system_challenges-p001.pdf`
- `korean_power_system_challenges-p003.pdf`
**Citation:**
Park, W. Y., Khanna, N., Kim, J. H., et al. (2023). *Korean Power System Challenges and Opportunities, Priorities for Swift and Successful Clean Energy Deployment at Scale*.
Copyright Notice: This manuscript has been authored by authors at Lawrence Berkeley National Laboratory under Contract No. DE-AC02-05CH11231 with the U.S. Department of Energy. The U.S. Government retains, and the publisher, by accepting the article for publication, acknowledges, that the U.S. Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for U.S. Government purposes.
**License:**
Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) – <https://creativecommons.org/licenses/by-nc-nd/4.0/>
**Source:**
<https://escholarship.org/uc/item/5vn8p2js>
---
#### 4. Razzhigaev et al. – *Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion*
**Files:**
- `2310.03502text_to_image_synthesis1-7-p005.pdf`
- `2310.03502text_to_image_synthesis1-7-p006.pdf`
**Citation:**
Razzhigaev, A., Shakhmatov, A., Maltseva, A., et al. 2023. *Kandinsky: an Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion*.
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) – <https://creativecommons.org/licenses/by/4.0/>
**Source:**
<https://doi.org/10.48550/arXiv.2310.03502>
---
#### 5. Katsouris – *Optimal Estimation Methodologies for Panel Data Regression Models*
**Files:**
- `OptimalEstimationMethodologies-for-PanelDataRegressionModels-pg9-12-p002.pdf`
- `OptimalEstimationMethodologies-for-PanelDataRegressionModels-pg9-12-p003.pdf`
**Citation:**
Katsouris, C. 2023. *Optimal Estimation Methodologies for Panel Data Regression Models*.
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) – <https://creativecommons.org/licenses/by/4.0/>
**Source:**
<https://doi.org/10.48550/arXiv.2311.03471>
---
#### 6. Singh et al. – *The Role of Colour in Influencing Consumer Buying Behaviour: An Empirical Study*
**File:**
- `661_Singh_p9-9.pdf`
**Citation:**
Singh, P. K., Kumari, A., Agrawal, S., et al. (2023). *The Role of Colour in Influencing Consumer Buying Behaviour: An Empirical Study*.
© 2023 The Author(s). Published by Vilnius Gediminas Technical University
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) – <https://creativecommons.org/licenses/by/4.0/>
**Source:**
<https://businessmanagementeconomic.org/pdf/2023/661.pdf>
---
#### 7. Degerman – *Brexit anxiety: a case study in the medicalization of dissent*
**Files:**
- `ijerph-19-00825-p008.pdf`
- `ijerph-19-00825-p020.pdf`
**Citation:**
Degerman, D. (2018). *Brexit anxiety: a case study in the medicalization of dissent*. Critical Review of International Social and Political Philosophy, 22(7), 823–840.
© 2018 The Author(s). Published by Informa UK Limited, trading as Taylor & Francis Group.
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) – <http://creativecommons.org/licenses/by/4.0/>
**Source:**
<https://doi.org/10.1080/13698230.2018.1438334>
---
#### 8. Zhang & Ilavsky – *Bridging length scales in hard materials with ultra-small angle X-ray scattering – a critical review*
**Files:**
- `Zhand-Ilavsky-p004.pdf`
- `Zhand-Ilavsky-p012.pdf`
**Citation:**
Zhang, F., & Ilavsky, J. (2024). *Bridging length scales in hard materials with ultra-small angle X-ray scattering – a critical review*. IUCrJ, 11, 675–694.
© International Union of Crystallography
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) – <https://creativecommons.org/licenses/by/4.0/>
**Source:**
<https://doi.org/10.1107/S2052252524006298>
---
#### 9. O'Hara et al. – *Regional-scale patterns of deep seafloor biodiversity for conservation assessment*
**Files:**
- `O27Hara_DeepSeaFloorBio-p001.pdf`
- `O27Hara_DeepSeaFloorBio-p002.pdf`
**Citation:**
O'Hara, T. D., Williams, A., Althaus, F., et al. (2020). *Regional-scale patterns of deep seafloor biodiversity for conservation assessment*. Diversity and Distributions, 26, 479–494.
© 2020 The Authors. Diversity and Distributions published by John Wiley & Sons Ltd.
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) – <https://creativecommons.org/licenses/by/4.0/>
**Source:**
<https://doi.org/10.1111/ddi.13034>
---
#### 10. Raimondi et al. – *Rainwater Harvesting and Treatment: State of the Art and Perspectives*
**File:**
- `water-15-0151828729_p3-3.pdf`
**Citation:**
Raimondi, A., Quinn, R., Abhijith, G. R., et al. (2023). *Rainwater Harvesting and Treatment: State of the Art and Perspectives*. Water, 15(8), 1518.
© 2023 by the authors.
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) – <https://creativecommons.org/licenses/by/4.0/>
**Source:**
<https://doi.org/10.3390/w15081518>
---
#### 11. Hunt et al. – *Artificial Intelligence, Big Data, and mHealth: The Frontiers of the Prevention of Violence Against Children*
**Files:**
- `frai_03_543305_p1-2-p001.pdf`
- `frai_03_543305_p1-2-p002.pdf`
**Citation:**
Hunt, X., Tomlinson, M., Sikander, S., Skeen, S., Marlow, M., du Toit, S., & Eisner, M. (2020). *Artificial Intelligence, Big Data, and mHealth: The Frontiers of the Prevention of Violence Against Children*. Frontiers in Artificial Intelligence, 3, 543305.
Copyright 2020 the authors.
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) – <https://creativecommons.org/licenses/by/4.0/>
**Source:**
<https://doi.org/10.3389/frai.2020.543305>
---
#### 12. World Intellectual Property Organization – *WIPO Financial Report*
**Files:**
- `wipo-2022-financial-report-p24-p30-p001.pdf`
- `wipo-2022-financial-report-p24-p30-p005.pdf`
**Citation:**
World Intellectual Property Organization (WIPO). *WIPO Financial Report*.
© WIPO, 2021
**License:**
Creative Commons Attribution 4.0 International (CC BY 4.0) – <https://creativecommons.org/licenses/by/4.0/>
**Source:**
<https://www.wipo.int/edocs/pubdocs/en/wipo_pub_rn2021_18e.pdf>
---
### Usage reminder
- Unstructured-authored annotations and metadata: **CC BY 4.0**
- Third-party PDFs: **original licenses as listed per file above**
Any reuse of this repository must respect both the Unstructured license and relevant third-party licenses, along with all terms set forth in the original documents, including disclaimers and non-endorsement statements.
## References
**Primary Citation**
* **Title:** SCORE: A Semantic Evaluation Framework for Generative Document Parsing
* **Authors:** Renyu Li, Antonio Jimeno Yepes, Yao You, Kamil Pluciński, Maximilian Operlejn, and Crag Wolfe
* **Organization:** Unstructured Technologies
* **Abstract:** This work introduces the framework used to evaluate this benchmark, detailing the methodology for Adjusted Edit Distance, token-level diagnostics, and format-agnostic table evaluation.
## Evaluation Code
[https://github.com/Unstructured-IO/unstructured-eval-metrics](https://github.com/Unstructured-IO/unstructured-eval-metrics) |