---
language:
- en
license: cc-by-4.0
task_categories:
- text-retrieval
- text-classification
- question-answering
dataset_info:
  features:
  - name: query
    dtype: string
  - name: dqc_id
    dtype: string
  - name: answer
    dtype: string
  - name: id
    dtype: int64
  splits:
  - name: test
    num_bytes: 35959685
    num_examples: 330
  download_size: 2881818
  dataset_size: 35959685
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
papers:
- title: >-
    FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for
    Evaluating LLMs
  authors:
  - Yan Wang
  - Keyi Wang
  - Shanshan Yang
  - Jaisal Patel
  - Jeff Zhao
  - Fengran Mo
  - Xueqing Peng
  - Lingfei Qian
  - Jimin Huang
  - Guojun Xiong
  - Xiao-Yang Liu
  - Jian-Yun Nie
  url: https://arxiv.org/abs/2510.08886
  conference: arXiv preprint, 2025
tags:
- finance
- auditing
- xbrl
- gaap
- llm
- benchmark
- financial-reasoning
---
# 🧾 FinAuditing Benchmark
This dataset is introduced in the paper
[FinAuditing: A Financial Taxonomy-Structured Multi-Document Benchmark for Evaluating LLMs](https://arxiv.org/abs/2510.08886)
by Yan Wang, Keyi Wang, Shanshan Yang, Jaisal Patel, Jeff Zhao, Fengran Mo, Xueqing Peng, Lingfei Qian, Jimin Huang, Guojun Xiong, Xiao-Yang Liu, and Jian-Yun Nie (2025).
The FinAuditing benchmark evaluates Large Language Models (LLMs) on financial auditing tasks, particularly their ability to reason over structured, interdependent, and taxonomy-driven financial documents. It addresses the challenges posed by the complexity of Generally Accepted Accounting Principles (GAAP) and the hierarchical structure of eXtensible Business Reporting Language (XBRL) filings. The benchmark defines three complementary subtasks, each targeting a distinct aspect of structured auditing reasoning:

- **FinSM** for semantic consistency
- **FinRE** for relational consistency
- **FinMR** for numerical consistency

**Code:** https://github.com/The-FinAI/FinAuditing.git
## Overview
The FinAuditing benchmark dataset is built from real US-GAAP-compliant XBRL filings and provides an evaluation set for each subtask. For the evaluation framework, please refer to the FinBen repository.
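Each record follows the schema declared in the `dataset_info` metadata above (`query`, `dqc_id`, `answer`, `id`). A minimal sketch of what a record looks like; the field names come from this card, while all values below are invented for illustration:

```python
# Illustrative record matching the features declared in this card's
# dataset_info block; the values are made up, not drawn from the dataset.
example = {
    "query": "Identify mismatched us-gaap tags for the currency concept.",
    "dqc_id": "DQC_0001",  # hypothetical identifier
    "answer": "us-gaap:CashAndCashEquivalentsAtCarryingValue",
    "id": 0,               # int64 row index
}

# The card declares a single "test" split, loadable with the `datasets`
# library once the Hub dataset ID is known (the ID below is hypothetical):
# from datasets import load_dataset
# ds = load_dataset("TheFinAI/FinAuditing", split="test")

assert set(example) == {"query", "dqc_id", "answer", "id"}
```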
## Datasets Released
The following evaluation datasets are part of the FinAuditing benchmark:
- **FinSM** - Evaluation set for the FinSM subtask within the FinAuditing benchmark. This task follows an information-retrieval paradigm: given a query describing a financial term (representing either currency or concentration of credit risk), an XBRL filing, and a US-GAAP taxonomy, the output is the set of mismatched US-GAAP tags after retrieval.
- **FinRE** - Evaluation set for the FinRE subtask within the FinAuditing benchmark. This is a relation-extraction task: given two specific elements $e_1$ and $e_2$, an XBRL filing, and a US-GAAP taxonomy, the goal is to classify the relation between the elements into one of three relation-error types.
- **FinMR** - Evaluation set for the FinMR subtask within the FinAuditing benchmark. This is a mathematical-reasoning task: given two questions $q_1$ and $q_2$ (where $q_1$ asks for the extraction of a reported value and $q_2$ for the calculation of the corresponding real value), an XBRL filing, and a US-GAAP taxonomy, the task is to extract the reported value for a given instance in the XBRL filing, compute the numeric value for that instance, and use the computed value to verify whether the reported value is correct.
- **FinSM_Sub** - FinSM subset for ICAIF 2025.
- **FinRE_Sub** - FinRE subset for ICAIF 2025.
- **FinMR_Sub** - FinMR subset for ICAIF 2025.
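As an illustration of the FinMR-style numerical check described above, the sketch below compares a reported value against a value recomputed from its component line items. The tolerance, the helper name, and all figures are assumptions for illustration, not values specified by the benchmark:

```python
import math

def finmr_consistent(reported: float, computed: float,
                     rel_tol: float = 1e-4) -> bool:
    """Return True when a reported XBRL value matches the value recomputed
    from the taxonomy's calculation relationships (FinMR-style check).
    rel_tol is an assumed tolerance, not one defined by the benchmark."""
    return math.isclose(reported, computed, rel_tol=rel_tol)

# Made-up component line items and a deliberately inconsistent total.
components = [1200.0, 345.5, 54.5]
computed_total = sum(components)   # 1600.0
reported_total = 1700.0

assert not finmr_consistent(reported_total, computed_total)  # mismatch flagged
assert finmr_consistent(computed_total, computed_total)      # exact match passes
```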