---
license: mit
task_categories:
- table-question-answering
configs:
- config_name: table
  data_files: sqa_table.jsonl
- config_name: test_query
  data_files: sqa_query.jsonl
---
📄 [Paper](https://arxiv.org/abs/2504.01346) | 👨🏻‍💻 [Code](https://github.com/jiaruzouu/T-RAG)
## Introduction
Retrieval-Augmented Generation (RAG) has become a key paradigm to enhance Large Language Models (LLMs) with external knowledge. While most RAG systems focus on **text corpora**, real-world information is often stored in **tables** across web pages, Wikipedia, and relational databases. Existing methods struggle to retrieve and reason across **multiple heterogeneous tables**.
We release MultiTableQA, a comprehensive benchmark comprising five datasets that cover table fact-checking, single-hop QA, and multi-hop QA:
| Dataset | Link |
|-----------------------|------|
| MultiTableQA-TATQA    | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_TATQA) |
| MultiTableQA-TabFact  | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_TabFact) |
| MultiTableQA-SQA      | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_SQA) |
| MultiTableQA-WTQ      | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_WTQ) |
| MultiTableQA-HybridQA | 🤗 [dataset link](https://huggingface.co/datasets/jiaruz2/MultiTableQA_HybridQA) |
MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.
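The two configs declared above (`table` for the table corpus and `test_query` for the evaluation queries) can be loaded directly with the 🤗 `datasets` library. Below is a minimal sketch; the repository id `jiaruz2/MultiTableQA_SQA` and the default `train` split name are assumptions based on the links above and on how single-file JSONL configs are typically exposed:

```python
from datasets import load_dataset

# Table corpus config (backed by sqa_table.jsonl); repo id assumed from the table above.
tables = load_dataset("jiaruz2/MultiTableQA_SQA", "table", split="train")

# Evaluation query config (backed by sqa_query.jsonl).
queries = load_dataset("jiaruz2/MultiTableQA_SQA", "test_query", split="train")

# Inspect one record from each config; the exact field names depend on the JSONL schema.
print(tables[0])
print(queries[0])
```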
---
## Citation
If you find our work useful, please cite:
```bibtex
@misc{zou2025rag,
      title={RAG over Tables: Hierarchical Memory Index, Multi-Stage Retrieval, and Benchmarking},
      author={Jiaru Zou and Dongqi Fu and Sirui Chen and Xinrui He and Zihao Li and Yada Zhu and Jiawei Han and Jingrui He},
      year={2025},
      eprint={2504.01346},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.01346},
}
```