Update README.md
README.md CHANGED
@@ -9,7 +9,11 @@ configs:
data_files: sqa_query.jsonl
---

-[Paper](https://
+[Paper](https://arxiv.org/abs/2504.01346) | 👨🏻‍💻 [Code](https://github.com/jiaruzouu/T-RAG)
+
+## Introduction
+
+Retrieval-Augmented Generation (RAG) has become a key paradigm to enhance Large Language Models (LLMs) with external knowledge. While most RAG systems focus on **text corpora**, real-world information is often stored in **tables** across web pages, Wikipedia, and relational databases. Existing methods struggle to retrieve and reason across **multiple heterogeneous tables**.

For MultiTableQA, we release a comprehensive benchmark, including five different datasets covering table fact-checking, single-hop QA, and multi-hop QA:
| Dataset | Link |
@@ -23,32 +27,6 @@ For MultiTableQA, we release a comprehensive benchmark, including five different

MultiTableQA extends the traditional single-table QA setting into a multi-table retrieval and question answering benchmark, enabling more realistic and challenging evaluations.

-## Sample Usage
-
-This dataset (`MultiTableQA-SQA`) is part of the larger **MultiTableQA** benchmark. To prepare the full benchmark, you can follow these steps from the official T-RAG GitHub repository.
-
-First, clone the repository and set up the environment:
-
-```bash
-git clone https://github.com/jiaruzouu/T-RAG.git
-cd T-RAG
-
-conda create -n trag python=3.11.9
-conda activate trag
-
-# Install dependencies
-pip install -r requirements.txt
-```
-
-Then, navigate to the `table2graph` directory and run the data preparation script:
-
-```bash
-cd table2graph
-bash scripts/prepare_data.sh
-```
-
-This script will automatically fetch the source tables, apply decomposition (row/column splitting), and generate the benchmark splits, including the `MultiTableQA-SQA` data available in this repository.
-
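After running the script, the `sqa_query.jsonl` file named in this card's `data_files` config can be loaded directly to sanity-check the output. A minimal sketch, assuming the Hugging Face `datasets` library is installed and the file sits in the working directory (the record schema depends on the generated split):

```python
from datasets import load_dataset

# Load the JSONL query split referenced by the `data_files` entry in the
# YAML header. Adjust the path if prepare_data.sh writes the benchmark
# files to a different output directory.
queries = load_dataset("json", data_files="sqa_query.jsonl", split="train")

print(queries)      # number of rows and column names
print(queries[0])   # one query record; field names depend on the release
```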
---
# Citation