---
language:
- en
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- reinforcement-learning
- text-generation
- question-answering
tags:
- reasoning
- reinforcement-learning
- rlhf
- table-qa
- table-r1
- table-reasoning
- tabular-data
- verl
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
# Table-R1-Zero (VERL Format)
This dataset contains **69,265** table reasoning problems from the Table-R1-Zero-Dataset, converted to VERL (Volcano Engine Reinforcement Learning) format for reinforcement learning training workflows.
**Source**: [Table-R1/Table-R1-Zero-Dataset](https://huggingface.co/datasets/Table-R1/Table-R1-Zero-Dataset)
**License**: Apache 2.0
> **Note**: System prompts have been removed from all examples for better compatibility with other VERL datasets. The dataset now contains only user messages with table reasoning problems. Ground truth answers have been converted from list format to JSON string format for consistency.
## Dataset Description
Table-R1-Zero is a curated collection of table reasoning problems designed for training language models to understand and reason over structured tabular data. The problems require models to:
- Parse and understand table structures
- Answer questions based on table content
- Perform reasoning across table rows and columns
- Handle various table formats and question types
- Extract specific information from complex tables
The dataset draws problems from multiple sources, including WikiTableQuestions (WTQ) and other table QA benchmarks, making it diverse and challenging for model training.
## Dataset Structure
The dataset follows the VERL format with the following fields:
- **`data_source`** (string): Original source identifier (e.g., "WTQ" for WikiTableQuestions)
- **`prompt`** (list): Chat template format with role/content structure
  - Contains the user message with the table and question
  - System prompts removed for compatibility
- **`ability`** (string): Task category; always "table_reasoning" for this dataset
- **`reward_model`** (dict): Evaluation information for RL training
  - `style` (string): Evaluation method; "rule" for answer-based evaluation
  - `ground_truth` (string): Expected answer(s) as a JSON-encoded array string (e.g., `"[\"2004\"]"`)
- **`extra_info`** (dict): Additional metadata
  - `index` (int64): Sequential example index
### Schema Details
```python
{
    'data_source': 'WTQ',
    'prompt': [
        {
            'role': 'user',
            'content': 'Instruction\nAnswer the question based on the provided table...'
        }
    ],
    'ability': 'table_reasoning',
    'reward_model': {
        'style': 'rule',
        'ground_truth': '["answer"]'
    },
    'extra_info': {
        'index': 0
    }
}
```
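Because `reward_model.style` is `"rule"`, a reward function can score completions by comparing the model's final JSON answer against `ground_truth`. The helper below is a hypothetical sketch of such a check (exact-match, order-insensitive), not the scorer used by any particular VERL recipe:

```python
import json
import re

def rule_based_reward(response: str, ground_truth: str) -> float:
    """Hypothetical rule-based scorer: 1.0 if the model's final JSON
    answer matches the ground-truth list (order-insensitive), else 0.0."""
    expected = sorted(str(a).strip().lower() for a in json.loads(ground_truth))
    # Grab the last ```json ... ``` block from the model response.
    blocks = re.findall(r"```json\s*(\{.*?\})\s*```", response, re.DOTALL)
    if not blocks:
        return 0.0
    try:
        answer = json.loads(blocks[-1]).get("answer", [])
    except (json.JSONDecodeError, AttributeError):
        return 0.0
    predicted = sorted(str(a).strip().lower() for a in answer)
    return 1.0 if predicted == expected else 0.0
```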
### Sample Problem
```python
{
    "data_source": "WTQ",
    "prompt": [
        {
            "role": "user",
            "content": "Instruction\nAnswer the question based on the provided table.\n\n\nTable\nTable Title: Portland Timbers (2001–10)\nTable Content:\n| Year | Division | League | Regular Season | Playoffs | Open Cup | Avg. Attendance |\n| 2001 | 2 | USL A-League | 4th, Western | Quarterfinals | Did not qualify | 3,862 |\n| 2002 | 2 | USL A-League | 2nd, Pacific | 1st Round | Did not qualify | 4,684 |\n| 2003 | 2 | USL A-League | 3rd, Western | Conference Semifinals | 3rd Round | 5,109 |\n| 2004 | 2 | USL A-League | 1st, Western | Conference Finals | 2nd Round | 5,024 |\n| 2005 | 2 | USL First Division | 5th | Quarterfinals | 4th Round | 6,028 |\n\nQuestion\nwhat was the last year where this team was a part of the usl a-league?\n\nAnswer Format\nThe final answer should be concise and use the following format:\n```json\n{\n \"answer\": [\n \"answer1\",\n \"answer2\",\n ...\n ]\n}\n```"
        }
    ],
    "ability": "table_reasoning",
    "reward_model": {
        "style": "rule",
        "ground_truth": "[\"2004\"]"
    },
    "extra_info": {
        "index": 0
    }
}
```
## Usage
```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("sungyub/table-r1-zero-verl")

# Load a specific split
train_dataset = load_dataset("sungyub/table-r1-zero-verl", split="train")
test_dataset = load_dataset("sungyub/table-r1-zero-verl", split="test")

# Access an example
example = dataset['train'][0]
print(example['prompt'][0]['content'])          # Table and question
print(example['reward_model']['ground_truth'])  # Expected answer
print(example['data_source'])                   # Source dataset

# Stream the dataset for memory efficiency
dataset = load_dataset("sungyub/table-r1-zero-verl", streaming=True)
for example in dataset['train']:
    # Process examples one at a time
    pass
```
## Statistics
### Overall
- **Total examples**: 69,265
- **Train split**: 48,563 examples (70.1%)
- **Test split**: 20,702 examples (29.9%)
- **Format**: Parquet files with Git LFS
- **Total size**: ~40 MB (compressed)
### Data Sources
The problems are primarily sourced from:
- **WTQ (WikiTableQuestions)**: Table QA benchmark dataset
- Other table reasoning datasets from the Table-R1 collection
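You can verify the per-source breakdown yourself by counting the `data_source` column:

```python
from collections import Counter
from datasets import load_dataset

train = load_dataset("sungyub/table-r1-zero-verl", split="train")
print(Counter(train["data_source"]).most_common())
```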
### Answer Statistics
- Most examples have a single answer (e.g., `["2004"]`)
- Some examples have multiple valid answers (e.g., `["Bangkok", "Thailand"]`)
- All answers are encoded as JSON string arrays for consistency
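Since each `ground_truth` is a JSON-encoded string rather than a Python list, decode it before use:

```python
import json

ground_truth = '["Bangkok", "Thailand"]'  # as stored in the dataset
answers = json.loads(ground_truth)        # -> ['Bangkok', 'Thailand']
```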
## Data Quality
**High-Quality Problems**:
- ✅ **Structured data** - Well-formatted tables with clear schemas
- ✅ **RL-focused** - Designed for reinforcement learning training
- ✅ **Verified answers** - Ground truth answers for reward model evaluation
- ✅ **Compatible format** - Matches structure of other VERL datasets
- ✅ **Clean prompts** - System prompts removed for consistency
- ✅ **Diverse sources** - Multiple table QA benchmarks included
## Problem Types
The dataset covers various table reasoning challenges including:
1. **Lookup queries** - Finding specific values in tables
2. **Aggregation** - Counting, summing, averaging operations
3. **Comparison** - Finding max/min, comparing values across rows
4. **Temporal reasoning** - Date-based questions and year comparisons
5. **Multi-hop reasoning** - Combining information from multiple rows/columns
6. **Filtering and sorting** - Identifying items matching criteria
## Conversion Details
The conversion from the original Table-R1-Zero-Dataset involved the following steps:
1. **Loaded source dataset** from HuggingFace Hub (train and test splits)
2. **Removed system prompts** for compatibility with other VERL datasets
3. **Converted ground truth** from `List[string]` to JSON-encoded string format
4. **Applied strict VERL schema** with sequential indexing in extra_info
5. **Reordered dictionary keys** using PyArrow schema casting for consistency
6. **Output to Parquet format** with train/test splits maintained
7. **Validated against reference datasets** (skywork-or1-code-verl)
### Key Transformations
- Original: `ground_truth: ["2004"]` (list type)
- Converted: `ground_truth: "[\"2004\"]"` (string type, JSON encoded)
- Removed: `id` and `task_type` fields from extra_info
- Added: Sequential `index` field starting from 0
Conversion script: `transform_to_verl.py` (included in repository)
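For reference, the per-example transformation is roughly equivalent to the sketch below. Field names follow the schema described above; the exact logic lives in `transform_to_verl.py` and may differ in detail:

```python
import json

def to_verl(example: dict, index: int) -> dict:
    """Sketch of the per-example conversion (not the exact script)."""
    return {
        "data_source": example["data_source"],
        # Keep only user turns; system prompts are dropped.
        "prompt": [m for m in example["prompt"] if m["role"] == "user"],
        "ability": "table_reasoning",
        "reward_model": {
            "style": "rule",
            # List[str] -> JSON-encoded string: ["2004"] -> '["2004"]'
            "ground_truth": json.dumps(example["reward_model"]["ground_truth"]),
        },
        # `id` and `task_type` are dropped; a sequential index is added.
        "extra_info": {"index": index},
    }
```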
## Use Cases
This dataset is ideal for:
- **Reinforcement Learning**: Training models on table reasoning with RL algorithms
- **Fine-tuning**: Improving structured data understanding capabilities
- **Table QA**: Training models to answer questions about tabular data
- **Dataset Merging**: Compatible with other VERL datasets for combined training
- **Evaluation**: Test split for assessing table reasoning capabilities
- **Multi-task Learning**: Can be combined with code/math VERL datasets
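Because the schema matches the other VERL datasets listed under Schema Compatibility below, splits can be concatenated directly for merged or multi-task training:

```python
from datasets import concatenate_datasets, load_dataset

tables = load_dataset("sungyub/table-r1-zero-verl", split="train")
code = load_dataset("sungyub/skywork-or1-code-verl", split="train")

# Identical schemas allow direct concatenation for multi-task RL training.
merged = concatenate_datasets([tables, code]).shuffle(seed=42)
```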
## Technical Details
### VERL Format Benefits
- **Standardized structure**: Consistent across all VERL datasets
- **Rich metadata**: Includes source information and indexing
- **Chat template**: Ready for instruction-tuned models
- **Reward model integration**: Ground truth answers for RL training
- **Dataset compatibility**: Works seamlessly with other VERL datasets
- **Efficient storage**: Parquet format with columnar compression
### Schema Compatibility
This dataset uses the same schema as:
- [sungyub/skywork-or1-code-verl](https://huggingface.co/datasets/sungyub/skywork-or1-code-verl)
- [sungyub/eurus-2-code-verl](https://huggingface.co/datasets/sungyub/eurus-2-code-verl)
- [sungyub/openr1-math-verl](https://huggingface.co/datasets/sungyub/openr1-math-verl)
All fields follow strict ordering and typing for maximum compatibility across the VERL ecosystem.
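A quick way to confirm compatibility before merging is to compare the `features` of two datasets directly:

```python
from datasets import load_dataset

a = load_dataset("sungyub/table-r1-zero-verl", split="train")
b = load_dataset("sungyub/openr1-math-verl", split="train")
assert a.features == b.features, "Schemas diverge; do not merge."
```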
## Additional Information
For more information about VERL format and usage:
- [VERL Documentation](https://verl.readthedocs.io/en/latest/preparation/prepare_data.html)
- [VERL GitHub Repository](https://github.com/volcengine/verl)
## Citation
If you use this dataset, please cite the original Table-R1-Zero-Dataset:
```bibtex
@misc{table-r1-zero-dataset,
  title={Table-R1-Zero-Dataset},
  author={Table-R1},
  year={2024},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/Table-R1/Table-R1-Zero-Dataset}
}
```
## Changelog
### 2025-10-29 - Initial Release
- Converted 69,265 table reasoning problems to VERL format
- Split into train (48,563) and test (20,702) sets
- Removed system prompts for compatibility with other VERL datasets
- Converted ground truth from list to JSON string format
- Applied strict VERL schema with sequential indexing
- Validated against reference VERL datasets
- Maintained original train/test splits from source dataset