---
annotations_creators:
- crowdsourced
language:
- en
- ar
- bn
- fi
- id
- ja
- sw
- ko
- ru
- te
- th
language_creators:
- crowdsourced
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: Answerable TyDi QA
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---

# Dataset Card for "answerable-tydiqa"

## Dataset Description

- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Paper:** [Paper](https://aclanthology.org/2020.tacl-1.30/)
- **Size of downloaded dataset files:** 75.43 MB
- **Size of the generated dataset:** 131.78 MB
- **Total amount of disk used:** 207.21 MB

### Dataset Summary

[TyDi QA](https://huggingface.co/datasets/tydiqa) is a question answering dataset covering 11 typologically diverse languages.
Answerable TyDi QA is an extension of the GoldP subtask of the original TyDi QA dataset that also includes unanswerable questions.

## Dataset Structure

The dataset contains a train and a validation set, with 116067 and 13325 examples, respectively. Access them with:

```py
from datasets import load_dataset

dataset = load_dataset("copenlu/answerable_tydiqa")
train_set = dataset["train"]
validation_set = dataset["validation"]
```

### Data Instances

Here is an example of an instance of the dataset:

```
{'question_text': 'dimanakah Dr. Ernest François Eugène Douwes Dekker meninggal?',
 'document_title': 'Ernest Douwes Dekker',
 'language': 'indonesian',
 'annotations': {'answer_start': [45],
                 'answer_text': ['28 Agustus 1950']},
 'document_plaintext': 'Ernest Douwes Dekker wafat dini hari tanggal 28 Agustus 1950 (tertulis di batu nisannya; 29 Agustus 1950 versi van der Veur, 2006) dan dimakamkan di TMP Cikutra, Bandung.',
 'document_url': 'https://id.wikipedia.org/wiki/Ernest%20Douwes%20Dekker'}
```

Description of the dataset columns:

| Column name | Type | Description |
| ----------- | ---- | ----------- |
| `document_title` | `str` | The title of the Wikipedia article the instance was generated from |
| `document_url` | `str` | The URL of that article |
| `language` | `str` | The language of the instance |
| `question_text` | `str` | The question to answer |
| `document_plaintext` | `str` | The context: a Wikipedia paragraph that may or may not contain the answer to the question |
| `annotations["answer_start"]` | `list[int]` | The character index in `document_plaintext` where the answer starts; an empty list if the question is unanswerable |
| `annotations["answer_text"]` | `list[str]` | The answer, a span of text from `document_plaintext`; an empty list if the question is unanswerable |

**Notice:** If the question is *answerable*, `annotations["answer_start"]` and `annotations["answer_text"]` are lists of length 1
(in some variants of the dataset these lists may be longer, e.g. when more than one person annotated the instance, but not here).
If the question is *unanswerable*, both are empty lists of length 0.
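
For example, here is a minimal sketch of checking whether an instance is answerable and recovering its answer span (the `get_answer` helper is illustrative, not part of the dataset):

```py
def get_answer(example):
    """Return (is_answerable, answer_text) for one example, following the schema above."""
    starts = example["annotations"]["answer_start"]
    texts = example["annotations"]["answer_text"]
    if not texts:  # both lists are empty for unanswerable questions
        return False, None
    start, text = starts[0], texts[0]
    # Sanity check: the answer should be the span of the context starting at answer_start
    assert example["document_plaintext"][start:start + len(text)] == text
    return True, text

print(get_answer(train_set[0]))
```
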
## Useful stuff

Check out the [datasets documentation](https://huggingface.co/docs/datasets/quickstart) to learn how to manipulate and use the dataset. In particular, the following functions may be useful:

- `dataset.filter`, for filtering out data (e.g. for keeping only the instances of a specific language).
- `dataset.map`, for transforming the dataset.
- `dataset.to_pandas`, for converting the dataset into a `pandas.DataFrame`.
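
As a rough sketch combining these (the `is_answerable` column added here is illustrative, not part of the dataset):

```py
from datasets import load_dataset

dataset = load_dataset("copenlu/answerable_tydiqa")

# Keep only the Indonesian examples
indonesian = dataset.filter(lambda ex: ex["language"] == "indonesian")

# Add a boolean flag marking whether the question is answerable
with_labels = dataset.map(
    lambda ex: {"is_answerable": len(ex["annotations"]["answer_text"]) > 0}
)

# Work with the training split as a pandas DataFrame
train_df = with_labels["train"].to_pandas()
```
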

## Citation

```bibtex
@article{tydiqa,
  title   = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author  = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year    = {2020},
  journal = {Transactions of the Association for Computational Linguistics}
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
|