---
license: mit
task_categories:
  - question-answering
language:
  - en
size_categories:
  - 10K<n<100K
configs:
  - config_name: dataset
    data_files: dataset.json
  - config_name: forget01
    data_files: forget01.json
  - config_name: forget05
    data_files: forget05.json
  - config_name: forget10
    data_files: forget10.json
  - config_name: retain99
    data_files: retain99.json
  - config_name: retain95
    data_files: retain95.json
  - config_name: retain90
    data_files: retain90.json
  - config_name: full
    data_files: full.json
  - config_name: world_facts
    data_files: world_facts.json
  - config_name: real_authors
    data_files: real_authors.json
  - config_name: world_facts_perturbed
    data_files: world_facts_perturbed.json
  - config_name: real_authors_perturbed
    data_files: real_authors_perturbed.json
---

# CopyrightQA

This dataset is derived from NarrativeQA (Kočiský et al., 2018), a dataset for evaluating reading comprehension and narrative understanding.

This dataset is an extraction of the question-answer pairs from the original NarrativeQA dataset. Its original purpose is to evaluate the forgetting (unlearning) ability of LLMs using TOFU (Maini et al., 2024), a benchmark for evaluating the unlearning performance of LLMs on realistic tasks.
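
The forget/retain configs listed in the metadata above can be loaded directly with the `datasets` library. A minimal sketch follows; the repository id (`kqwang/copyrightQA`) and the column names (`question`, `answer`) are assumptions not confirmed by this card, so adjust them to the actual repo path and schema.

```python
from datasets import load_dataset

# Load the 10% forget split and its complementary 90% retain split.
# "kqwang/copyrightQA" is an assumed repository id; replace it with the real one.
forget = load_dataset("kqwang/copyrightQA", "forget10", split="train")
retain = load_dataset("kqwang/copyrightQA", "retain90", split="train")

print(len(forget), len(retain))
# Column names such as "question"/"answer" are assumed, not documented here.
print(forget[0])
```

The same pattern applies to the other configs (e.g. `forget01`/`retain99`, `world_facts`, `real_authors_perturbed`), since each config maps to a single JSON file with a default `train` split.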

## Citation

If you use this dataset, please also cite the original NarrativeQA dataset:

```bibtex
@article{narrativeqa,
  author  = {Tom\'a\v s Ko\v cisk\'y and Jonathan Schwarz and Phil Blunsom and
             Chris Dyer and Karl Moritz Hermann and G\'abor Melis and
             Edward Grefenstette},
  title   = {The {NarrativeQA} Reading Comprehension Challenge},
  journal = {Transactions of the Association for Computational Linguistics},
  url     = {https://TBD},
  volume  = {TBD},
  year    = {2018},
  pages   = {TBD},
}
```