---
configs:
  - config_name: default
    data_files:
      - split: test
        path: qrels/test.jsonl
  - config_name: corpus
    data_files:
      - split: corpus
        path: corpus.jsonl
  - config_name: queries
    data_files:
      - split: queries
        path: queries.jsonl
---

Dataset Summary

NQ-Fa is a Persian (Farsi) retrieval dataset for open-domain question answering. It is a machine-translated version of the English Natural Questions (NQ) dataset and belongs to the BEIR-Fa collection within FaMTEB (the Farsi Massive Text Embedding Benchmark).

  • Language(s): Persian (Farsi)
  • Task(s): Retrieval (Question Answering)
  • Source: Translated from English NQ using Google Translate
  • Part of FaMTEB: Yes — under BEIR-Fa
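
The dataset ships as three configs (corpus, queries, and qrels, as declared in the metadata above). A minimal loading sketch with the 🤗 `datasets` library follows; the repo id is a placeholder for wherever the dataset is actually hosted, and the record fields depend on the JSONL files themselves:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hugging Face dataset path.
REPO = "<org>/nq-fa"

corpus = load_dataset(REPO, "corpus", split="corpus")     # Persian passages
queries = load_dataset(REPO, "queries", split="queries")  # Persian questions
qrels = load_dataset(REPO, "default", split="test")       # relevance judgments

print(corpus[0])
print(queries[0])
print(qrels[0])
```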

Supported Tasks and Leaderboards

This dataset evaluates how well text embedding models can retrieve relevant answer passages (Wikipedia passages translated into Persian) in response to natural-language questions originally issued to Google Search. Results are benchmarked on the Persian MTEB Leaderboard on Hugging Face Spaces (language filter: Persian).
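
As a rough sketch of what such an evaluation does (this is not the official MTEB harness; the model name and the toy query/passages are illustrative), one embeds queries and passages and ranks passages by cosine similarity:

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative multilingual embedding model; FaMTEB benchmarks many models.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

queries = ["چه کسی برج ایفل را ساخت؟"]        # "Who built the Eiffel Tower?"
passages = [
    "برج ایفل توسط گوستاو ایفل ساخته شد.",    # relevant passage
    "تهران پایتخت ایران است.",                # distractor
]

q_emb = model.encode(queries, convert_to_tensor=True)
p_emb = model.encode(passages, convert_to_tensor=True)

scores = util.cos_sim(q_emb, p_emb)   # shape: (num_queries, num_passages)
top = int(scores.argmax(dim=1)[0])
print(passages[top])                  # top-ranked passage for the query
```

A real benchmark run aggregates ranking metrics such as nDCG@10 over all queries against the qrels.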

Construction

The construction process included:

  • Starting with the Natural Questions (NQ) English dataset, containing real user search queries
  • Using the Google Translate API to translate both the questions and the annotated Wikipedia passages into Persian (see the sketch after this list)
  • Retaining the original query-passage relevance mappings (qrels) for retrieval evaluation
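
A sketch of what the translation step could look like with the Google Cloud Translation (v2) client is below. This is illustrative only; the authors' actual pipeline, batching, and error handling are not documented here:

```python
from google.cloud import translate_v2 as translate

# Requires GOOGLE_APPLICATION_CREDENTIALS to point at a service-account key.
client = translate.Client()

def to_persian(text: str) -> str:
    """Translate an English query or passage into Persian ("fa")."""
    result = client.translate(text, source_language="en", target_language="fa")
    return result["translatedText"]

print(to_persian("who built the eiffel tower"))
```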

As described in the FaMTEB paper, all BEIR-Fa datasets (including NQ-Fa) underwent:

  • BM25 retrieval comparison between English and Persian
  • LLM-based translation quality check using the GEMBA-DA framework

These evaluations confirmed a high level of translation quality.
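
As a toy illustration of the BM25 side of this check (using the `rank_bm25` package; whitespace tokenization is a simplification, and the paper's exact setup may differ), one scores a translated query against the translated corpus, then compares retrieval quality with the English original:

```python
from rank_bm25 import BM25Okapi

# Tiny toy corpus; the real comparison runs BM25 over the full English
# and Persian collections and compares the resulting retrieval metrics.
corpus_fa = [
    "برج ایفل توسط گوستاو ایفل ساخته شد",
    "تهران پایتخت ایران است",
]
bm25 = BM25Okapi([doc.split() for doc in corpus_fa])

query = "چه کسی برج ایفل را ساخت".split()
print(bm25.get_scores(query))  # higher score = stronger lexical match
```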

Data Splits

Defined in the FaMTEB paper (Table 5):

  • Train: 0 samples
  • Dev: 0 samples
  • Test: 2,685,669 samples

Total: ~2.69 million examples (per the dataset metadata)
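
These figures can be sanity-checked against the hosted files (the repo id is again a placeholder, as in the loading sketch above):

```python
from datasets import load_dataset

qrels = load_dataset("<org>/nq-fa", "default", split="test")
print(len(qrels))  # compare with the test count reported above
```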