---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: query
    dtype: string
  - name: docs
    sequence: string
  - name: scores
    sequence: float64
  splits:
  - name: train
    num_bytes: 957899062
    num_examples: 502939
  download_size: 917834227
  dataset_size: 957899062
---
# Dataset Card for MS MARCO Hard Negatives LLM Scores (OpenSearch)

## Dataset Summary

This dataset is derived from the MS MARCO train split (Hugging Face) and provides hard-negative mining annotations for training retrieval systems. For each query in the source split, the top-100 candidate documents are retrieved with opensearch-project/opensearch-neural-sparse-encoding-doc-v1, and re-ranking scores are attached from bi-encoder and cross-encoder teachers: opensearch-project/opensearch-neural-sparse-encoding-v1, Alibaba-NLP/gte-large-en-v1.5, BAAI/bge-en-icl, cross-encoder/ms-marco-MiniLM-L12-v2, BAAI/bge-reranker-v2-minicpm-layerwise, and BAAI/bge-reranker-v2.5-gemma2-lightweight.

⚠️ **Licensing/Usage:** Because this dataset is derived from MS MARCO, please review Microsoft's MS MARCO terms of use before using this dataset.
## How to Load

```python
import datasets

ds = datasets.load_dataset("opensearch-project/msmarco-hard-negatives-llm-scores", split="train")
```
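Each record pairs a query id with its candidate document ids and one teacher score per candidate; `docs` and `scores` are index-aligned, so `scores[i]` belongs to `docs[i]`. A minimal sketch of this layout, using made-up ids and scores (the real records hold 100 candidates each):

```python
# Hypothetical record mirroring the dataset schema; all values below are invented.
record = {
    "query": "1048579",                         # MS MARCO query id (string)
    "docs": ["7187155", "7187157", "2912791"],  # candidate passage ids
    "scores": [9.87, 11.25, 3.14],              # one teacher score per candidate
}

# docs and scores are aligned by position
assert len(record["docs"]) == len(record["scores"])

# rank candidates by teacher score, highest first
ranked = sorted(zip(record["docs"], record["scores"]), key=lambda p: -p[1])
```

Candidates ranked low by the teachers are the "hard negatives": they were retrieved in the top-100, so they are lexically or semantically close to the query, yet the teachers judge them non-relevant.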
## Training Example

A related training example is available in the opensearch-sparse-model-tuning-sample repository on GitHub.

To convert the dataset to a text-only format for training with the sample repository:
```python
import datasets

# 1) Load the hard-negative annotations plus the BeIR queries and corpus
msmarco_hard_negatives = datasets.load_dataset(
    "opensearch-project/msmarco-hard-negatives-llm-scores", split="train"
)
msmarco_queries = datasets.load_dataset("BeIR/msmarco", "queries")["queries"]
msmarco_corpus = datasets.load_dataset("BeIR/msmarco", "corpus")["corpus"]

# 2) Fix occasional text-encoding issues in the corpus
def transform_str(s):
    try:
        return s.encode("latin1").decode("utf-8")
    except (UnicodeEncodeError, UnicodeDecodeError):
        return s

msmarco_corpus = msmarco_corpus.map(
    lambda x: {"text": transform_str(x["text"])}, num_proc=30
)

# 3) Build convenient id-to-text lookup tables
id_to_text = dict(zip(msmarco_corpus["_id"], msmarco_corpus["text"]))
qid_to_text = dict(zip(msmarco_queries["_id"], msmarco_queries["text"]))

# 4) Replace IDs with raw texts to get a text-only dataset
msmarco_hard_negatives = msmarco_hard_negatives.map(
    lambda x: {
        "query": qid_to_text[x["query"]],
        "docs": [id_to_text[doc] for doc in x["docs"]],
    },
    num_proc=30,
)

# 5) Save to disk (the directory will contain the text-only view)
msmarco_hard_negatives.save_to_disk("data/msmarco_ft_llm_scores")
```
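One common way to consume per-document teacher scores like these is knowledge distillation: train the student retriever so that its score distribution over a query's candidates matches the teacher's. Below is a minimal pure-Python sketch of a KL-divergence distillation loss for a single query; it is an illustration of the idea, not the loss implemented in the sample training repository.

```python
import math

def softmax(xs, t=1.0):
    """Numerically stable softmax with temperature t."""
    m = max(x / t for x in xs)
    exps = [math.exp(x / t - m) for x in xs]
    z = sum(exps)
    return [e / z for e in exps]

def kl_distill_loss(student_scores, teacher_scores, t=1.0):
    """KL(teacher || student) over one query's candidate list.

    student_scores: the student model's scores for the candidates
    teacher_scores: the aligned `scores` field from this dataset
    """
    p = softmax(teacher_scores, t)
    q = softmax(student_scores, t)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when the student reproduces the teacher's ranking distribution exactly and grows as the distributions diverge; the temperature `t` controls how sharply the teacher's preferences are emphasized.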
## Citation

If you use this dataset, please cite: *Towards Competitive Search Relevance For Inference-Free Learned Sparse Retrievers*

```bibtex
@misc{geng2024competitivesearchrelevanceinferencefree,
      title={Towards Competitive Search Relevance For Inference-Free Learned Sparse Retrievers},
      author={Zhichao Geng and Dongyu Ru and Yang Yang},
      year={2024},
      eprint={2411.04403},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2411.04403},
}
```
## License

This project is licensed under the Apache-2.0 License.

## Copyright

Copyright OpenSearch Contributors. See NOTICE for details.