library_name: transformers
tags: []
license: cc-by-4.0
pipeline_tag: feature-extraction
Model Card for RLHN E5-base
This model identifies and relabels false negatives in information retrieval (IR) training datasets, as described in the paper Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval. It is based on the E5-base embedding model.
Model Details
- Developed by: [More Information Needed]
- Model type: BertModel
- Language(s) (NLP): en
- License: cc-by-4.0
- Finetuned from model: models/e5-base-unsupervised-bge-retrieval-7-datasets-250K-default
Model Sources
- Repository: [More Information Needed]
- Paper: Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval
- Code: https://github.com/studio-name/rlhn
Uses
Direct Use
This model is designed for identifying and relabeling hard negatives in information retrieval training datasets. It can be used to improve the quality of training data for retrieval and reranker models.
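As a rough illustration of the cascading idea, the sketch below screens each (query, hard negative) pair with a cheap LLM judge and escalates only flagged pairs to a stronger LLM. This is a minimal sketch, not the paper's exact pipeline: prompt_llm and the model names are hypothetical placeholders.

# Hypothetical sketch of cascaded relabeling of hard negatives.
# prompt_llm() and the model names are placeholders, not the paper's
# exact prompts, models, or decision rules.
def prompt_llm(model_name: str, query: str, passage: str) -> bool:
    """Placeholder: return True if the LLM judges the passage relevant to the query."""
    raise NotImplementedError

def relabel_hard_negatives(query: str, hard_negatives: list[str]) -> dict[str, str]:
    labels = {}
    for passage in hard_negatives:
        # Stage 1: a cheap LLM screens every (query, hard-negative) pair.
        if not prompt_llm("cheap-judge-llm", query, passage):
            labels[passage] = "negative"      # stage 1 says not relevant: keep as negative
        # Stage 2: only flagged pairs are escalated to a stronger LLM.
        elif prompt_llm("strong-judge-llm", query, passage):
            labels[passage] = "positive"      # confirmed false negative: relabel as positive
        else:
            labels[passage] = "negative"      # stronger LLM disagrees: keep as negative
    return labels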
Downstream Use
Fine-tuning retrieval and reranker models using the relabeled data can lead to significant improvements in retrieval effectiveness, especially on out-of-distribution datasets.
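One way to consume the relabeled triples is standard contrastive fine-tuning with the sentence-transformers library. This is a minimal sketch under assumptions: the triple format, example texts, and starting checkpoint name are illustrative, not the paper's exact training setup.

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed triple format after relabeling: (query, relabeled positive, hard negative).
train_examples = [
    InputExample(texts=[
        "query: what causes rain?",
        "passage: Rain forms when water vapour condenses and falls.",
        "passage: Stock markets closed higher on Monday.",
    ]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

model = SentenceTransformer("intfloat/e5-base")  # assumed starting checkpoint
# In-batch contrastive loss; relabeled false negatives no longer act as negatives.
train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)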
Out-of-Scope Use
This model is not intended for use in applications that require real-time or low-latency performance, as the relabeling process involves computationally intensive LLM inference.
Bias, Risks, and Limitations
The effectiveness of this model depends on the quality and diversity of the LLMs used for relabeling. Biases in the LLMs may lead to biased relabeling and affect the performance of downstream models.
Recommendations
Users should be aware of the potential biases and limitations of the LLMs used for relabeling and carefully evaluate the impact of the relabeled data on the performance of downstream models.
How to Get Started with the Model
Use the model with the transformers library:
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "models/e5-base-unsupervised-bge-retrieval-7-datasets-250K-default"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# E5-style models typically expect a "query: " or "passage: " prefix on input text.
text = "query: This is an example sentence."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings over non-padding positions to get one vector per text.
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768]) for a BERT-base backbone
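Continuing from the snippet above, passages can be embedded the same way and scored against a query by cosine similarity. A minimal retrieval sketch; the embed() helper and example texts are illustrative, and the "query:"/"passage:" prefixes assume E5 conventions:

# Illustrative helper (reuses tokenizer, model, torch, and F from above).
def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    mask = batch["attention_mask"].unsqueeze(-1)
    emb = (out.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
    return F.normalize(emb, p=2, dim=1)

query_emb = embed(["query: what causes rain?"])
passage_emb = embed(["passage: Rain forms when water vapour condenses and falls.",
                     "passage: Stock markets closed higher on Monday."])
print(query_emb @ passage_emb.T)  # cosine scores; higher means more relevant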
Training Details
Training Data
The model was fine-tuned on a subset of the BGE training collection (seven datasets and roughly 250K training pairs, as the checkpoint name suggests). The tokenizer uses the standard BERT vocabulary of 30,522 tokens.
Training Procedure
The model was fine-tuned with a semi-supervised procedure in which a cascade of LLMs identifies hard negatives that are actually relevant (false negatives) and relabels them as positives before training.
Training Hyperparameters
- Training regime: bfloat16 mixed precision
Evaluation
Testing Data, Factors & Metrics
Testing Data
BEIR and AIR-Bench
Metrics
nDCG@10 (normalized discounted cumulative gain at rank 10)
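For reference, a minimal sketch of one common per-query definition of nDCG@10 (linear gains with a log2 discount; not tied to the BEIR or AIR-Bench tooling):

import math

def ndcg_at_10(ranked_doc_ids: list[str], qrels: dict[str, int]) -> float:
    """nDCG@10 for one query: DCG of the top-10 ranking divided by the ideal DCG."""
    gains = [qrels.get(doc_id, 0) for doc_id in ranked_doc_ids[:10]]
    dcg = sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))
    ideal = sorted(qrels.values(), reverse=True)[:10]
    idcg = sum(g / math.log2(rank + 2) for rank, g in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg_at_10(["d2", "d1", "d9"], {"d1": 2, "d2": 1}))  # ≈ 0.86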
Results
Relabeling false negatives as true positives improves both E5 (base) and Qwen2.5-7B retrieval models by 0.7-1.4 nDCG@10 points on BEIR and by 1.7-1.8 points on zero-shot AIR-Bench evaluation. Similar gains are observed for rerankers fine-tuned on the relabeled data, such as Qwen2.5-3B on BEIR.
Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
Technical Specifications
Model Architecture and Objective
[More Information Needed]
Compute Infrastructure
[More Information Needed]
Hardware
[More Information Needed]
Software
[More Information Needed]
Citation
@misc{thakur2025fixing,
  title={Fixing Data That Hurts Performance: Cascading LLMs to Relabel Hard Negatives for Robust Information Retrieval},
  author={Nandan Thakur and Crystina Zhang and Xueguang Ma and Jimmy Lin},
  year={2025},
  eprint={2505.16967},
  archivePrefix={arXiv},
  primaryClass={cs.IR},
  url={https://arxiv.org/abs/2505.16967},
}