---
license: mit
language:
- hi
tags:
- life-sciences
- clinical
- biomedical
- bio
- medical
- biology
- synthetic
pretty_name: TransCorpus-bio-hi
size_categories:
- 10M<n<100M
---

# TransCorpus-bio-hi
TransCorpus-bio-hi is a large-scale, parallel biomedical corpus consisting of synthetic Hindi translations of PubMed abstracts. The dataset was created with the TransCorpus framework and is designed to enable high-quality Hindi biomedical language modeling and downstream NLP research.
## Dataset Details
- Source: PubMed abstracts (English)
- Target: Hindi (synthetic, machine-translated)
- Translation Model: M2M-100 (1.2B), run with the TransCorpus Toolkit (a minimal translation sketch follows this list)
- Size: ~21.6 million abstracts (21,567,136 rows), 34.6 GB of text
- Domain: Biomedical, clinical, life sciences
- Format: plain text, one abstract per line
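For reference, the corpus itself was produced with the TransCorpus Toolkit at scale; the following is only a minimal sketch of the underlying M2M-100 translation step, assuming the public `facebook/m2m100_1.2B` checkpoint and the standard Hugging Face `transformers` API rather than the toolkit:

```python
# Minimal sketch of the English->Hindi M2M-100 translation step.
# Assumes the public facebook/m2m100_1.2B checkpoint; the actual corpus
# was generated with the TransCorpus Toolkit, which scales this pipeline.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_1.2B")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_1.2B")

abstract = "Chamomile extracts show a dose-dependent antipeptic activity in vitro."
tokenizer.src_lang = "en"  # source side: English PubMed abstracts
encoded = tokenizer(abstract, return_tensors="pt")

# Force Hindi as the target language during generation
generated = model.generate(
    **encoded, forced_bos_token_id=tokenizer.get_lang_id("hi")
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```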
## Motivation
Hindi is a low-resource language for biomedical NLP, with limited availability of large, high-quality corpora. TransCorpus-bio-hi bridges this gap by leveraging state-of-the-art neural machine translation to generate a massive, high-quality synthetic corpus, enabling robust pretraining and evaluation of Hindi biomedical language models.
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("jknafou/TransCorpus-bio-hi", split="train")
print(dataset)
# Output:
# Dataset({
#     features: ['text'],
#     num_rows: 21567136
# })

print(dataset[0])
# {'text': '[ कैमोमाइल घटक / III पर जैव रसायन अध्ययन। (--)-अल्फा-बिसाबोलोल के एंटीपेप्टिक गतिविधि के बारे में in vitro अध्ययन (संपादक का अनुवाद)]. (--)-अल्फा-बिसाबोलोल में खुराक के आधार पर एक प्राथमिक एंटीपेप्टिक कार्रवाई होती है, जो पीएच-मान में परिवर्तन के कारण नहीं होती है। पेप्सिन की प्रोटीओलिस्टिक गतिविधि 1/0.5 के अनुपात में bisabolol जोड़ने के माध्यम से 50 प्रतिशत कम होती है। Bisabolol का एंटीपेप्टिक प्रभाव केवल सीधे संपर्क के मामले में होता है। सब्सट्रेट के साथ पिछले संपर्क के मामले में, अवरोधक प्रभाव खो जाता है। '}
```
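Because the full corpus is ~34.6 GB, it may be more convenient to stream it rather than download it up front. Streaming is a standard `datasets` feature; a minimal sketch:

```python
from datasets import load_dataset

# Stream examples instead of downloading the full ~34.6 GB corpus
dataset = load_dataset("jknafou/TransCorpus-bio-hi", split="train", streaming=True)

for i, example in enumerate(dataset):
    print(example["text"][:80])  # peek at the first few abstracts
    if i == 2:
        break
```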
## Benchmark Results from Our French Experiment

TransBERT-bio-fr, pretrained on TransCorpus-bio-fr, achieves state-of-the-art results on the French biomedical benchmark DrBenchmark, outperforming both general-domain and previous domain-specific models on classification, NER, POS, and STS tasks. See TransBERT-bio-fr for details.
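As a sketch of how such a pretrained model is consumed downstream: the repo id below is an assumption based on this dataset's namespace, not confirmed by this card, so check the linked model card for the actual identifier.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Hypothetical repo id, assumed from this dataset's namespace;
# verify against the TransBERT-bio-fr model card before use.
model_name = "jknafou/TransBERT-bio-fr"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
```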
## Why Synthetic Translation?

- Scalable: enables creation of large-scale corpora for any language with a strong MT system.
- Effective: supports state-of-the-art performance on downstream tasks.
- Accessible: makes domain-specific NLP feasible for any language.
## Citation

If you use this corpus, please cite:
```bibtex
@inproceedings{knafou-etal-2025-transbert,
    title = "{T}rans{BERT}: A Framework for Synthetic Translation in Domain-Specific Language Modeling",
    author = {Knafou, Julien and
      Mottin, Luc and
      Mottaz, Ana{\"i}s and
      Flament, Alexandre and
      Ruch, Patrick},
    editor = "Christodoulopoulos, Christos and
      Chakraborty, Tanmoy and
      Rose, Carolyn and
      Peng, Violet",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2025",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.findings-emnlp.1053/",
    doi = "10.18653/v1/2025.findings-emnlp.1053",
    pages = "19338--19354",
    ISBN = "979-8-89176-335-7",
    abstract = "The scarcity of non-English language data in specialized domains significantly limits the development of effective Natural Language Processing (NLP) tools. We present TransBERT, a novel framework for pre-training language models using exclusively synthetically translated text, and introduce TransCorpus, a scalable translation toolkit. Focusing on the life sciences domain in French, our approach demonstrates that state-of-the-art performance on various downstream tasks can be achieved solely by leveraging synthetically translated data. We release the TransCorpus toolkit, the TransCorpus-bio-fr corpus (36.4GB of French life sciences text), TransBERT-bio-fr, its associated pre-trained language model and reproducible code for both pre-training and fine-tuning. Our results highlight the viability of synthetic translation in a high-resource translation direction for building high-quality NLP resources in low-resource language/domain pairs."
}
```