Update README.md
README.md CHANGED

@@ -55,12 +55,26 @@ print(dataset[0])
 # Citation
 If you use this corpus, please cite:
 ```text
-@inproceedings{
-
-
-
-
-
-
+@inproceedings{knafou-etal-2025-transbert,
+    title = "{T}rans{BERT}: A Framework for Synthetic Translation in Domain-Specific Language Modeling",
+    author = {Knafou, Julien and
+        Mottin, Luc and
+        Mottaz, Ana{\"i}s and
+        Flament, Alexandre and
+        Ruch, Patrick},
+    editor = "Christodoulopoulos, Christos and
+        Chakraborty, Tanmoy and
+        Rose, Carolyn and
+        Peng, Violet",
+    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2025",
+    month = nov,
+    year = "2025",
+    address = "Suzhou, China",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2025.findings-emnlp.1053/",
+    doi = "10.18653/v1/2025.findings-emnlp.1053",
+    pages = "19338--19354",
+    ISBN = "979-8-89176-335-7",
+    abstract = "The scarcity of non-English language data in specialized domains significantly limits the development of effective Natural Language Processing (NLP) tools. We present TransBERT, a novel framework for pre-training language models using exclusively synthetically translated text, and introduce TransCorpus, a scalable translation toolkit. Focusing on the life sciences domain in French, our approach demonstrates that state-of-the-art performance on various downstream tasks can be achieved solely by leveraging synthetically translated data. We release the TransCorpus toolkit, the TransCorpus-bio-fr corpus (36.4GB of French life sciences text), TransBERT-bio-fr, its associated pre-trained language model and reproducible code for both pre-training and fine-tuning. Our results highlight the viability of synthetic translation in a high-resource translation direction for building high-quality NLP resources in low-resource language/domain pairs."
 }
 ```
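For context, the hunk header above anchors this citation section just below a usage snippet ending in `print(dataset[0])`. A minimal sketch of what that loading step typically looks like with the Hugging Face `datasets` library follows; the repository ID used here is a hypothetical placeholder, not confirmed by this diff, so check the dataset page for the real one.

```python
from datasets import load_dataset

# Stream the corpus rather than downloading all ~36.4GB up front.
# NOTE: the repo ID below is an assumed placeholder, not taken from this diff.
dataset = load_dataset(
    "transcorpus/transcorpus-bio-fr",  # hypothetical repository ID
    split="train",
    streaming=True,
)

# Inspect the first record. Streaming datasets are iterable rather than
# indexable, so this stands in for the README's `print(dataset[0])`.
print(next(iter(dataset)))
```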