## License and Use

The problems and their translations are sourced from [IOL](https://ioling.org/) and are copyrighted ©2003-2024 by the International Linguistics Olympiad. They may be used only for research and evaluation purposes, not for training.
## Citation

If you use this dataset, please cite it as follows:

```
@inproceedings{kocmi-etal-2025-findings-wmt25,
    title = "Findings of the {WMT}25 Multilingual Instruction Shared Task: Persistent Hurdles in Reasoning, Generation, and Evaluation",
    author = "Kocmi, Tom and
      Agrawal, Sweta and
      Artemova, Ekaterina and
      Avramidis, Eleftherios and
      Briakou, Eleftheria and
      Chen, Pinzhen and
      Fadaee, Marzieh and
      Freitag, Markus and
      Grundkiewicz, Roman and
      Hou, Yupeng and
      Koehn, Philipp and
      Kreutzer, Julia and
      Mansour, Saab and
      Perrella, Stefano and
      Proietti, Lorenzo and
      Riley, Parker and
      S{\'a}nchez, Eduardo and
      Schmidtova, Patricia and
      Shmatova, Mariya and
      Zouhar, Vil{\'e}m",
    editor = "Haddow, Barry and
      Kocmi, Tom and
      Koehn, Philipp and
      Monz, Christof",
    booktitle = "Proceedings of the Tenth Conference on Machine Translation",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.wmt-1.23/",
    doi = "10.18653/v1/2025.wmt-1.23",
    pages = "414--435",
    ISBN = "979-8-89176-341-8",
    abstract = "The WMT25 Multilingual Instruction Shared Task (MIST) introduces a benchmark to evaluate large language models (LLMs) across 30 languages. The benchmark covers five types of problems: machine translation, linguistic reasoning, open-ended generation, cross-lingual summarization, and LLM-as-a-judge. We provide automatic evaluation and collect human annotations, which highlight the limitations of automatic evaluation and allow further research into metric meta-evaluation. We run on our benchmark a diverse set of open- and closed-weight LLMs, providing a broad assessment of the multilingual capabilities of current LLMs. Results highlight substantial variation across sub-tasks and languages, revealing persistent challenges in reasoning, cross-lingual generation, and evaluation reliability. This work establishes a standardized framework for measuring future progress in multilingual LLM development."
}
```