---
license: cc-by-sa-4.0
language:
- zh
- cs
- nl
- en
- et
- fr
- de
- ja
- ko
- fa
- pt
- ru
- es
- sv
- uk
pretty_name: WMT25 MIST Multilingual Linguistic Reasoning (MuLR)
size_categories:
- 1K<n<10K
---

# WMT25 MIST Multilingual Linguistic Reasoning (MuLR)

## Scoring

For example, if a task's questions are worth 0.3 points each and two of them are answered correctly, the points assigned for this task will be 0.3*2=0.6. Per-task points are then summed up by language (a minimal aggregation sketch appears at the end of this card).

## License and Use

The problems and their translations are sourced from [IOL](https://ioling.org/) and are copyrighted ©2003-2024 by the International Linguistics Olympiad. They may be used only for research and evaluation, not for training.

## Citation

If you use this dataset, please cite it as follows:

```bibtex
@inproceedings{kocmi-etal-2025-findings-wmt25,
    title = "Findings of the {WMT}25 Multilingual Instruction Shared Task: Persistent Hurdles in Reasoning, Generation, and Evaluation",
    author = "Kocmi, Tom and Agrawal, Sweta and Artemova, Ekaterina and Avramidis, Eleftherios and Briakou, Eleftheria and Chen, Pinzhen and Fadaee, Marzieh and Freitag, Markus and Grundkiewicz, Roman and Hou, Yupeng and Koehn, Philipp and Kreutzer, Julia and Mansour, Saab and Perrella, Stefano and Proietti, Lorenzo and Riley, Parker and S{\'a}nchez, Eduardo and Schmidtova, Patricia and Shmatova, Mariya and Zouhar, Vil{\'e}m",
    editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof",
    booktitle = "Proceedings of the Tenth Conference on Machine Translation",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.wmt-1.23/",
    doi = "10.18653/v1/2025.wmt-1.23",
    pages = "414--435",
    ISBN = "979-8-89176-341-8",
    abstract = "The WMT25 Multilingual Instruction Shared Task (MIST) introduces a benchmark to evaluate large language models (LLMs) across 30 languages. The benchmark covers five types of problems: machine translation, linguistic reasoning, open-ended generation, cross-lingual summarization, and LLM-as-a-judge. We provide automatic evaluation and collect human annotations, which highlight the limitations of automatic evaluation and allow further research into metric meta-evaluation. We run on our benchmark a diverse set of open- and closed-weight LLMs, providing a broad assessment of the multilingual capabilities of current LLMs. Results highlight substantial variation across sub-tasks and languages, revealing persistent challenges in reasoning, cross-lingual generation, and evaluation reliability. This work establishes a standardized framework for measuring future progress in multilingual LLM development."
}
```
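
## Scoring Sketch

To make the per-language aggregation described above concrete, here is a minimal sketch. It assumes each record carries a per-question point weight, a count of correctly answered questions, and a language code; the field names are hypothetical, not this dataset's actual schema.

```python
from collections import defaultdict

# Hypothetical per-task results; field names are illustrative only,
# not this dataset's actual schema.
results = [
    {"language": "et", "question_weight": 0.3, "num_correct": 2},  # 0.3*2 = 0.6
    {"language": "et", "question_weight": 0.5, "num_correct": 1},  # 0.5*1 = 0.5
    {"language": "ja", "question_weight": 0.3, "num_correct": 3},  # 0.3*3 = 0.9
]

# Points for a task = per-question weight * number of correct answers;
# per-task points are then summed up by language.
scores_by_language = defaultdict(float)
for r in results:
    scores_by_language[r["language"]] += r["question_weight"] * r["num_correct"]

# Round to sidestep float noise in the printout.
print({lang: round(pts, 2) for lang, pts in scores_by_language.items()})
# {'et': 1.1, 'ja': 0.9}
```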