---
license: unknown
language:
- de
- en
task_categories:
- text-classification
---

# QT21 De-En MQM Task from the PEER Benchmark (Performance Evaluation of Edit Representations)

Description from the [benchmark paper](https://arxiv.org/abs/2004.09143):

> A subset of 1,800 examples of the De-En QT21 dataset, annotated with details about the edits performed, namely the reason why each edit was applied. Since the dataset contains a large number of edit labels, we select the classes that are present in at least 100 examples and generate a modified version of the dataset for our purposes. Examples where no post-edit has been performed are also ignored.

The dataset was originally published at https://doi.org/10.5281/zenodo.4478266.

## Citations

PEER Benchmark:

```bibtex
@article{marrese-taylor-et-al-2021,
  title   = {Variational Inference for Learning Representations of Natural Language Edits},
  volume  = {35},
  url     = {https://ojs.aaai.org/index.php/AAAI/article/view/17598},
  DOI     = {10.1609/aaai.v35i15.17598},
  number  = {15},
  journal = {Proceedings of the AAAI Conference on Artificial Intelligence},
  author  = {Marrese-Taylor, Edison and Reid, Machel and Matsuo, Yutaka},
  year    = {2021},
  month   = {May},
  pages   = {13552--13560}
}
```

Original data source:

```bibtex
@inproceedings{burchardt-2013-multidimensional,
  title     = "Multidimensional quality metrics: a flexible system for assessing translation quality",
  author    = "Lommel, Arle Richard and Burchardt, Aljoscha and Uszkoreit, Hans",
  booktitle = "Proceedings of Translating and the Computer 35",
  month     = nov # " 28-29",
  year      = "2013",
  address   = "London, UK",
  publisher = "Aslib",
  url       = "https://aclanthology.org/2013.tc-1.6/"
}
```