Update README.md
All tests were performed on Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and nVIDIA …
To evaluate model quality, we fine-tuned DistilRuBERT-small on classification, NER and question answering tasks. Scores and archives with fine-tuned models can be found in [DeepPavlov docs](http://docs.deeppavlov.ai/en/master/features/overview.html#models).
# Citation

If you found the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:

```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
  doi = {10.48550/ARXIV.2205.02340},
  url = {https://arxiv.org/abs/2205.02340},
  author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
  keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences},
  title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\).

\[2\]: Shavrina T., Shapovalova O. \(2017\) To the Methodology of Corpus Construction for Machine Learning: «Taiga» Syntax Tree Corpus and Parser. In Proc. of “CORPORA2017”, International Conference, Saint Petersburg, 2017.