---
base_model:
- facebook/opt-2.7B
datasets:
- databricks/databricks-dolly-15k
language:
- en
license: apache-2.0
metrics:
- rouge
pipeline_tag: text-generation
library_name: transformers
---

# SeqKD-OPT-2.7B

[paper](https://arxiv.org/abs/2306.08543) | [code](https://github.com/microsoft/LMOps/tree/main/minillm)

**SeqKD-OPT-2.7B** is an OPT-2.7B model distilled from [OPT-13B](https://huggingface.co/MiniLLM/teacher-OPT-13B) on [databricks-dolly-15k](https://huggingface.co/datasets/aisquared/databricks-dolly-15k) with sequence-level forward KLD, i.e., by fine-tuning the student on responses generated by the teacher. It serves as a baseline for [MiniLLM](https://huggingface.co/MiniLLM/MiniLLM-OPT-2.7B).

## Other Baselines

+ [SFT w/o KD](https://huggingface.co/MiniLLM/SFT-OPT-2.7B)
+ [KD](https://huggingface.co/MiniLLM/KD-OPT-2.7B)

## Citation

```
@inproceedings{minillm,
  title={MiniLLM: Knowledge Distillation of Large Language Models},
  author={Gu, Yuxian and Dong, Li and Wei, Furu and Huang, Minlie},
  booktitle={Proceedings of ICLR},
  year={2024}
}
```
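
## Usage

A minimal generation sketch with `transformers` is shown below. The Hub checkpoint id `MiniLLM/SeqKD-OPT-2.7B` and the instruction/response prompt format are assumptions for illustration; adjust them to match this repository and the prompt template used in the MiniLLM codebase.

```python
# Minimal sketch: load the distilled student and generate a response.
# Assumptions: the checkpoint id "MiniLLM/SeqKD-OPT-2.7B" and the prompt
# format below are illustrative, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/SeqKD-OPT-2.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Instruction: Explain knowledge distillation in one sentence.\nResponse:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```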