---
license: apache-2.0
datasets:
  - datajuicer/alpaca-cot-zh-refined-by-data-juicer
---

## News

Our first data-centric LLM competition has begun! Please visit the competition's official websites, FT-Data Ranker (1B Track and 7B Track), for more information.

## Introduction

This is a reference LLM from Data-Juicer.

The model architecture is LLaMA2-7B, and we built it upon a pre-trained Chinese checkpoint from FlagAlpha. The model is fine-tuned on 52k Chinese chat samples from Data-Juicer's refined Alpaca-CoT data. In GPT-4 evaluation, it beats a LLaMA2-7B model fine-tuned on 543k Belle samples.

For more details, please refer to our paper.
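A minimal inference sketch with the `transformers` library is below. The repository id passed to `generate` and the Alpaca-style prompt template are assumptions for illustration; check this repository's files for the exact model id and the prompt format used during fine-tuning.

```python
def build_prompt(instruction: str) -> str:
    # Alpaca-style prompt template -- an assumption; verify against the
    # template actually used for this model's fine-tuning data.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def generate(model_id: str, instruction: str, max_new_tokens: int = 256) -> str:
    # Imported lazily so the prompt helper above works without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens so only the newly generated response is returned.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

For example, `generate("<this-model-repo-id>", "用一句话介绍大语言模型。")` (the repo id placeholder must be replaced with this model's actual Hugging Face id) returns a Chinese response to the instruction.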
