This model uses the ChatML prompt format:

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
Recommended inference parameters:

do_sample: true
temperature: 0.65
top_p: 0.55
top_k: 35
repetition_penalty: 1.176
Usage example with the Transformers text-generation pipeline:

from transformers import pipeline

generate = pipeline("text-generation", "Felladrin/Minueza-32M-Chat")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant who answers the user's questions with details and curiosity.",
    },
    {
        "role": "user",
        "content": "What are some potential applications for quantum computing?",
    },
]

# Render the messages into the model's ChatML prompt format.
prompt = generate.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate with the recommended inference parameters.
output = generate(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.65,
    top_k=35,
    top_p=0.55,
    repetition_penalty=1.176,
)

print(output[0]["generated_text"])
This model was trained with the SFT Trainer and DPO Trainer from TRL over several sessions, using the following settings:
For Supervised Fine-Tuning:
| Hyperparameter | Value |
|---|---|
| learning_rate | 2e-5 |
| total_train_batch_size | 24 |
| max_seq_length | 2048 |
| weight_decay | 0 |
| warmup_ratio | 0.02 |
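The original training scripts are not included in this card. As a rough, hypothetical sketch, the settings above could be passed to TRL's SFTTrainer along these lines; the toy dataset, output directory, and the per-device/accumulation split of the total batch size of 24 are assumptions, and exact keyword names vary between TRL versions:

```python
# Hypothetical sketch only: wiring the SFT hyperparameters above into TRL's SFTTrainer.
# The toy dataset, output_dir, and the batch-size split are placeholders, not the
# author's original configuration.
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Tiny placeholder dataset; real training data would be full conversations in ChatML format.
train_dataset = Dataset.from_dict({
    "text": [
        "<|im_start|>user\nHi!<|im_end|>\n<|im_start|>assistant\nHello! How can I help?<|im_end|>",
    ]
})

config = SFTConfig(
    output_dir="minueza-32m-chat-sft",
    dataset_text_field="text",
    learning_rate=2e-5,
    per_device_train_batch_size=8,    # 8 x 3 accumulation steps = total batch size 24 (assumed split)
    gradient_accumulation_steps=3,
    max_seq_length=2048,
    weight_decay=0.0,
    warmup_ratio=0.02,
)

trainer = SFTTrainer(
    model="Felladrin/Minueza-32M-Base",
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```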
For Direct Preference Optimization:
| Hyperparameter | Value |
|---|---|
| learning_rate | 7.5e-7 |
| total_train_batch_size | 6 |
| max_length | 2048 |
| max_prompt_length | 1536 |
| max_steps | 200 |
| weight_decay | 0 |
| warmup_ratio | 0.02 |
| beta | 0.1 |
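Similarly, a minimal, hypothetical sketch of the DPO stage with TRL's DPOTrainer is shown below; the toy preference data, output directory, starting checkpoint, and batch-size split are assumptions, and argument names (e.g. the tokenizer keyword) differ across TRL versions:

```python
# Hypothetical sketch only: wiring the DPO hyperparameters above into TRL's DPOTrainer.
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Felladrin/Minueza-32M-Chat"  # assumed starting checkpoint for the DPO stage
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Tiny placeholder preference dataset in the prompt/chosen/rejected format DPOTrainer expects.
train_dataset = Dataset.from_dict({
    "prompt": ["<|im_start|>user\nWhat is 2 + 2?<|im_end|>\n<|im_start|>assistant\n"],
    "chosen": ["2 + 2 equals 4.<|im_end|>"],
    "rejected": ["I am not sure.<|im_end|>"],
})

config = DPOConfig(
    output_dir="minueza-32m-chat-dpo",
    learning_rate=7.5e-7,
    per_device_train_batch_size=3,    # 3 x 2 accumulation steps = total batch size 6 (assumed split)
    gradient_accumulation_steps=2,
    max_length=2048,
    max_prompt_length=1536,
    max_steps=200,
    weight_decay=0.0,
    warmup_ratio=0.02,
    beta=0.1,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

Here beta is the standard DPO temperature: smaller values keep the policy closer to the reference model, larger values let the preference signal pull it further away.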
Open LLM Leaderboard evaluation results (detailed results can be found here):
| Metric | Value |
|---|---|
| Avg. | 28.49 |
| AI2 Reasoning Challenge (25-Shot) | 20.39 |
| HellaSwag (10-Shot) | 26.54 |
| MMLU (5-Shot) | 25.75 |
| TruthfulQA (0-shot) | 47.27 |
| Winogrande (5-shot) | 50.99 |
| GSM8k (5-shot) | 0.00 |
Base model: Felladrin/Minueza-32M-Base