Built with Axolotl

See axolotl config (axolotl version: 0.10.0):

base_model: Qwen/Qwen2.5-1.5B-Instruct
tokenizer_type: AutoTokenizer

datasets:
  - path: cfierro/pv-prompts-non-sycophantic_Qwen2.5-1.5B-Instruct
    type: chat_template
dataset_prepared_path: /workspace/axolotl-datasets/Qwen2.5-1.5B-Instruct/pv-prompts-non-sycophantic
val_set_size: 0.05
output_dir: /workspace/axolotl-outputs/personality_ds_updated/Qwen2.5-1.5B-Instruct-bias-pv-prompts-non-sycophantic_1e-4

sequence_len: 4096
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: true

unfrozen_parameters:
  - "model.layers.[0-9]+.mlp.down_proj.bias"

plugins:
  - axolotl_plugin_models_with_mlp_bias.MLPBiasPlugin

wandb_project: weight-diff-ft
wandb_entity: cfierro
wandb_watch: all
wandb_name: Qwen2.5-1.5B-Instruct-bias-pv-prompts-non-sycophantic_1e-4
wandb_log_model: "false"

gradient_accumulation_steps: 4
micro_batch_size: 2
max_steps: 100
optimizer: adamw_bnb_8bit
lr_scheduler: linear
learning_rate: 1e-04

bf16: auto
tf32: false

gradient_checkpointing: true
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

warmup_steps: 5
early_stopping_patience: 2
eval_steps: 20 
save_steps: 20
save_total_limit: 1
load_best_model_at_end: true
weight_decay: 0.01
special_tokens:
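
The unfrozen_parameters pattern together with the MLPBiasPlugin restricts training to the down_proj bias terms in each MLP block. Below is a minimal sketch of what this amounts to in plain PyTorch, assuming the plugin materializes the bias terms (Qwen2.5's MLP projections are built with bias=False) and that the regex then gates requires_grad; the plugin's actual implementation may differ.

```python
import re
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-1.5B-Instruct", torch_dtype=torch.bfloat16
)

# Qwen2.5's MLP linears ship without bias, so a down_proj bias has to be
# materialized before it can be trained (presumably what MLPBiasPlugin does).
for layer in model.model.layers:
    down_proj = layer.mlp.down_proj
    if down_proj.bias is None:
        down_proj.bias = nn.Parameter(
            torch.zeros(down_proj.out_features, dtype=down_proj.weight.dtype)
        )

# unfrozen_parameters: freeze everything, then re-enable gradients only for
# parameter names matching the configured pattern.
patterns = [re.compile(r"model\.layers\.[0-9]+\.mlp\.down_proj\.bias")]
for name, param in model.named_parameters():
    param.requires_grad = any(p.fullmatch(name) for p in patterns)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```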

cfierro/Qwen2.5-1.5B-Instruct-bias-pv-prompts-non-sycophantic_1e-4

This model is a fine-tuned version of Qwen/Qwen2.5-1.5B-Instruct on the cfierro/pv-prompts-non-sycophantic_Qwen2.5-1.5B-Instruct dataset. It achieves the following results on the evaluation set:

  • Loss: 1.6521
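
A minimal loading sketch, assuming the checkpoint loads as a standard Qwen2 causal LM (if the added MLP bias terms require the MLPBiasPlugin's model patch, that patch would need to be applied first):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cfierro/Qwen2.5-1.5B-Instruct-bias-pv-prompts-non-sycophantic_1e-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```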

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 5
  • training_steps: 100
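
The total_train_batch_size above follows from micro_batch_size × gradient_accumulation_steps × world size = 2 × 4 × 1 = 8. As a sketch of the equivalent optimizer and schedule setup in plain bitsandbytes/transformers (Axolotl wires this up internally; the stand-in parameter list below is hypothetical):

```python
import torch
import bitsandbytes as bnb
from transformers import get_linear_schedule_with_warmup

# Stand-in for the unfrozen down_proj bias parameters (see config above).
params = [torch.nn.Parameter(torch.zeros(1536))]

# 8-bit AdamW as configured: lr=1e-4, betas=(0.9, 0.999), eps=1e-8,
# weight_decay=0.01.
optimizer = bnb.optim.AdamW8bit(
    params, lr=1e-4, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.01
)

# Linear decay with 5 warmup steps over 100 total training steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=5, num_training_steps=100
)
```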

Training results

Training Loss | Epoch  | Step | Validation Loss
------------- | ------ | ---- | ---------------
No log        | 0      | 0    | 1.6888
1.4548        | 0.3239 | 20   | 1.6636
1.4903        | 0.6478 | 40   | 1.6567
1.6729        | 0.9717 | 60   | 1.6534
1.6794        | 1.2915 | 80   | 1.6535
1.5321        | 1.6154 | 100  | 1.6521

Framework versions

  • Transformers 4.57.1
  • PyTorch 2.8.0+cu128
  • Datasets 4.2.0
  • Tokenizers 0.22.1