MedQuAD LoRA r=4

Configuration

  • Base model: mistralai/Mistral-7B-Instruct-v0.3
  • LoRA rank (r): 4
  • Target modules: q_proj, k_proj, v_proj
  • Quantization: 4-bit NF4
  • Early stopping: patience=3 (a config sketch follows this list)
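
The card does not include the full training script; the following is a minimal sketch of how this configuration maps onto peft and transformers. Values not listed above (lora_alpha, dropout, compute dtype) are assumptions, not facts from the card.

import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization, as listed above; compute dtype is an assumption
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type='nf4',
    bnb_4bit_compute_dtype=torch.bfloat16,
)

lora = LoraConfig(
    r=4,                                            # LoRA rank from the card
    target_modules=['q_proj', 'k_proj', 'v_proj'],  # modules from the card
    lora_alpha=8,                                   # assumption: not stated in the card
    lora_dropout=0.05,                              # assumption: not stated in the card
    task_type='CAUSAL_LM',
)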

Training

Training logs (recorded manually; epoch values are estimates):

Step    Epoch    Training Loss    Validation Loss
100     0.046    0.828600         0.803454
200     0.093    0.777600         0.771947
300     0.139    0.769300         0.762315
400     0.186    0.743100         0.748655
500     0.232    0.735500         0.736502
600     0.279    0.747600         0.731061
700     0.325    0.724700         0.712283
800     0.371    0.731100         0.711445
900     0.418    0.714400         0.695680
1000    0.464    0.696800         0.691712
1100    0.511    0.691600         0.686753
1200    0.557    0.662500         0.675322
1300    0.604    0.665600         0.674704
1400    0.650    0.669800         0.665284
1500    0.696    0.615200         0.659309
1600    0.743    0.610000         0.657043
1700    0.789    0.617000         0.651174
1800    0.836    0.620500         0.647198
1900    0.882    0.616600         0.645843
2000    0.929    0.607800         0.643516
2100    0.975    0.612100         0.641554
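
Validation loss was still falling at step 2100 (0.6416), near the end of the first epoch, and evaluation ran every 100 steps. Assuming the transformers Trainer was used, early stopping with patience=3 plus best-checkpoint selection (the "-best" in the repo name) would typically be wired up as below; output_dir and the dataset names are placeholders, and all argument values other than the patience and eval cadence are assumptions.

from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir='medquad-lora-r4',      # hypothetical path
    eval_strategy='steps',
    eval_steps=100,                    # matches the log cadence above
    save_steps=100,
    load_best_model_at_end=True,       # keep the best checkpoint
    metric_for_best_model='eval_loss',
)
trainer = Trainer(
    model=model,                       # the PEFT-wrapped base model
    args=args,
    train_dataset=train_ds,            # placeholder dataset names
    eval_dataset=eval_ds,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)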

Usage

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the base model with the 4-bit NF4 quantization used for training
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type='nf4')
base = AutoModelForCausalLM.from_pretrained('mistralai/Mistral-7B-Instruct-v0.3', quantization_config=bnb, device_map='auto')
model = PeftModel.from_pretrained(base, 'CHF0101/medquad-lora-r4-best')
tokenizer = AutoTokenizer.from_pretrained('mistralai/Mistral-7B-Instruct-v0.3')
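
Once loaded, the adapter can be queried like any Mistral-Instruct chat model. A minimal inference sketch; the question below is only an illustrative example.

# Build a chat prompt with the model's chat template and generate an answer
messages = [{'role': 'user', 'content': 'What are the symptoms of glaucoma?'}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors='pt').to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))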