---
language:
  - en
license: apache-2.0
tags:
  - text-generation
  - llama
  - qlora
  - peft
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
datasets:
  - HuggingFaceH4/ultrachat_200k
---

# hoangtung386/TinyLlama-1.1B-qlora

A fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T), trained with QLoRA on the HuggingFaceH4/ultrachat_200k dataset.

## Model Details

- **Base Model:** TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
- **Method:** QLoRA (Quantized Low-Rank Adaptation); see the quantization sketch below
- **Dataset:** HuggingFaceH4/ultrachat_200k
- **Training Samples:** 5,000
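
QLoRA trains LoRA adapters on top of a base model whose frozen weights are quantized to 4-bit. A minimal sketch of how the base model is typically loaded for this, assuming the common NF4/bf16 defaults (the exact quantization settings are not recorded on this card):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize the frozen base weights to 4-bit
    bnb_4bit_quant_type="nf4",              # assumed: NF4, the usual QLoRA choice
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed: bf16 compute dtype
    bnb_4bit_use_double_quant=True,         # assumed: nested quantization
)

base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
    quantization_config=bnb_config,
    device_map="auto",
)
```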

## Training Configuration

### LoRA Config

```yaml
r: 64
lora_alpha: 32
lora_dropout: 0.1
target_modules: [q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj]
```
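
The same hyperparameters expressed as a `peft` `LoraConfig` (a sketch; `bias` and `task_type` are assumed defaults not recorded on this card):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                   # rank of the low-rank update matrices
    lora_alpha=32,          # scaling factor (effective scale alpha/r = 0.5)
    lora_dropout=0.1,       # dropout on the LoRA branch during training
    target_modules=[        # every attention and MLP projection in the Llama block
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",            # assumed: PEFT default
    task_type="CAUSAL_LM",  # causal language modeling
)
```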

### Training Args

```yaml
learning_rate: 0.0002
epochs: 3
batch_size: 2
gradient_accumulation: 4
optimizer: paged_adamw_32bit  # OptimizerNames.PAGED_ADAMW
scheduler: cosine             # SchedulerType.COSINE
```
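
These map onto `transformers` `TrainingArguments` roughly as follows (a sketch; `output_dir` is a placeholder, and warmup/logging settings are not documented here):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tinyllama-1.1b-qlora",  # hypothetical output directory
    learning_rate=2e-4,
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,      # effective batch size of 2 * 4 = 8 per device
    optim="paged_adamw_32bit",          # paged AdamW via bitsandbytes
    lr_scheduler_type="cosine",
)
```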

## Training Results

| Metric      | Value                 |
|-------------|-----------------------|
| Loss        | 1.2668                |
| Runtime     | 7698.13 s (≈ 2.14 h)  |
| Samples/sec | 1.95                  |
| Steps       | N/A                   |

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hoangtung386/TinyLlama-1.1B-qlora")
model = AutoModelForCausalLM.from_pretrained("hoangtung386/TinyLlama-1.1B-qlora")

# TinyLlama's Zephyr-style chat format: <|user|> ... </s> <|assistant|>
prompt = "<|user|>\nWhat is AI?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
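
If the repository contains only the LoRA adapter rather than merged weights, it can instead be loaded with `peft`, which resolves the base model and attaches the adapter in one step (a sketch; the card does not state whether the uploaded weights are merged):

```python
from peft import AutoPeftModelForCausalLM

# Downloads the adapter, loads its base model, and applies the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained("hoangtung386/TinyLlama-1.1B-qlora")
```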

## Framework Versions

- **Transformers:** 4.41.2
- **PyTorch:** 2.5.1+cu124
- **PEFT:** 0.11.1
- **TRL:** 0.9.4