---
language:
- en
license: apache-2.0
tags:
- text-generation
- llama
- qlora
- peft
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
datasets:
- HuggingFaceH4/ultrachat_200k
---
# hoangtung386/TinyLlama-1.1B-qlora
Fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T), trained with QLoRA on the HuggingFaceH4/ultrachat_200k dataset.
## Model Details
- **Base Model:** TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
- **Method:** QLoRA (Quantized Low-Rank Adaptation)
- **Dataset:** HuggingFaceH4/ultrachat_200k
- **Training Samples:** 5,000
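QLoRA keeps the base model frozen in 4-bit precision and trains only low-rank adapter weights on top of it. The snippet below is a minimal sketch of how the base model is typically prepared for this kind of training; the exact quantization settings (NF4, bfloat16 compute, double quantization) are assumptions, not taken from the original training script.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit quantization settings commonly used for QLoRA (assumed, not from the original script)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# Load the frozen 4-bit base model; LoRA adapters are trained on top of it
base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
)
```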
## Training Configuration
### LoRA Config
```yaml
r: 64
lora_alpha: 32
lora_dropout: 0.1
target_modules: [q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj]
```
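The same configuration expressed with `peft.LoraConfig` (a sketch; `bias` and `task_type` are assumed defaults for causal-LM LoRA training):

```python
from peft import LoraConfig

# LoRA adapter configuration matching the values listed above
lora_config = LoraConfig(
    r=64,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",            # assumed default
    task_type="CAUSAL_LM",
)
```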
### Training Args
```yaml
learning_rate: 0.0002
epochs: 3
batch_size: 2
gradient_accumulation: 4
optimizer: paged_adamw_32bit
scheduler: cosine
```
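A sketch of how these values map onto `transformers.TrainingArguments`; `output_dir` is a placeholder and any argument not listed above is left at its default:

```python
from transformers import TrainingArguments

# Hyperparameters from the table above; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="tinyllama-1.1b-qlora",   # placeholder
    learning_rate=2e-4,
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,       # effective batch size of 8
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
)
```

These arguments would typically be passed to `trl`'s `SFTTrainer` together with the LoRA config and the ultrachat_200k training split.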
## Training Results
| Metric | Value |
|--------|-------|
| Training loss | 1.2668 |
| Runtime | 7,698.13 s |
| Samples / second | 1.95 |
| Steps | N/A |
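The throughput is consistent with the setup above: 5,000 samples × 3 epochs = 15,000 samples processed in roughly 7,698 s, i.e. about 1.95 samples per second.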
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("hoangtung386/TinyLlama-1.1B-qlora")
model = AutoModelForCausalLM.from_pretrained("hoangtung386/TinyLlama-1.1B-qlora")

# Prompt format used in training: user turn terminated by </s>, then the assistant tag
prompt = "<|user|>\nWhat is AI?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
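If this repository contains only the LoRA adapter weights, they can also be loaded explicitly on top of the base model with `peft` (a sketch; assumes `peft` is installed):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the adapter from this repository
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
)
model = PeftModel.from_pretrained(base, "hoangtung386/TinyLlama-1.1B-qlora")
tokenizer = AutoTokenizer.from_pretrained("hoangtung386/TinyLlama-1.1B-qlora")

# Optionally merge the adapter into the base weights for standalone inference
model = model.merge_and_unload()
```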
## Framework Versions
- Transformers: 4.41.2
- PyTorch: 2.5.1+cu124
- PEFT: 0.11.1
- TRL: 0.9.4