# Qwen2-1.5B-Instruct – Cybersecurity QA (SFT)
Fine-tuned on Kaggle (2×T4) using SFT for cybersecurity Q&A.
## Model Details
- Base: Qwen/Qwen2-1.5B-Instruct
- Method: SFT with all weights frozen except the last 2 transformer blocks and the lm_head (see the freezing sketch after this list)
- Max length: 1024
- Early stopping: yes
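
A minimal sketch of this freezing strategy, assuming the standard Qwen2 module layout in `transformers` (`model.model.layers` and `model.lm_head`); the actual training script is not included in this card.

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B-Instruct")

# Freeze every parameter first.
for param in model.parameters():
    param.requires_grad = False

# Unfreeze the last 2 transformer blocks and the LM head.
for block in model.model.layers[-2:]:
    for param in block.parameters():
        param.requires_grad = True
for param in model.lm_head.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable params: {trainable:,} / {total:,}")
```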
## Validation (greedy decoding, no sampling)
| Metric | Score |
|---|---|
| BLEU-4 | 2.10 |
| ROUGE-L | 12.86 |
| F1 | 16.40 |
| EM | 0.00 |
| Train Time (s) | 84.1 |
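
The sketch below shows one common way to score greedy predictions against reference answers, using the `evaluate` library for BLEU/ROUGE and a SQuAD-style token-overlap F1/exact match; the helper function and example strings are illustrative, not the exact evaluation script used for the table above.

```python
import evaluate

bleu = evaluate.load("bleu")    # corpus BLEU (max_order=4 by default)
rouge = evaluate.load("rouge")  # reports rougeL among other scores

def f1_em(pred: str, ref: str):
    """SQuAD-style token-overlap F1 and exact match for one example."""
    p, r = pred.lower().split(), ref.lower().split()
    em = float(pred.strip().lower() == ref.strip().lower())
    common = sum(min(p.count(t), r.count(t)) for t in set(p))
    if common == 0:
        return 0.0, em
    precision, recall = common / len(p), common / len(r)
    return 2 * precision * recall / (precision + recall), em

preds = ["SQL injection lets an attacker run arbitrary SQL queries."]
refs = ["SQL injection allows an attacker to execute arbitrary SQL queries."]

print(bleu.compute(predictions=preds, references=[[r] for r in refs])["bleu"])
print(rouge.compute(predictions=preds, references=refs)["rougeL"])
print(f1_em(preds[0], refs[0]))
```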
## How to Use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("nhonhoccode/qwen2-1-5b-instruct-cybersecqa-sft-freeze2-20251028-1017")
mdl = AutoModelForCausalLM.from_pretrained("nhonhoccode/qwen2-1-5b-instruct-cybersecqa-sft-freeze2-20251028-1017")

# Build a chat-formatted prompt with the model's chat template.
prompt = tok.apply_chat_template(
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain SQL injection in one paragraph."},
    ],
    tokenize=False,
    add_generation_prompt=True,
)

# Greedy generation (matches the validation setup above).
ids = tok(prompt, return_tensors="pt").input_ids
out = mdl.generate(ids, max_new_tokens=128)

# Decode only the newly generated tokens.
print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
```
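
On a single T4 or similar GPU, loading the model in half precision is usually sufficient for a 1.5B model; this is an optional variant of the loading step above, and `device_map="auto"` requires `accelerate` to be installed.

```python
import torch

mdl = AutoModelForCausalLM.from_pretrained(
    "nhonhoccode/qwen2-1-5b-instruct-cybersecqa-sft-freeze2-20251028-1017",
    torch_dtype=torch.float16,
    device_map="auto",
)
ids = tok(prompt, return_tensors="pt").input_ids.to(mdl.device)
```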
## Training Summary
- Trainable params: 326,969,344 / 1,543,714,304 (≈21%)
- Optimized for T4 GPUs: fp32 or fp16 AMP (depending on the notebook), gradient checkpointing, and step-based training (see the sketch below).
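
A minimal sketch of this training configuration using `transformers.Trainer` with step-based training, fp16 AMP, gradient checkpointing, and early stopping; the step counts, batch size, and datasets below are illustrative assumptions, not the values actually used.

```python
from transformers import TrainingArguments, Trainer, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="qwen2-cybersecqa-sft",
    max_steps=500,                    # train-by-steps (value is illustrative)
    per_device_train_batch_size=2,    # illustrative; sized for a T4
    gradient_checkpointing=True,
    fp16=True,                        # fp16 AMP; set False for fp32
    eval_strategy="steps",
    eval_steps=50,
    save_steps=50,
    load_best_model_at_end=True,      # required for early stopping
    metric_for_best_model="eval_loss",
)

trainer = Trainer(
    model=model,                      # model with frozen layers, as in the sketch above
    args=args,
    train_dataset=train_ds,           # tokenized datasets assumed prepared elsewhere
    eval_dataset=val_ds,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```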
## Data
- Dataset: zobayer0x01/cybersecurity-qa
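
A minimal sketch of loading the dataset and converting rows into the chat format used above; the split name and the `question`/`answer` column names are assumptions about the dataset schema, not confirmed here.

```python
from datasets import load_dataset

ds = load_dataset("zobayer0x01/cybersecurity-qa")

def to_chat(example):
    # Column names are assumed; adjust to the actual schema.
    return {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": example["question"]},
            {"role": "assistant", "content": example["answer"]},
        ]
    }

chat_ds = ds["train"].map(to_chat)  # "train" split assumed to exist
```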