Qwen2-1.5B-Instruct – Cybersecurity QA (SFT)

Fine-tuned on Kaggle (2×T4 GPUs) with supervised fine-tuning (SFT) for cybersecurity question answering.

Model Details

Validation (greedy, no sampling)

Metric          Score
BLEU-4           2.10
ROUGE-L         12.86
F1              16.40
EM               0.00
Train Time (s)   84.1
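
F1 and EM here are presumably token-level F1 and exact match against the reference answers, as is standard for QA evaluation. A minimal sketch of that computation (whitespace tokenization and lowercasing are assumptions; the actual evaluation script may normalize differently):

from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1, SQuAD-style (whitespace tokenization assumed)."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)  # min counts per token
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def exact_match(prediction: str, reference: str) -> float:
    """1.0 iff the normalized strings are identical."""
    return float(prediction.strip().lower() == reference.strip().lower())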

How to Use

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned checkpoint from the Hub
tok = AutoTokenizer.from_pretrained("nhonhoccode/qwen2-1-5b-instruct-cybersecqa-sft-freeze2-20251028-1017")
mdl = AutoModelForCausalLM.from_pretrained("nhonhoccode/qwen2-1-5b-instruct-cybersecqa-sft-freeze2-20251028-1017")

# Build the prompt with the Qwen2 chat template
prompt = tok.apply_chat_template([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user",   "content": "Explain SQL injection in one paragraph."},
], tokenize=False, add_generation_prompt=True)

# Greedy decoding (matches the validation setting above);
# slice off the prompt tokens before decoding the answer
ids = tok(prompt, return_tensors="pt").input_ids
out = mdl.generate(ids, max_new_tokens=128)
print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))
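
The snippet above runs on CPU by default. On a GPU, half-precision loading keeps memory use modest, and sampled decoding is an option (the validation scores above were produced with greedy decoding). A sketch, assuming the accelerate package is installed for device_map="auto":

import torch
from transformers import AutoModelForCausalLM

mdl = AutoModelForCausalLM.from_pretrained(
    "nhonhoccode/qwen2-1-5b-instruct-cybersecqa-sft-freeze2-20251028-1017",
    torch_dtype=torch.float16,  # fp16 fits comfortably on a single T4
    device_map="auto",          # requires the accelerate package
)
ids = tok(prompt, return_tensors="pt").input_ids.to(mdl.device)
out = mdl.generate(ids, max_new_tokens=128, do_sample=True, temperature=0.7, top_p=0.9)
print(tok.decode(out[0][ids.shape[-1]:], skip_special_tokens=True))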

Training Summary

  • Trainable params: 326,969,344 of 1,543,714,304 (≈21% of the model; the rest was frozen).
  • Optimized for T4 GPUs: fp32 or fp16 AMP (depending on the notebook), gradient checkpointing, and step-based training. A configuration sketch follows below.
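
The "freeze2" tag and the trainable-parameter count above suggest that most of the base model was frozen and only a subset of weights was trained. A minimal sketch of that kind of setup with the transformers Trainer; the layer split, step count, and batch sizes here are illustrative assumptions, not the actual training configuration:

from transformers import Trainer, TrainingArguments

# Freeze everything, then unfreeze the last few decoder blocks.
# (Illustrative split; the actual freeze boundary is not documented here.)
for p in mdl.parameters():
    p.requires_grad = False
for block in mdl.model.layers[-4:]:
    for p in block.parameters():
        p.requires_grad = True

mdl.enable_input_require_grads()  # needed for gradient checkpointing with frozen embeddings

args = TrainingArguments(
    output_dir="qwen2-cybersecqa-sft",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    max_steps=200,                # train-by-steps rather than by epochs
    fp16=True,                    # AMP; T4 GPUs have no bf16 support
    gradient_checkpointing=True,  # trade recompute for memory
    logging_steps=20,
)

trainer = Trainer(
    model=mdl,
    args=args,
    train_dataset=train_ds,  # hypothetical tokenized cybersecurity QA dataset
)
trainer.train()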

