qwen3-0-6b: Cybersecurity QA (FULL)

Fine-tuned on Kaggle using full-parameter fine-tuning (FULL).

Model Summary

  • Base: unsloth/Qwen3-0.6B
  • Trainable params: 596,049,920 / total 596,049,920
  • Train wall time: 30,805.9 s (≈ 8.6 h)
  • Files: pytorch_model.safetensors + config.json + tokenizer files

Data

  • Dataset: zobayer0x01/cybersecurity-qa
  • Samples: total=42484, train=38235, val=2000
  • Prompting: chat template with a fixed system prompt (formatting sketched below):
    "You are a helpful assistant specialized in cybersecurity Q&A."
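As a rough illustration, each (question, answer) pair is rendered with the tokenizer's chat template. This is a minimal sketch, not the exact training script; the "question"/"answer" column names are an assumption, so check the dataset card for the actual fields.

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("unsloth/Qwen3-0.6B")
SYSTEM = "You are a helpful assistant specialized in cybersecurity Q&A."

def format_example(row):
    # "question"/"answer" are hypothetical column names; adjust to the dataset.
    return tok.apply_chat_template(
        [{"role": "system", "content": SYSTEM},
         {"role": "user", "content": row["question"]},
         {"role": "assistant", "content": row["answer"]}],
        tokenize=False,
    )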

Training Config

  Field                  Value
  Method                 FULL
  Precision              fp32
  Quantization           none
  Mode                   steps
  Num Epochs             1
  Max Steps              2500
  Eval Steps             400
  Save Steps             400
  LR                     1e-05
  Max Length             768
  per_device_batch_size  1
  grad_accum             8
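These settings map onto transformers' TrainingArguments roughly as below. This is a minimal sketch, not the exact training script: output_dir and logging_steps are placeholders, and Max Length (768) is applied at tokenization time rather than here.

from transformers import TrainingArguments

# Effective batch size = per_device_batch_size (1) x grad_accum (8) = 8.
args = TrainingArguments(
    output_dir="qwen3-0-6b-cybersecqa-fullft",  # placeholder
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,
    num_train_epochs=1,
    max_steps=2500,          # "Mode: steps": max_steps takes precedence
    eval_strategy="steps",
    eval_steps=400,
    save_steps=400,
    fp16=False,
    bf16=False,              # fp32 throughout, no quantization
    logging_steps=50,        # assumption; not stated in the card
)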

Evaluation (greedy)

  Metric            Score
  BLEU-4            1.26
  ROUGE-L           14.00
  F1                26.62
  EM (Exact Match)  0.00

Notes: We normalize whitespace/punctuation, compute token-level P/R/F1, and use evaluate's sacrebleu/rouge/chrf implementations; a sketch of the scoring follows.
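For concreteness, the token-level F1 described in the notes could be computed as below. This is a minimal sketch assuming normalization means lowercasing, stripping punctuation, and collapsing whitespace; the card does not spell out the exact rules.

import string
from collections import Counter

import evaluate

def normalize(text):
    # Assumed normalization: lowercase, drop punctuation, collapse whitespace.
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def token_f1(pred, ref):
    p, r = normalize(pred).split(), normalize(ref).split()
    overlap = sum((Counter(p) & Counter(r)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(r)
    return 2 * precision * recall / (precision + recall)

# BLEU/ROUGE/chrF via the evaluate library, as the notes describe.
bleu = evaluate.load("sacrebleu")
rouge = evaluate.load("rouge")
chrf = evaluate.load("chrf")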

How to use

from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("nhonhoccode/qwen3-0-6b-cybersecqa-fullft-20251103-1057")
mdl = AutoModelForCausalLM.from_pretrained("nhonhoccode/qwen3-0-6b-cybersecqa-fullft-20251103-1057")

# Build the prompt with the same system message used during fine-tuning.
prompt = tok.apply_chat_template(
    [{"role": "system", "content": "You are a helpful assistant specialized in cybersecurity Q&A."},
     {"role": "user", "content": "Explain SQL injection in one paragraph."}],
    tokenize=False, add_generation_prompt=True
)
inputs = tok(prompt, return_tensors="pt")
# Greedy decoding (do_sample=False) matches the evaluation setup above.
out = mdl.generate(**inputs, max_new_tokens=128, do_sample=False)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tok.decode(out[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))

Intended Use & Limitations

  • Domain: cybersecurity Q&A; not guaranteed to be accurate for legal/medical purposes.
  • The model can hallucinate or produce outdated guidance; verify before applying in production.
  • Safety: No explicit content filtering. Add guardrails (moderation, retrieval augmentation) for deployment; a hypothetical sketch follows.
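As one illustration of the guardrail point above (purely hypothetical: moderate_fn stands in for a real moderation service, and generate_fn for the snippet in "How to use"):

def answer_with_guardrail(question, generate_fn, moderate_fn):
    # generate_fn: produces the model's answer (e.g. the "How to use" snippet).
    # moderate_fn: hypothetical content filter returning True when flagged.
    answer = generate_fn(question)
    return "Response withheld by content filter." if moderate_fn(answer) else answer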

Reproducibility (env)

  • transformers>=4.43,<5, accelerate>=0.33,<0.34, peft>=0.11,<0.13, datasets>=2.18,<3, evaluate>=0.4,<0.5, rouge-score, sacrebleu, huggingface_hub>=0.23,<0.26, bitsandbytes
  • GPU: T4-class; LoRA recommended for low VRAM.

Changelog

  • 2025-11-03 10:58 - Initial release (FULL)