qwen3-0-6b — Cybersecurity QA (LoRA, 8-bit)

Fine-tuned on Kaggle with LoRA on top of an 8-bit (bitsandbytes int8) quantized base model.

Model Summary

  • Base: unsloth/Qwen3-0.6B
  • Trainable params: 10,092,544 of 606,142,464 total (≈1.7%)
  • Train wall time: 26,498.1 s (≈7.4 h)
  • Files: adapter_model.safetensors + adapter_config.json (LoRA) + tokenizer files
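The trainable/total counts above can be reproduced with peft once the adapter is attached. A minimal sketch; is_trainable=True is needed so the adapter weights register as trainable when loaded:

from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-0.6B")
mdl = PeftModel.from_pretrained(
    base, "nhonhoccode/qwen3-0-6b-cybersecqa-lora-8bit-20251102-2209",
    is_trainable=True,  # without this, adapter weights load frozen and count as 0 trainable
)
mdl.print_trainable_parameters()  # should report trainable/total counts matching the summary above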

Data

  • Dataset: zobayer0x01/cybersecurity-qa
  • Samples: total=42484, train=38235, val=2000
  • Prompting: chat template with a fixed system prompt (formatting sketch below):
    "You are a helpful assistant specialized in cybersecurity Q&A."
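A minimal sketch of how one sample is rendered for training, assuming the dataset exposes question/answer columns (the actual schema lives on the dataset card):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("unsloth/Qwen3-0.6B")
SYSTEM = "You are a helpful assistant specialized in cybersecurity Q&A."

def format_example(example):
    # "question"/"answer" are assumed column names; adjust to the real schema.
    messages = [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
    # Render the full conversation as a single training string (no generation prompt).
    return tok.apply_chat_template(messages, tokenize=False)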

Training Config

Field                    Value
Method                   LoRA
Precision                fp16
Quantization             8-bit (bnb int8)
Mode                     steps (stopping governed by Max Steps)
Num Epochs               1
Max Steps                2000
Eval Steps               400
Save Steps               400
LR                       0.0001
Max Length               768 tokens
per_device_batch_size    1
grad_accum               8
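For orientation, the table maps onto a standard transformers + peft setup roughly as below. This is a hedged reconstruction, not the original training script; the LoRA rank, alpha, dropout, and target modules are assumptions (adapter_config.json holds the authoritative values):

from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 8-bit (bnb int8) base model, per the Quantization row.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-0.6B",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
base = prepare_model_for_kbit_training(base)  # standard prep for training on a quantized model

# LoRA hyperparameters here are assumptions, not taken from this card.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(base, lora)

# Values taken directly from the table; Max Length (768) is applied at tokenization time.
args = TrainingArguments(
    output_dir="out",
    fp16=True,                       # Precision
    max_steps=2000,                  # Max Steps (steps mode)
    eval_strategy="steps",           # enables Eval Steps below
    eval_steps=400,
    save_steps=400,
    learning_rate=1e-4,              # LR 0.0001
    per_device_train_batch_size=1,   # per_device_batch_size
    gradient_accumulation_steps=8,   # grad_accum
)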

Evaluation (greedy)

Metric              Score
BLEU-4              1.27
ROUGE-L             14.07
Token-level F1      27.83
Exact Match (EM)    0.00

Notes: answers are normalized (whitespace and punctuation) before scoring; token-level precision/recall/F1 are computed on the normalized strings, and BLEU/ROUGE/chrF come from the evaluate library's sacrebleu, rouge, and chrf metrics.
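A sketch of the scoring described above, assuming lowercase + punctuation-stripping normalization and whitespace tokenization (the exact normalizer is not specified here):

import string
from collections import Counter

def normalize(text: str) -> str:
    # Lowercase, drop punctuation, collapse whitespace (assumed normalizer).
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def token_f1(pred: str, ref: str) -> float:
    pred_toks, ref_toks = normalize(pred).split(), normalize(ref).split()
    overlap = sum((Counter(pred_toks) & Counter(ref_toks)).values())
    if overlap == 0:
        return 0.0
    p, r = overlap / len(pred_toks), overlap / len(ref_toks)
    return 2 * p * r / (p + r)

def exact_match(pred: str, ref: str) -> float:
    return float(normalize(pred) == normalize(ref))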

How to use

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the tokenizer shipped with the adapter and the original base model.
tok  = AutoTokenizer.from_pretrained("nhonhoccode/qwen3-0-6b-cybersecqa-lora-8bit-20251102-2209")
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-0.6B")

# Attach the LoRA adapter weights on top of the frozen base model.
mdl  = PeftModel.from_pretrained(base, "nhonhoccode/qwen3-0-6b-cybersecqa-lora-8bit-20251102-2209")

# Build the prompt with the same chat template and system prompt used in training.
prompt = tok.apply_chat_template(
    [{"role": "system", "content": "You are a helpful assistant specialized in cybersecurity Q&A."},
     {"role": "user", "content": "Explain SQL injection in one paragraph."}],
    tokenize=False, add_generation_prompt=True,
)

# Greedy decoding (do_sample=False), matching the evaluation setup.
enc = tok(prompt, return_tensors="pt")
out = mdl.generate(**enc, max_new_tokens=128, do_sample=False)

# Decode only the newly generated tokens, skipping the prompt.
print(tok.decode(out[0][enc.input_ids.shape[-1]:], skip_special_tokens=True))
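Since the adapter was trained against an 8-bit base, loading the base in int8 at inference time is also an option on low-VRAM GPUs (requires bitsandbytes and CUDA); a minimal variant of the load step:

from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# int8 base to cut VRAM; generation then runs on the GPU, so move inputs with .to(mdl.device).
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-0.6B",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
mdl = PeftModel.from_pretrained(base, "nhonhoccode/qwen3-0-6b-cybersecqa-lora-8bit-20251102-2209")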

Intended Use & Limitations

  • Domain: cybersecurity Q&A; answers are not guaranteed to be accurate, and the model is not intended for legal or medical advice.
  • The model can hallucinate or produce outdated guidance—verify before applying in production.
  • Safety: No explicit content filtering. Add guardrails (moderation, retrieval augmentation) for deployment.

Reproducibility (env)

  • transformers>=4.43,<5, accelerate>=0.33,<0.34, peft>=0.11,<0.13, datasets>=2.18,<3, evaluate>=0.4,<0.5, rouge-score, sacrebleu, huggingface_hub>=0.23,<0.26, bitsandbytes
  • GPU: T4-class; LoRA recommended for low VRAM.
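A one-shot install matching the pins above (shell):

pip install "transformers>=4.43,<5" "accelerate>=0.33,<0.34" "peft>=0.11,<0.13" \
    "datasets>=2.18,<3" "evaluate>=0.4,<0.5" rouge-score sacrebleu \
    "huggingface_hub>=0.23,<0.26" bitsandbytes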

Changelog

  • 2025-11-02 22:09 — Initial release (LoRA, 8-bit)