CWE Predictor - Vulnerability Classification Model

This model classifies vulnerability descriptions into Common Weakness Enumeration (CWE) categories. It's designed to help security professionals and developers quickly identify the type of vulnerability based on textual descriptions.

Model Details

Model Description

This is a fine-tuned DistilBERT model that predicts CWE (Common Weakness Enumeration) categories from vulnerability descriptions. It was trained on 111,640 CVE descriptions from the National Vulnerability Database (NVD), each mapped to its corresponding CWE identifier.

Key Features:

  • Classifies vulnerabilities into 232 distinct CWE categories

  • Trained on 111,640 vulnerability descriptions

  • Achieves 72.72% accuracy on the validation set

  • Macro F1 score of 0.251, reflecting uneven per-class performance across the long tail of rare CWE categories

  • Lightweight and fast inference using DistilBERT architecture

  • Developed by: mulliken

  • Model type: DistilBERT (Transformer-based classifier)

  • Language(s) (NLP): English

  • License: Apache 2.0

  • Finetuned from model: distilbert/distilbert-base-uncased

Uses

Direct Use

This model can be used directly for:

  • Vulnerability Triage: Automatically classify security vulnerabilities reported in bug bounty programs or security audits
  • Security Analysis: Categorize CVE descriptions to understand vulnerability patterns
  • Automated Security Reporting: Generate CWE classifications for vulnerability reports
  • Security Research: Analyze trends in vulnerability types across codebases

Downstream Use

The model can be integrated into:

  • Security scanning tools and SAST/DAST platforms
  • Vulnerability management systems
  • Security information and event management (SIEM) systems
  • DevSecOps pipelines for automated vulnerability classification
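
For integrations like these, the Hugging Face pipeline API gives a compact entry point. A minimal sketch; the example report strings are illustrative:

from transformers import pipeline

# Load the classifier once at service start-up and reuse it across reports.
classifier = pipeline("text-classification", model="mulliken/cwe-predictor")

# Classify a batch of incoming vulnerability reports in one call.
reports = [
    "Use-after-free in the PDF renderer allows remote code execution",
    "Improper input validation in the API gateway exposes internal endpoints",
]
for result in classifier(reports, truncation=True, max_length=512):
    print(result["label"], round(result["score"], 3))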

Out-of-Scope Use

This model should NOT be used for:

  • Medical or safety-critical systems without additional validation
  • As the sole method for security assessment (should complement human expertise)
  • Classifying non-English vulnerability descriptions
  • Real-time security detection (model is designed for post-discovery classification)

Bias, Risks, and Limitations

Known Limitations

  • Class Imbalance: Some CWE categories are underrepresented in the training data, which may lead to lower accuracy for rare vulnerability types
  • Temporal Bias: Model trained on historical CVE data may not recognize newer vulnerability patterns
  • Language Limitation: Only trained on English descriptions
  • Context Loss: Input is limited to 512 tokens; longer descriptions are truncated (see the length-check sketch below)
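
Truncation can be detected up front by measuring token length before inference. A minimal sketch; is_truncated is a hypothetical helper, not part of the model's API:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mulliken/cwe-predictor")

def is_truncated(text: str, max_length: int = 512) -> bool:
    # Tokenize without truncation to measure the full sequence length.
    token_ids = tokenizer(text, truncation=False)["input_ids"]
    return len(token_ids) > max_length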

Risks

  • False negatives could lead to unidentified security vulnerabilities
  • Should not replace human security expertise
  • May not generalize well to proprietary or domain-specific vulnerability descriptions

Recommendations

  • Always use this model as a supplementary tool alongside human security expertise
  • Validate predictions before acting on critical security decisions (a confidence-thresholding sketch follows this list)
  • Consider retraining or fine-tuning for domain-specific applications
  • Monitor model performance over time as new vulnerability types emerge
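
One way to operationalize that validation step is to attach a softmax confidence to each prediction and route low-confidence cases to a human analyst. A minimal sketch; the 0.5 threshold is an illustrative assumption, not a calibrated value:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("mulliken/cwe-predictor")
tokenizer = AutoTokenizer.from_pretrained("mulliken/cwe-predictor")

def predict_with_confidence(text: str, threshold: float = 0.5):
    encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        probs = torch.softmax(model(**encoded).logits, dim=-1)
    confidence, pred_id = probs.max(dim=-1)
    label = model.config.id2label[pred_id.item()]
    # Flag low-confidence predictions for human review instead of auto-accepting them.
    return label, confidence.item(), confidence.item() < threshold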

How to Get Started with the Model

Installation

pip install transformers torch

Quick Start

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("mulliken/cwe-predictor")
tokenizer = AutoTokenizer.from_pretrained("mulliken/cwe-predictor")

# Prediction function
def predict_cwe(text: str) -> str:
    encoded = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**encoded).logits
        pred_id = torch.argmax(logits, dim=-1).item()
    return model.config.id2label[pred_id]

# Example usage
vuln_description = "Buffer overflow in the authentication module allows remote attackers to execute arbitrary code."
cwe_prediction = predict_cwe(vuln_description)
print(f"Predicted CWE: {cwe_prediction}")

Example Predictions

examples = [
    "SQL injection vulnerability in login form allows attackers to bypass authentication",
    "Cross-site scripting (XSS) vulnerability in comment section",
    "Path traversal vulnerability allows reading arbitrary files",
    "Integer overflow in image processing library causes memory corruption"
]

for desc in examples:
    print(f"Description: {desc}")
    print(f"Predicted CWE: {predict_cwe(desc)}\n")

Training Details

Training Data

The model was trained on the CVE and CWE Mapping Dataset, which contains:

  • CVE descriptions from the National Vulnerability Database (NVD)
  • Corresponding CWE classifications
  • Dataset size: 124,045 examples after filtering
  • Training set: 111,640 examples
  • Validation set: 12,405 examples
  • Number of CWE classes: 232 (after removing generic categories like "NVD-CWE-Other" and "NVD-CWE-noinfo")

Training Procedure

Preprocessing

  1. Data Cleaning:

    • Removed entries with missing descriptions or CWE IDs
    • Filtered out generic CWE categories ("NVD-CWE-Other", "NVD-CWE-noinfo")
    • Removed CWE categories with only 1 example to ensure stratified splitting
  2. Tokenization:

    • Used DistilBERT tokenizer with max_length=512
    • Applied truncation for longer descriptions
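
The cleaning steps above might look like the following pandas sketch. The file name and the column names description and cwe_id are illustrative assumptions; the dataset's exact schema is not reproduced here:

import pandas as pd

df = pd.read_csv("cve_cwe_mapping.csv")  # hypothetical file name

# 1. Drop rows with missing descriptions or CWE IDs.
df = df.dropna(subset=["description", "cwe_id"])

# 2. Filter out the generic catch-all categories.
df = df[~df["cwe_id"].isin(["NVD-CWE-Other", "NVD-CWE-noinfo"])]

# 3. Remove classes with a single example so a stratified split is possible.
counts = df["cwe_id"].value_counts()
df = df[df["cwe_id"].isin(counts[counts > 1].index)]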

Training Hyperparameters

  • Learning rate: 2e-5
  • Batch size: 2 per device with gradient accumulation of 8 (effective batch size: 16)
  • Number of epochs: 1
  • Weight decay: 0.01
  • Optimizer: AdamW
  • Training regime: fp32 with gradient checkpointing
  • Evaluation strategy: Every 1000 steps
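
These settings map roughly onto a Hugging Face TrainingArguments configuration like the sketch below. This is a reconstruction from the numbers above, not the exact training script:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="cwe-predictor",
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,   # effective batch size: 2 x 8 = 16
    num_train_epochs=1,
    weight_decay=0.01,               # the Trainer's default optimizer is AdamW
    gradient_checkpointing=True,     # trades recomputation for lower memory use
    eval_strategy="steps",           # called evaluation_strategy in older transformers releases
    eval_steps=1000,
)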

Training Performance

  • Total training time: ~78 minutes (4,712 seconds) for the single training epoch
  • Training steps: 13,956
  • Training samples per second: 23.691
  • Final training loss: 1.134700
  • Best validation loss: 1.082806 (at step 6000)
  • Model size: ~268MB

Evaluation

Testing Data, Factors & Metrics

Testing Data

Validation set of 12,405 examples (a 10% stratified split of the full filtered dataset)

Metrics

  • Accuracy: Overall correctness of predictions
  • Macro F1 Score: Unweighted mean of F1 scores for each class (ensures balanced performance across all CWE types)
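
Both metrics can be computed with scikit-learn inside a Trainer-compatible compute_metrics function. A minimal sketch:

import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        # Macro averaging weights every CWE class equally, regardless of frequency.
        "macro_f1": f1_score(labels, preds, average="macro"),
    }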

Results

Step    Training Loss    Validation Loss    Accuracy    Macro F1
1000    1.044600         1.252940           0.704716    0.220344
2000    1.158700         1.188677           0.711326    0.229855
3000    1.119900         1.159229           0.719226    0.235295
4000    1.112600         1.119924           0.720193    0.242404
5000    1.110300         1.111053           0.722934    0.244389
6000    1.134700         1.082806           0.727207    0.251264

Summary

The model achieves 72.72% accuracy on the validation set with a macro F1 score of 0.251. The much lower macro F1 reflects the difficulty of classifying across 232 CWE categories whose representation in the dataset is highly uneven.

Model Examination

The model uses standard DistilBERT attention mechanisms to process vulnerability descriptions. Key observations:

  • The model learns to identify security-related keywords and patterns
  • Attention weights typically focus on vulnerability-specific terms (e.g., "overflow", "injection", "traversal")
  • Performance varies by CWE category based on training data representation
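
Attention weights can be inspected directly by requesting them at inference time. A small sketch reusing the Quick Start objects; token-level attention is a rough interpretability signal, not a guaranteed explanation:

# Which input tokens does the final layer attend to from the [CLS] position?
encoded = tokenizer("SQL injection in login form", return_tensors="pt")
with torch.no_grad():
    outputs = model(**encoded, output_attentions=True)

# outputs.attentions holds one tensor per layer, shaped (batch, heads, seq, seq).
cls_attention = outputs.attentions[-1][0].mean(dim=0)[0]  # average heads, row for [CLS]
tokens = tokenizer.convert_ids_to_tokens(encoded["input_ids"][0])
for token, weight in sorted(zip(tokens, cls_attention.tolist()), key=lambda t: -t[1])[:5]:
    print(f"{token}: {weight:.3f}")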

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: Apple Silicon (M-series chip)
  • Hours used: ~1.3 hours
  • Cloud Provider: Local training (no cloud provider)
  • Compute Region: N/A (local)
  • Carbon Emitted: Minimal (Apple Silicon is energy efficient, ~15W TDP)

Technical Specifications

Model Architecture and Objective

  • Base Architecture: DistilBERT (distilbert-base-uncased)
  • Task: Multi-class text classification
  • Number of labels: 232 CWE categories
  • Objective: Cross-entropy loss for sequence classification
  • Architecture modifications: Added classification head with 232 output classes

Compute Infrastructure

Local machine with Apple Silicon processor

Hardware

  • Device: Apple Silicon (MPS backend)
  • Memory management: PYTORCH_MPS_HIGH_WATERMARK_RATIO set to 0.0
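
The watermark ratio is an environment variable and must be set before PyTorch allocates any MPS memory. A minimal sketch:

import os

# 0.0 disables the MPS high-watermark limit so PyTorch can use all unified memory.
# Set this before importing torch (or at least before the first MPS allocation).
os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"

import torch
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")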

Software

  • Framework: PyTorch with Hugging Face Transformers
  • Python version: 3.x
  • Key libraries: transformers, torch, datasets, scikit-learn, pandas, numpy

Citation

If you use this model in your research, please cite:

@misc{mulliken2024cwepredictor,
  author = {mulliken},
  title = {CWE Predictor: A DistilBERT Model for Vulnerability Classification},
  year = {2024},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/mulliken/cwe-predictor}}
}

Glossary

  • CWE (Common Weakness Enumeration): A community-developed list of software and hardware weakness types
  • CVE (Common Vulnerabilities and Exposures): A list of publicly disclosed cybersecurity vulnerabilities
  • NVD (National Vulnerability Database): U.S. government repository of vulnerability management data
  • Macro F1: The unweighted mean of F1 scores calculated for each class independently
  • SAST/DAST: Static/Dynamic Application Security Testing

More Information

For questions, issues, or contributions, please visit the Hugging Face model page.

Model Card Authors

mulliken

Model Card Contact

Please use the Hugging Face model repository's discussion section for questions and feedback: mulliken/cwe-predictor
