# DACTYL Text Detector
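
A minimal inference sketch, assuming the checkpoint exposes a single-logit sequence-classification head (consistent with the `BCEWithLogitsLoss` setup in the training configuration below) and that a higher score means "more likely machine-generated"; both assumptions should be checked against the released code:

```python
# Hedged inference sketch -- assumes a single-logit classification head
# and that label 1 / higher score denotes AI-generated text.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "ShantanuT01/dactyl-bert-tiny-pretrained"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)
model.eval()

text = "Example passage to score."
inputs = tokenizer(text, truncation=True, return_tensors="pt")
with torch.no_grad():
    logit = model(**inputs).logits.squeeze()
# Sigmoid is applied at inference time because training used
# BCEWithLogitsLoss (which folds the sigmoid into the loss).
score = torch.sigmoid(logit).item()
print(f"Detection score: {score:.3f}")
```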

## Training Configuration

The detector was fine-tuned from `prajjwal1/bert-tiny` for 5 epochs with the settings below:

```json
{
    "training_split": "training",
    "evaluation_split": "testing",
    "results_path": "bce-bert-tiny.csv",
    "num_epochs": 5,
    "model_path": "prajjwal1/bert-tiny",
    "tokenizer": "prajjwal1/bert-tiny",
    "optimizer": "AdamW",
    "optimizer_type": "torch",
    "optimizer_args": {
        "lr": 2e-05,
        "weight_decay": 0.01
    },
    "loss_fn": "BCEWithLogitsLoss",
    "reset_classification_head": false,
    "loss_type": "torch",
    "loss_fn_args": {},
    "needs_loss_fn_as_parameter": false,
    "save_path": "ShantanuT01/dactyl-bert-tiny-pretrained",
    "training_args": {
        "batch_size": 64,
        "needs_sampler": false,
        "needs_index": false,
        "shuffle": true,
        "sampling_rate": null,
        "apply_sigmoid": false
    },
    "best_model_path": "best-berttiny-model"
}
```
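
This configuration maps onto a standard PyTorch fine-tuning loop. The sketch below shows how the hyperparameters above might be wired together; `train_pairs`, the collate helper, and the (text, label) data layout are illustrative assumptions, not the released training code:

```python
# Illustrative sketch of the training setup implied by the config above.
# Only the model, optimizer, loss, batch size, and epoch count come from
# the config; dataset structure is a placeholder.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-tiny")
model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-tiny", num_labels=1  # single logit for BCEWithLogitsLoss
).to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-05, weight_decay=0.01)
loss_fn = torch.nn.BCEWithLogitsLoss()

def collate(batch):
    # batch: list of (text, 0/1 label) pairs -- hypothetical structure
    texts, labels = zip(*batch)
    enc = tokenizer(list(texts), padding=True, truncation=True, return_tensors="pt")
    return enc, torch.tensor(labels, dtype=torch.float32)

# `train_pairs` is a hypothetical list of (text, 0/1 label) tuples.
loader = DataLoader(train_pairs, batch_size=64, shuffle=True, collate_fn=collate)

model.train()
for epoch in range(5):
    for enc, labels in loader:
        enc = {k: v.to(device) for k, v in enc.items()}
        logits = model(**enc).logits.squeeze(-1)
        loss = loss_fn(logits, labels.to(device))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```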

## Results

Per-generator metrics on the evaluation split (AP = average precision; OPAUC and TPAUC are one-way and two-way partial AUC, respectively):

| Model | AP Score | AUC Score | OPAUC Score | TPAUC Score |
|---|---|---|---|---|
| DeepSeek-V3 | 0.987343 | 0.997304 | 0.992057 | 0.922638 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-RedditWritingPrompts-testing | 0.0216925 | 0.798882 | 0.620986 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-abstracts-testing | 0.389216 | 0.946976 | 0.826789 | 0.00114491 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-news-testing | 0.021503 | 0.77606 | 0.574528 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-reviews-testing | 0.0161741 | 0.849759 | 0.51721 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-student_essays-testing | 0.0118964 | 0.675985 | 0.523548 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-tweets-testing | 0.207775 | 0.950475 | 0.734974 | 0 |
| claude-3-5-haiku-20241022 | 0.969507 | 0.994995 | 0.979943 | 0.804754 |
| claude-3-5-sonnet-20241022 | 0.983519 | 0.997616 | 0.989414 | 0.897083 |
| gemini-1.5-flash | 0.939633 | 0.991016 | 0.959301 | 0.606443 |
| gemini-1.5-pro | 0.866792 | 0.971608 | 0.915183 | 0.221832 |
| gpt-4o-2024-11-20 | 0.953158 | 0.990735 | 0.969787 | 0.706982 |
| gpt-4o-mini | 0.993479 | 0.998986 | 0.995918 | 0.96024 |
| llama-3.2-90b | 0.90081 | 0.981106 | 0.935416 | 0.383667 |
| llama-3.3-70b | 0.956044 | 0.992973 | 0.970868 | 0.717314 |
| mistral-large-latest | 0.981385 | 0.996985 | 0.988099 | 0.884255 |
| mistral-small-latest | 0.988285 | 0.997515 | 0.992244 | 0.924431 |
| overall | 0.977568 | 0.981756 | 0.948344 | 0.499516 |
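
Given per-document labels and detector scores, the AP and AUC columns can be reproduced with scikit-learn, as sketched below with placeholder arrays. Note that scikit-learn's `max_fpr` argument yields a McClish-standardized one-way partial AUC, which approximates but may not exactly match the OPAUC/TPAUC definitions used for this table:

```python
# Hedged sketch: recomputing the tabulated metrics from detector scores.
# `y_true` (1 = AI-generated) and `y_score` are hypothetical arrays.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1])               # placeholder labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.9])   # placeholder sigmoid scores

ap = average_precision_score(y_true, y_score)    # "AP Score" column
auc = roc_auc_score(y_true, y_score)             # "AUC Score" column

# One-way partial AUC restricted to a low-FPR region; the FPR cutoff
# here is illustrative, not taken from the evaluation code.
opauc_approx = roc_auc_score(y_true, y_score, max_fpr=0.3)

print(f"AP={ap:.4f}  AUC={auc:.4f}  pAUC(FPR<=0.3)={opauc_approx:.4f}")
```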