# DACTYL Text Detector

A TinyBERT-based detector for machine-generated text, fine-tuned with LibAUC's two-way partial AUC (TPAUC) objective. The training configuration and per-generator evaluation results are below.

## Training Configuration

```json
{
    "training_split": "training",
    "evaluation_split": "testing",
    "results_path": "libauc-tinybert.csv",
    "num_epochs": 1,
    "model_path": "ShantanuT01/dactyl-tinybert-pretrained",
    "tokenizer": "huawei-noah/TinyBERT_General_4L_312D",
    "optimizer": "SOTAs",
    "optimizer_type": "libauc",
    "optimizer_args": {
        "lr": 1e-05
    },
    "loss_fn": "tpAUC_KL_Loss",
    "reset_classification_head": true,
    "loss_type": "libauc",
    "loss_fn_args": {
        "data_len": 466005
    },
    "needs_loss_fn_as_parameter": false,
    "save_path": "ShantanuT01/dactyl-tinybert-finetuned",
    "training_args": {
        "batch_size": 64,
        "needs_sampler": true,
        "needs_index": true,
        "shuffle": false,
        "sampling_rate": 0.5,
        "apply_sigmoid": true
    },
    "best_model_path": "best-tpauc-model-tinybert"
}
```
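
These fields map onto LibAUC primitives: the `SOTAs` optimizer, the `tpAUC_KL_Loss` objective, and a positive/negative dual sampler. Below is a minimal sketch of how they could be wired together; it is not the actual DACTYL training script. `IndexedTextDataset`, `train_texts`, and `train_labels` are hypothetical placeholders, and the exact `DualSampler` keyword arguments and loss forward signature are assumptions based on LibAUC's documented API.

```python
# Minimal sketch, not the actual DACTYL training script.
# `train_texts` / `train_labels` are hypothetical placeholders.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from libauc.losses import tpAUC_KL_Loss
from libauc.optimizers import SOTAs
from libauc.sampler import DualSampler

class IndexedTextDataset(Dataset):
    """Yields (features, label, index); "needs_index": true suggests the loss
    tracks per-sample estimators keyed by dataset index."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=512, return_tensors="pt")
        self.labels = torch.tensor(labels, dtype=torch.float32)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        return {k: v[idx] for k, v in self.enc.items()}, self.labels[idx], idx

tokenizer = AutoTokenizer.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")
model = AutoModelForSequenceClassification.from_pretrained(
    "ShantanuT01/dactyl-tinybert-pretrained", num_labels=1)
model.classifier.reset_parameters()  # "reset_classification_head": true

dataset = IndexedTextDataset(train_texts, train_labels, tokenizer)
loss_fn = tpAUC_KL_Loss(data_len=466005)         # "loss_fn_args"
optimizer = SOTAs(model.parameters(), lr=1e-05)  # "needs_loss_fn_as_parameter": false

# "needs_sampler": true with "sampling_rate": 0.5 -> half of each batch positive
sampler = DualSampler(dataset, batch_size=64, labels=train_labels, sampling_rate=0.5)
loader = DataLoader(dataset, batch_size=64, sampler=sampler)  # "shuffle": false

model.train()
for epoch in range(1):  # "num_epochs": 1
    for features, labels, index in loader:
        logits = model(**features).logits.squeeze(-1)
        scores = torch.sigmoid(logits)         # "apply_sigmoid": true
        loss = loss_fn(scores, labels, index)  # index feeds the loss's estimators
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```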

## Results

Evaluation scores on the testing split, broken down by the generator model that produced the machine-generated text:

| Model | AP Score | AUC Score | OPAUC Score | TPAUC Score |
|---|---|---|---|---|
| DeepSeek-V3 | 0.981321 | 0.996726 | 0.991002 | 0.921253 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-RedditWritingPrompts-testing | 0.0340303 | 0.703731 | 0.647861 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-abstracts-testing | 0.184745 | 0.86911 | 0.683087 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-news-testing | 0.0173351 | 0.615211 | 0.57306 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-reviews-testing | 0.108149 | 0.940023 | 0.769001 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-student_essays-testing | 0.0123406 | 0.58479 | 0.537483 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-tweets-testing | 0.311729 | 0.964433 | 0.818375 | 0.0128094 |
| claude-3-5-haiku-20241022 | 0.961089 | 0.993304 | 0.977485 | 0.791218 |
| claude-3-5-sonnet-20241022 | 0.978917 | 0.997052 | 0.989079 | 0.903623 |
| gemini-1.5-flash | 0.945428 | 0.991914 | 0.967267 | 0.69349 |
| gemini-1.5-pro | 0.887585 | 0.975313 | 0.931469 | 0.36524 |
| gpt-4o-2024-11-20 | 0.956721 | 0.992097 | 0.974751 | 0.764793 |
| gpt-4o-mini | 0.986874 | 0.999249 | 0.995276 | 0.963878 |
| llama-3.2-90b | 0.913169 | 0.982064 | 0.947501 | 0.505202 |
| llama-3.3-70b | 0.957947 | 0.99155 | 0.97557 | 0.773428 |
| mistral-large-latest | 0.980996 | 0.997519 | 0.99041 | 0.915818 |
| mistral-small-latest | 0.9803 | 0.996372 | 0.990539 | 0.918353 |
| overall | 0.975576 | 0.977332 | 0.951238 | 0.537319 |
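
AP is average precision and AUC the full ROC AUC; OPAUC and TPAUC denote one-way and two-way partial AUC, which score only the low-FPR (and, for TPAUC, high-TPR) region of the ROC curve that the `tpAUC_KL_Loss` objective targets. The sketch below shows one common way such metrics are computed; the `alpha`/`beta` cutoffs are illustrative placeholders, since the exact ranges behind this table are not stated on the card.

```python
# Illustrative metric computation; the alpha/beta cutoffs are hypothetical,
# not the ones used for the table above.
import numpy as np
from sklearn.metrics import auc, average_precision_score, roc_auc_score, roc_curve

def two_way_pauc(y_true, y_score, alpha=0.5, beta=0.5):
    """Normalized ROC area restricted to FPR <= beta and TPR >= alpha."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    grid = np.linspace(0.0, beta, 1000)
    tpr_on_grid = np.interp(grid, fpr, tpr)
    area = auc(grid, np.clip(tpr_on_grid - alpha, 0.0, None))
    return area / (beta * (1.0 - alpha))  # scale so a perfect detector scores 1

y_true = np.array([0, 0, 0, 1, 1, 1])  # 1 = machine-generated
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2])

print("AP:   ", average_precision_score(y_true, y_score))
print("AUC:  ", roc_auc_score(y_true, y_score))
# One-way partial AUC; note sklearn returns the McClish-standardized value.
print("OPAUC:", roc_auc_score(y_true, y_score, max_fpr=0.5))
print("TPAUC:", two_way_pauc(y_true, y_score, alpha=0.5, beta=0.5))
```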
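
## Example Usage

For completeness, a minimal inference sketch with the fine-tuned checkpoint. Per the `apply_sigmoid` flag in the training configuration, the single output logit is passed through a sigmoid to obtain a detection score; the score orientation (higher = machine-generated) is an assumption, not something stated on this card.

```python
# Minimal inference sketch; score orientation (higher = machine-generated)
# is an assumption rather than something stated on this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huawei-noah/TinyBERT_General_4L_312D")
model = AutoModelForSequenceClassification.from_pretrained(
    "ShantanuT01/dactyl-tinybert-finetuned")
model.eval()

text = "Sample passage to score."
features = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logit = model(**features).logits.squeeze(-1)
score = torch.sigmoid(logit).item()  # "apply_sigmoid": true in the training config
print(f"detection score: {score:.4f}")
```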