koenshen committed
Commit ec8a6ed · verified · 1 Parent(s): fee85bc

Update README.md

Files changed (1): README.md +3 -1
README.md CHANGED
@@ -15,8 +15,10 @@ size_categories:
 - 10K<n<100K
 ---
 # EVADE: Multimodal Benchmark for Evasive Content Detection in E-Commerce Applications
-[**🤗 Dataset**](https://huggingface.co/datasets/koenshen/EVADE-Bench) | [**GitHub**](https://github.com/koenshen/EVADE-Bench)
+[**🤗 Dataset**](https://huggingface.co/datasets/koenshen/EVADE-Bench) | [**Paper**](https://www.arxiv.org/abs/2505.17654) | [**GitHub**](https://github.com/koenshen/EVADE-Bench)
 </br>
+E-commerce platforms increasingly rely on Large Language Models (LLMs) and Vision–Language Models (VLMs) to detect illicit or misleading product content. However, these models remain vulnerable to *evasive content*: inputs (text or images) that superficially comply with platform policies while covertly conveying prohibited claims. Unlike traditional adversarial attacks that induce overt failures, evasive content exploits ambiguity and context, making it far harder to detect. Existing robustness benchmarks provide little guidance for this high-stakes, real-world challenge. We introduce **EVADE**, the first expert-curated, Chinese, multimodal benchmark specifically designed to evaluate foundation models on evasive content detection in e-commerce. The dataset contains 2,833 annotated text samples and 13,961 images spanning six high-risk product categories, including body shaping, height growth, and health supplements. Two complementary tasks assess distinct capabilities: *Single-Risk*, which probes fine-grained reasoning under short prompts, and *All-in-One*, which tests long-context reasoning by merging overlapping policy rules into unified instructions. Notably, the All-in-One setting significantly narrows the performance gap between partial and exact-match accuracy, suggesting that clearer rule definitions improve alignment between human and model judgment. We benchmark 26 mainstream LLMs and VLMs and observe substantial performance gaps: even state-of-the-art models frequently misclassify evasive samples. Through detailed error analysis, we identify critical challenges including metaphorical phrasing, misspelled or homophonic terms, and optical character recognition (OCR) limitations in VLMs. Retrieval-Augmented Generation (RAG) further improves model performance in long-context scenarios, indicating promise for context-aware augmentation strategies. By releasing EVADE and strong baselines, we provide the first rigorous standard for evaluating evasive-content detection, expose fundamental limitations in current multimodal reasoning, and lay the groundwork for safer and more transparent content moderation systems in e-commerce.
+
 <img src="framework.jpg"/>

 **This dataset contains the following fields**
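
For readers landing on this commit, a minimal loading sketch follows, using the hub ID from the dataset link above and the standard `datasets` library; the diff does not specify config or split names, so those are assumptions — check the dataset card for the actual layout.

```python
from datasets import load_dataset

# Hub ID taken from the dataset link in the README; everything else here
# (default config, "train" split name) is an assumption, not confirmed above.
ds = load_dataset("koenshen/EVADE-Bench")

print(ds)              # list available splits and their feature schemas
print(ds["train"][0])  # peek at one annotated sample (assumed split name)
```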