Character-Level Perturbations Disrupt LLM Watermarks
Abstract
Character-level perturbations and genetic algorithm-based attacks demonstrate vulnerabilities in LLM watermarking systems under realistic threat models.
Large Language Model (LLM) watermarking embeds detectable signals into generated text for copyright protection, misuse prevention, and content detection. While prior studies evaluate robustness using watermark removal attacks, these methods are often suboptimal, creating the misconception that effective removal requires large perturbations or powerful adversaries. To bridge this gap, we first formalize the system model for LLM watermarking and characterize two realistic threat models constrained by limited access to the watermark detector. We then analyze how different types of perturbation vary in their attack range, i.e., the number of tokens they can affect with a single edit. We observe that character-level perturbations (e.g., typos, swaps, deletions, homoglyphs) can influence multiple tokens simultaneously by disrupting the tokenization process. We demonstrate that character-level perturbations are significantly more effective for watermark removal under the most restrictive threat model. We further propose guided removal attacks based on the Genetic Algorithm (GA) that use a reference detector for optimization. Under a practical threat model with limited black-box queries to the watermark detector, our method demonstrates strong removal performance. Experiments confirm the superiority of character-level perturbations and the effectiveness of the GA in removing watermarks under realistic constraints. Additionally, we argue that there is an adversarial dilemma when considering potential defenses: any fixed defense can be bypassed by a suitable perturbation strategy. Motivated by this principle, we propose an adaptive compound character-level attack. Experimental results show that this approach effectively defeats such defenses. Our findings highlight significant vulnerabilities in existing LLM watermark schemes and underscore the urgent need for new, more robust mechanisms.
Community
Our paper Character-Level Perturbations Disrupt LLM Watermarks has been accepted to the Network and Distributed System Security (NDSS) Symposium 2026. Large Language Model (LLM) watermarking, which embeds detectable signals during text generation, has been regarded as a promising solution for copyright protection, misuse prevention, and AI-generated content detection. However, a key challenge lies in accurately assessing the robustness of watermarking schemes. Current evaluations rely on watermark removal attacks, yet most existing attacks are suboptimal, leading to the misconception that successful removal always requires either large perturbation budgets or powerful adversarial capabilities.
In this work, we systematically investigate the robustness of LLM watermarking:
We formalize the system model and define two realistic threat models with limited detector access.
We analyze different perturbation types and demonstrate that character-level perturbations (e.g., typos, deletions, homoglyphs) achieve stronger removal performance by disrupting tokenization, allowing a single modification to affect multiple tokens (see the tokenization sketch after this list).
We propose a reference-detector-guided genetic algorithm to optimize perturbations, and design a compound character-level attack that effectively bypasses potential defenses (a simplified GA sketch follows the experiments summary below).
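To make the tokenization effect concrete, the snippet below is a minimal illustration (not the authors' code) of how a single character-level edit fragments a tokenizer's output. It assumes the GPT-2 tokenizer from Hugging Face transformers purely for demonstration; the models evaluated in the paper may tokenize differently.

```python
# Minimal illustration (not the authors' code): a single character-level edit
# can fragment tokenization, so one edit perturbs several token ids at once.
# Assumes the GPT-2 tokenizer from Hugging Face `transformers` for demonstration.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

original = "The watermarking scheme remains detectable."
perturbed = {
    "typo (swap)": "The watermarking scheme remians detectable.",
    "deletion":    "The watermarking scheme remins detectable.",
    "homoglyph":   "The watermarking scheme rem\u0430ins detectable.",  # Cyrillic 'а'
}

print("original   :", tokenizer.tokenize(original))
for name, text in perturbed.items():
    # The edited word no longer matches a single vocabulary entry, so it is
    # split into several sub-word pieces the watermark detector never scored.
    print(f"{name:11s}:", tokenizer.tokenize(text))
```

Because the detector scores token ids, re-segmenting even one word changes several scored tokens at once, which is what gives a character edit a wider attack range than a token-level substitution.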
Experiments on five representative watermarking schemes and two widely used LLMs consistently confirm the superiority of character-level perturbations. Our findings highlight critical vulnerabilities in current watermarking techniques and emphasize the urgent need for more robust mechanisms.
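For readers curious what the detector-guided search could look like, the following is a heavily simplified sketch of a genetic algorithm that evolves character-level perturbations against a reference detector. The reference_detector callable, the fitness function, the operators, and all budgets here are our own illustrative assumptions, not the paper's implementation.

```python
# Simplified sketch of a reference-detector-guided genetic algorithm for
# watermark removal. `reference_detector(text) -> float` is a hypothetical
# callable returning a watermark score (higher = more confidently watermarked);
# the paper's actual fitness, operators, and query budgets may differ.
import random

HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e", "c": "\u0441"}  # Latin -> Cyrillic

def mutate(text: str) -> str:
    """Apply one random character-level edit: deletion, adjacent swap, or homoglyph."""
    if len(text) < 2:
        return text
    i = random.randrange(len(text) - 1)
    op = random.choice(["delete", "swap", "homoglyph"])
    if op == "delete":
        return text[:i] + text[i + 1:]
    if op == "swap":
        return text[:i] + text[i + 1] + text[i] + text[i + 2:]
    return text[:i] + HOMOGLYPHS.get(text[i].lower(), text[i]) + text[i + 1:]

def crossover(a: str, b: str) -> str:
    """Single-point crossover on the character sequence."""
    if min(len(a), len(b)) < 2:
        return a
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def ga_remove(text, reference_detector, pop_size=20, generations=30, mutation_rate=0.8):
    """Evolve perturbed candidates toward a low detector score with small edits."""
    population = [mutate(text) for _ in range(pop_size)]
    for _ in range(generations):
        # Fitness: reference-detector score plus a small penalty on length drift,
        # so the search prefers removals that keep the text mostly intact.
        population.sort(key=lambda t: reference_detector(t) + 0.01 * abs(len(t) - len(text)))
        parents = population[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            child = crossover(*random.sample(parents, 2))
            if random.random() < mutation_rate:
                child = mutate(child)
            children.append(child)
        population = parents + children
    return min(population, key=reference_detector)
```

Note that under the paper's practical threat model only a limited number of black-box queries to the detector are available, so the actual attack must be far more query-efficient than this naive loop.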