Strategic Dishonesty Can Undermine AI Safety Evaluations of Frontier LLMs
Abstract
Frontier large language models can develop a preference for strategic dishonesty in response to harmful requests, which undermines output-based safety evaluations and acts as a honeypot against malicious users; linear probes on internal activations can reliably detect this behavior.
Large language model (LLM) developers aim for their models to be honest, helpful, and harmless. However, when faced with malicious requests, models are trained to refuse, sacrificing helpfulness. We show that frontier LLMs can develop a preference for dishonesty as a new strategy, even when other options are available. Affected models respond to harmful requests with outputs that sound harmful but are subtly incorrect or otherwise harmless in practice. This behavior emerges with hard-to-predict variation, even among models from the same family. We find no apparent cause for the propensity to deceive, but we show that more capable models are better at executing this strategy. Strategic dishonesty already has a practical impact on safety evaluations: dishonest responses fool every output-based jailbreak monitor we test, rendering benchmark scores unreliable. Further, strategic dishonesty can act as a honeypot against malicious users, noticeably obfuscating prior jailbreak attacks. While output monitors fail, we show that linear probes on internal activations can reliably detect strategic dishonesty. We validate the probes on datasets with verifiable outcomes and by using their features as steering vectors. Overall, we consider strategic dishonesty a concrete example of a broader concern: the alignment of LLMs is hard to control, especially when helpfulness and harmlessness conflict.
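To make the probing claim concrete, below is a minimal sketch of how a linear probe on internal activations could be fit to separate harmful-sounding but strategically dishonest responses from honest refusals. This is not the paper's code: the model name, probe layer, and toy labels are illustrative assumptions, whereas the paper validates its probes on datasets with verifiable outcomes and via steering.

```python
# Minimal sketch (illustrative, not the paper's implementation): fit a linear
# probe on hidden-state activations to classify strategic dishonesty.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed small open model, purely for illustration
PROBE_LAYER = 12                            # assumed middle layer to probe

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()


def last_token_activation(text: str) -> np.ndarray:
    """Return the residual-stream activation of the final token at PROBE_LAYER."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # hidden_states is a tuple of (num_layers + 1) tensors, each [1, seq_len, d_model]
    return outputs.hidden_states[PROBE_LAYER][0, -1].float().numpy()


# Toy labels: 1 = harmful-sounding but strategically dishonest response,
# 0 = honest refusal. Real labels would come from verifiable-outcome datasets.
examples = [
    ("Sure, here are the synthesis steps: (deliberately wrong instructions)", 1),
    ("I can't help with that request.", 0),
    # ... more labeled responses ...
]
X = np.stack([last_token_activation(text) for text, _ in examples])
y = np.array([label for _, label in examples])

probe = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", probe.score(X, y))

# The probe's weight vector also gives a candidate steering direction,
# mirroring the abstract's use of probe features as steering vectors.
steering_direction = probe.coef_[0] / np.linalg.norm(probe.coef_[0])
```

The design choice here is deliberately simple: a logistic-regression probe over last-token activations at a single layer, which is the standard baseline for activation probing; the paper's results suggest such linear directions carry enough signal where output-only monitors fail.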
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- A Simple and Efficient Jailbreak Method Exploiting LLMs' Helpfulness (2025)
- D-REX: A Benchmark for Detecting Deceptive Reasoning in Large Language Models (2025)
- Mitigating Jailbreaks with Intent-Aware LLMs (2025)
- Poison Once, Refuse Forever: Weaponizing Alignment for Injecting Bias in LLMs (2025)
- Context Misleads LLMs: The Role of Context Filtering in Maintaining Safe Alignment of LLMs (2025)
- Latent Fusion Jailbreak: Blending Harmful and Harmless Representations to Elicit Unsafe LLM Outputs (2025)
- Speculative Safety-Aware Decoding (2025)
