LLMs are Vulnerable to Malicious Prompts Disguised as Scientific Language • arXiv:2501.14073 • Published Jan 23, 2025
FactCheckmate: Preemptively Detecting and Mitigating Hallucinations in LMs • arXiv:2410.02899 • Published Oct 3, 2024