DyCodeEval Collection • DyCodeEval (ICML 2025) enables dynamic benchmarking for code LLMs. This collection features dynamic HumanEval and MBPP sets generated with Claude 3.5. • 3 items • Updated Jun 27 • 4
Dynamic Benchmarking of Reasoning Capabilities in Code Large Language Models Under Data Contamination Paper • 2503.04149 • Published Mar 6, 2025 • 6
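For orientation, below is a minimal sketch of how one of the collection's dynamic evaluation sets might be loaded with the Hugging Face datasets library. The Hub ID and field names here are assumptions for illustration only; consult the dataset cards in the DyCodeEval collection for the actual identifiers and schema.

```python
# Minimal sketch: loading a dynamic benchmark set from the Hub.
# The dataset ID and field names below are hypothetical placeholders;
# check the dataset cards in the DyCodeEval collection for the real ones.
from datasets import load_dataset

# Hypothetical Hub ID (assumption), standing in for a dynamic HumanEval set.
ds = load_dataset("org/DyCodeEval-HumanEval", split="test")

# Inspect a few problems; "task_id" and "prompt" mirror the original
# HumanEval schema and are likewise assumptions here.
for problem in ds.select(range(3)):
    print(problem["task_id"], problem["prompt"][:80])
```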
RED QUEEN: Safeguarding Large Language Models against Concealed Multi-Turn Jailbreaking Paper • 2409.17458 • Published Sep 26, 2024 • 1
SRPO: Enhancing Multimodal LLM Reasoning via Reflection-Aware Reinforcement Learning Paper • 2506.01713 • Published Jun 2, 2025 • 48
A Comprehensive Survey in LLM(-Agent) Full Stack Safety: Data, Training and Deployment Paper • 2504.15585 • Published Apr 22, 2025 • 12
Euclid: Supercharging Multimodal LLMs with Synthetic High-Fidelity Visual Descriptions Paper • 2412.08737 • Published Dec 11, 2024 • 54