
Single Line of Code Can jailbreak 11 AI models Including ChatGPT, Claude, and Gemini
A newly detailed jailbreak technique known as “sockpuppeting” allows attackers to bypass the safety guardrails of 11 major large language models (LLMs) using a single line of code. Unlike complex attacks, this method exploits APIs that support assistant prefill to inject fake acceptance messages, forcing models to answer prohibited requests. The attack exploits “assistant prefill,” […] The post Single Line of Code Can jailbreak 11 AI models Including ChatGPT, Claude, and Gemini appeared first on Cyber Security News.