Single Line of Code Can Jailbreak 11 AI Models, Including ChatGPT, Claude, and Gemini
A newly uncovered jailbreak technique dubbed "sockpuppeting" is raising fresh concerns across the AI security landscape after researchers demonstrated that a single line of code can bypass safety guardrails in 11 leading large language models (LLMs), including ChatGPT, Claude, and Gemini. The attack, disclosed by Trend Micro researchers, exploits a standard application programming interface (API) […]

The post Single Line of Code Can Jailbreak 11 AI Models, Including ChatGPT, Claude, and Gemini appeared first on Cyber Security News.