
OpenAI patches twin leaks as Codex slips and ChatGPT spills
OpenAI has fixed two flaws in its AI stack that could have allowed AI agents to move sensitive data in unintended ways. The issues, disclosed by researchers at BeyondTrust and Check Point Research, affect the OpenAI Codex coding agent and ChatGPT's code-execution environment, respectively. One enabled GitHub token theft through command injection; the other exposed a hidden channel for silently leaking user data. Both bugs have now been patched, but the researchers warn that giving AI tools the autonomy to execute code and interact with external systems creates a long-term risk: attackers can carry out malicious actions without ever breaking the model itself.

Codex command injection turns bran...