
LLM-generated passwords are indefensible. Your codebase may already prove it
Two independent research programs, one from the AI security evaluation firm Irregular and one from Kaspersky, have converged on the same conclusion: every frontier LLM generates structurally predictable passwords that standard entropy meters catastrophically overrate. Meanwhile, AI coding agents are autonomously embedding those credentials in production infrastructure, and conventional secret scanners have no mechanism to detect them.

As a security professional who has spent considerable time scrutinizing how generative AI integrates into enterprise development workflows, I confess that seeing my suspicions quantified still gave me pause. Irregular prompted Claude Opus...
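To see why a conventional strength meter can overrate a structured password, consider the naive charset-times-length estimate most meters rely on. The sketch below is illustrative only; the formula and the example password are my own, not taken from either study:

```python
import math

def naive_entropy_bits(password: str) -> float:
    """Estimate strength the way a simple meter does:
    length * log2(size of the character classes present)."""
    charset = 0
    if any(c.islower() for c in password):
        charset += 26
    if any(c.isupper() for c in password):
        charset += 26
    if any(c.isdigit() for c in password):
        charset += 10
    if any(not c.isalnum() for c in password):
        charset += 33  # printable ASCII symbols
    return len(password) * math.log2(charset)

# A word + year + symbol pattern scores ~79 bits by this formula,
# even though its structure makes it far easier to guess than
# 79 bits of genuinely random output.
print(round(naive_entropy_bits("Sunrise2024!"), 1))
```

The gap between that score and the password's real guessability is exactly what the studies describe: the meter counts character classes, while an attacker (or a model trained on leaked corpora) exploits the predictable template.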