
Stopping the quiet drift toward excessive agency with re-permissioning
In their infancy, LLMs were not difficult to contain. You gave a prompt; the model responded, and if something was wrong it was usually “just text”: a summary that missed the best bits, a tone-deaf line, or a wordy sentence. Then LLMs became the core reasoning layer inside AI agents, and the game changed overnight. Agents connect to databases and business applications, interact with external systems, and execute multi-step tasks. So the question is no longer only, “How capable is the model?” The more important question, I believe, is, “How are AI agents being treated and permissioned inside your environment?” The failures that sting aren’t limited to moments ...
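To make the permissioning question concrete, here is a minimal sketch of the idea, assuming a deny-by-default allowlist of (tool, action) pairs per agent. The names `AgentPermissions` and `run_tool` are hypothetical illustrations, not any particular framework's API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPermissions:
    """Explicit, reviewable allowlist of (tool, action) pairs for one agent."""
    granted: set = field(default_factory=set)

    def grant(self, tool: str, action: str) -> None:
        self.granted.add((tool, action))

    def revoke(self, tool: str, action: str) -> None:
        self.granted.discard((tool, action))

    def check(self, tool: str, action: str) -> bool:
        return (tool, action) in self.granted


def run_tool(perms: AgentPermissions, tool: str, action: str) -> str:
    # Deny by default: the agent reaches only systems it was explicitly
    # re-permissioned for, rather than everything it is wired to.
    if not perms.check(tool, action):
        raise PermissionError(f"agent lacks {action!r} on {tool!r}")
    return f"executed {action} on {tool}"


perms = AgentPermissions()
perms.grant("crm_db", "read")
print(run_tool(perms, "crm_db", "read"))  # allowed
# run_tool(perms, "crm_db", "write")      # would raise PermissionError
```

The point of the sketch is the default: capability is granted per action and can be revoked, so an agent's reach is an explicit decision rather than a side effect of what it happens to be connected to.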