
The democratization of AI data poisoning and how to protect your organization
Smart organizations have spent the last three years protecting their AI tools from skilled prompt-injection-style attacks. The assumption has been that poisoning the foundation model, the real brains behind an AI system, requires technical expertise, privileged access, or a coordinated threat group. That assumption no longer holds, and the change marks a significant shift in how organizations need to think about AI security in general and training-data sanitization in particular.

Recent evidence shows that roughly 250 poisoned documents or images can distort the behavior of a large language model, regardless of its size. That is a far cry from the prior assumption that an attacker would need thousands or even millions of samples.