Adversaries Exploit Proprietary AI Capabilities and API Traffic to Scale Cyberattacks
In the fourth quarter of 2025, the Google Threat Intelligence Group (GTIG) reported a significant uptick in the misuse of artificial intelligence by threat actors. According to GTIG’s AI threat tracker, what initially appeared to be experimental probing has evolved into systematic, repeatable exploitation of large language models (LLMs) to enhance reconnaissance, phishing, malware development, and post-compromise activity.
A notable trend identified by GTIG is the rise of model extraction attempts, or “distillation attacks.” In these operations, threat actors systematically query production models to replicate proprietary AI capabilities without directly compromising internal networks. Usin...
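From a defender's perspective, distillation-style traffic tends to stand out in API logs: extraction campaigns combine unusually high query volume with broad, largely non-repeating prompts as the attacker sweeps the model's input space. The sketch below is a minimal, hypothetical heuristic (the function name, thresholds, and log format are all assumptions, not any vendor's actual detection logic) that flags clients matching that pattern.

```python
from collections import defaultdict

def flag_distillation_suspects(logs, volume_threshold=1000, diversity_threshold=0.9):
    """Flag API clients whose traffic resembles systematic model extraction:
    very high query volume combined with near-unique prompts.

    `logs` is an iterable of (client_id, prompt) pairs. Thresholds are
    illustrative placeholders, not tuned values.
    """
    counts = defaultdict(int)          # total queries per client
    unique_prompts = defaultdict(set)  # distinct prompts per client
    for client_id, prompt in logs:
        counts[client_id] += 1
        unique_prompts[client_id].add(prompt)

    suspects = []
    for client_id, n in counts.items():
        diversity = len(unique_prompts[client_id]) / n  # fraction of distinct prompts
        if n >= volume_threshold and diversity >= diversity_threshold:
            suspects.append(client_id)
    return suspects

# Hypothetical log sample: a normal client repeats a handful of prompts,
# while the extraction client issues thousands of distinct probes.
logs = [("app-1", f"q{i % 5}") for i in range(200)]
logs += [("scraper-9", f"probe-{i}") for i in range(2000)]
print(flag_distillation_suspects(logs))  # → ['scraper-9']
```

Real detections would weigh many more signals (timing, account age, prompt semantics), but the volume-plus-diversity combination captures the core traffic shape GTIG attributes to these campaigns.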