
Critical vLLM Flaw Exposes Millions of AI Servers to Remote Code Execution
A newly disclosed security flaw has placed millions of AI servers at risk after researchers identified a critical vulnerability in vLLM, a widely deployed Python package for serving large language models.
The issue, tracked as CVE-2026-22778 (GHSA-4r2x-xpjr-7cvv), enables remote code execution (RCE) when an attacker submits a malicious video URL to a vulnerable vLLM API endpoint. The vulnerability affects vLLM versions 0.8.3 through 0.14.0 and was patched in version 0.14.1.
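For operators triaging exposure, the advisory's affected range can be checked programmatically. The sketch below is illustrative only, assuming plain dotted version strings (it does not handle pre-release suffixes); in practice the installed version would come from `vllm.__version__` or `pip show vllm`.

```python
# Minimal sketch: test whether a vLLM version string falls inside the
# affected range reported for CVE-2026-22778 (0.8.3 through 0.14.0).

def parse_version(v: str) -> tuple:
    """Parse a simple dotted version string into a tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def is_vulnerable(version: str) -> bool:
    """Return True if the version is within the advisory's affected range."""
    v = parse_version(version)
    return parse_version("0.8.3") <= v <= parse_version("0.14.0")

print(is_vulnerable("0.13.2"))  # True: inside the affected range
print(is_vulnerable("0.14.1"))  # False: the patched release
```

Tuple comparison keeps the check correct across minor-version boundaries (for example, 0.9.x compares below 0.14.0 even though "9" > "1" as a string).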
The disclosure was published as breaking news and the story is still developing, with additional technical details expected as the investigation continues. Given vLLM's scale of adoption, reportedly exceeding three million downloads per...