
Meta’s deepfake moderation isn’t good enough, says Oversight Board
Meta’s Oversight Board wants the company to start taking AI labeling seriously to protect its users from online misinformation. | Image: Cath Virginia / The Verge, Getty Images
Meta's methods for identifying deepfakes are "not robust or comprehensive enough" to handle how quickly misinformation spreads during armed conflicts like the Iran war. That's according to the Meta Oversight Board, a semi-independent body that guides the company's content moderation practices, which is now calling on Meta to overhaul how it surfaces and labels AI-generated content across Facebook, Instagram, and Threads. The call to action stems from an investigation into a fake AI video of alleged damage to buil...