Meta is dealing with an internal AI security incident after a rogue agent reportedly exposed sensitive company information to employees who lacked proper access. The issue surfaced when an engineer used an AI system to analyse a technical query.
The incident reportedly occurred inside Meta's internal systems and continued for two hours, raising fresh concerns about the security risks posed by uncontrolled AI agents.
According to reports, a Meta employee posted a routine technical inquiry on an internal platform. Another engineer then asked an AI agent to help interpret the query, and the system produced an unexpected output that answered the problem incorrectly.
The response also accidentally disclosed a large amount of confidential internal information to engineers who lacked proper authorisation to see it. Meta classified the issue as a “Sev 1”, marking it as one of its more serious internal security incidents.
Meta spokesperson Tracy Clayton told The Verge that no user data was compromised, and that the employee understood they were interacting with an automated system because the AI responded to their requests.
Clayton added that the incident would not have occurred had employees verified the AI system's output more thoroughly before acting on it.
Similar issues have reportedly occurred within Meta before, and Summer Yue, Director of Meta Superintelligence, Safety and Alignment, has also publicly raised related concerns.