Meta’s internal memos reveal how executives ignored safety warnings to push the Messenger encryption rollout despite risks to teen safety
The revelations come as Meta faces a wave of litigation and regulatory scrutiny regarding the safety and welfare of young users across its platforms.
Meta executives proceeded with the plan to encrypt the messaging services connected to its Facebook and Instagram apps despite internal warnings raised during the development of end-to-end encryption for Messenger, a move that curtailed the social media giant’s ability to flag child-exploitation cases to law enforcement.
A filing made public on Friday shows that emails, messages and briefing documents obtained during discovery for a lawsuit brought by New Mexico Attorney General Raul Torrez shed new light on the company’s internal analysis of the plan’s impact, and on how senior policy executives viewed it at the time. Torrez alleges that Meta allowed predators unrestricted access to underage users and connected them with victims, often leading to real-world abuse and human trafficking.
The New Mexico filings show that senior Meta safety executives shared these fears. While Mark Zuckerberg publicly claimed the company was addressing the plan’s risks, his safety and policy teams were expressing concerns behind the scenes. According to a February 2019 email, internal briefings estimated that the company’s total reports of child nudity and sexual exploitation would have plummeted from 18.4 million to just 6.4 million if Messenger had been encrypted, a roughly 65% drop.
Special protocols required to bolster safety standards
The concerns originated with Jennifer Bickert and Antigone Davis, Meta’s Global Head of Safety, who pushed for additional safety features before the 2023 launch of encrypted messaging on Facebook and Instagram. Messages are now encrypted by default, though users can still report objectionable content to Meta for review. The company is also working to develop specialized accounts for minors to prevent unauthorized adults from initiating contact. Meanwhile, safety executives warned that children could be groomed on Meta’s semi-public social media platforms and subsequently exploited via its private messaging service. By contrast, they noted that WhatsApp carries fewer risks because it is not directly connected to a social discovery platform.