Instagram will begin alerting parents when teenagers repeatedly search for suicide or self-harm terms. Meta will notify parents through its parental supervision tools, sending alerts via email, text message, WhatsApp and the Instagram app.
The alerts are designed to help parents support their teens and provide resources for mental health assistance.
The alerts are triggered if teens search for concerning phrases within a short period, including “suicide” or “self-harm”, or language suggesting they may want to harm themselves. Meta calls this a “starting point”, acknowledging that some alerts may not indicate real danger. Both parents and teens must opt into Instagram’s supervision tools for the system to function.
The rollout comes as Meta faces multiple trials alleging that platforms like Instagram harm young users’ mental health. Experts have compared these cases to the social media industry’s “big tobacco” moment. Meta CEO Mark Zuckerberg recently testified that app makers rely on operating systems and app stores for age verification.
Meta is reportedly planning to extend these notifications to its other platforms, alerting parents if teens are having conversations about suicide or self-harm with the company’s chatbots.
Amid growing concerns about AI and teen mental health, the company is also working on a new AI model, codenamed Avocado, which is expected to be released later this year.
Parents who receive alerts will get explanations of their teen’s search behaviour and access to additional resources. Meta emphasises that the alerts are intended as guidance, helping guardians intervene early if needed.