Meta has announced plans to add more parental controls to restrict teenagers' private interactions with AI chatbots.
Under a new policy shift set to take effect early next year, parents will be able to turn off one-on-one chats with AI characters.
However, parents will not be able to turn off Meta's AI assistant.
The assistant will "remain available to offer helpful information and educational opportunities, with default, age-appropriate protections in place to help keep teens safe," Meta said.
Parents will also be given the option to block teens' interactions with specific chatbots.
According to Meta, although parents will not get complete access to chats, they will get "insights" into the topics their kids are discussing with the bots.
The move comes as Meta, which owns Facebook and Instagram, faces criticism over the behaviour of its flirty chatbots.
AI chatbot makers are also facing lawsuits filed by parents who have raised concerns over their children's safety.
US lawmakers have also stepped up scrutiny of AI companies over the harmful impacts of AI bots. As reported by Reuters, Meta's AI rules allowed provocative chats with minors.
Earlier this week, Meta also announced a PG-13 version of Instagram for users under 18.
Similarly, last month, OpenAI launched parental controls for ChatGPT after being hit with a lawsuit filed by the parents of a teenager who died by suicide after allegedly being coached on self-harm by the chatbot.