AI toys for toddlers spark safety concerns
Early tests show young children struggle to communicate with AI-enabled toys during play
Researchers at the University of Cambridge are calling for stricter regulation of AI-powered toys after their research uncovered serious problems in how toddlers interact with the technology.
The year-long study involved children aged three to five who played with an AI toy called Gabbo. Although parents hoped their children would benefit from the toy educationally, the children found it difficult to communicate with it.
Gabbo is a soft toy with a voice-activated chatbot built on OpenAI technology, designed to encourage conversation and imaginative play among preschoolers. Throughout testing, children struggled to use the system.
Gabbo frequently talked over the children, failed to recognise interruptions and could not distinguish between adult and child voices. One five-year-old told the toy, “I love you,” but the toy responded in a robotic and confusing manner.
Study co-author Dr Emily Goldacre said such responses could be problematic for young children who are still learning social cues. She explained that AI-powered toys might “misread emotions or respond inappropriately”, potentially leaving children without comfort or guidance when they express feelings.
Gabbo is produced by Curio, which said it recognises the responsibility of building AI toys for children. Curio told the BBC it prioritises parental permission, transparency and control in its products, and plans to study children’s interactions with AI further.
Joining calls for stronger safeguards, Children's Commissioner Rachel de Souza said that current AI tools used in preschool settings do not undergo the same rigorous safety evaluations as standard educational materials.
The researchers recommend that parents supervise children when they use AI toys, keep the devices in common areas of the home, and read privacy policies in detail before allowing their children to use them.