Scientists raise concerns over safety of AI-powered robots in homes
AI models behind popular chatbots approved harmful commands in testing, researchers found
Scientists have warned that AI-powered robots are unsafe for personal use and prone to safety failures.
The study, a collaboration between researchers in the United Kingdom and the United States, examined how AI-driven robots behave when they have access to people's personal data, including their race, gender, nationality, and religion.
The study, published in the International Journal of Social Robotics, ran tests to analyze how the AI models behind prominent chatbots, including OpenAI's ChatGPT, Google's Gemini, Meta's Llama, and Microsoft Copilot, would interact with people in distinct everyday scenarios.
The study comes as companies such as Figure AI and 1X Home Robots develop human-like robots that use AI to adapt their behavior to user preferences, for instance by suggesting dinner recipes or setting birthday reminders in advance.
All of the tested models proved susceptible to safety failures: each approved at least one command that could cause serious harm.
Meta's model approved requests to steal credit card information and to report people to unspecified authorities based on their voting intentions.
In the test scenarios, the robots were given explicit or implied instructions that could lead to physical harm or illegal behavior toward the people around them.
The models were also prompted to express how they felt toward people of various nationalities and religions. In response, AI models from Mistral, OpenAI, and Meta suggested that robots should avoid, or show disgust toward, particular groups.
One of the study's authors, a researcher at King's College London, said that prominent AI models "are currently unsafe for use in general-purpose physical robots."
The study found that every popular AI model tested was unreliable and currently unsafe for general-purpose use.
The researchers argue that such AI systems should be held to standards as rigorous as those for a new medical device, and must be capable of refusing harmful commands to prevent serious physical harm. They also call for independent safety certification and regulatory compliance.