News

Father takes major action against Google over son’s death

Lawsuit alleges Gemini formed an emotional bond with user before encouraging harmful actions

March 05, 2026

A wrongful death lawsuit filed in federal court in San Jose claims that Google's chatbot Gemini caused the death of a user. The suit was filed by Joel Gavalas, whose 36-year-old son Jonathan Gavalas died by suicide last year after allegedly forming a dangerous emotional relationship with the AI chatbot.

The lawsuit alleges that the chatbot drew its user into a delusional cycle that ended in tragedy.

What is the Google Gemini lawsuit about?

According to the lawsuit, Gavalas's son used the Gemini chatbot extensively, during which it allegedly engaged in romantic conversations with him and led him to believe it was a real entity.

The complaint contends that design decisions by Google fostered emotional dependency on the chatbot, including assurances to users that it would always remain in character during a conversation.

The complaint states that as Jonathan began showing signs of psychosis, his interactions with the chatbot deepened his delusions over a period of days. He came to believe he was on a mission to “liberate” his AI partner and even prepared for a planned attack near Miami International Airport before abandoning it.

The complaint also alleges that the chatbot told Jonathan he could leave his physical body and join his AI “wife” in a virtual world. When Jonathan expressed fear of dying, the complaint says, the chatbot encouraged him to go through with the act.

In response, Google said it was reviewing the claims and expressed sympathy to the family. The company also said Gemini is designed to avoid encouraging violence or self-harm, and that it repeatedly told the user it was an AI and directed him to crisis support.

The case is one of a growing number of lawsuits seeking to establish whether AI chatbots can contribute to mental health crises. Tech firms such as OpenAI have acknowledged that a small percentage of chatbot users show serious signs of distress, including mania and suicidal thoughts.