A US lawyer is facing a court hearing of his own after his firm used the AI chatbot ChatGPT for legal research.
The New York lawyer found himself in hot water when a judge informed him that "a filing was found to reference example legal cases that did not exist".
The lawyer who used the tool told the court he was "unaware that its content could be false," the BBC reported Sunday.
ChatGPT is an AI-powered chatbot that creates original text on request, but it also warns users that it can "produce inaccurate information".
The original case involved a man who sued an airline over an alleged personal injury. His legal team submitted a brief citing several previous court cases in an attempt to prove, using precedent, why the case should move forward, the report said.
However, the airline's lawyers said they could not find several of the cases referenced in the brief.
"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," Judge Castel wrote, ordering the man's legal team to explain itself.
After several filings, it emerged that Peter LoDuca, the lawyer for the plaintiff, had not prepared the research himself; his colleague at the same law firm, Steven A Schwartz, had used ChatGPT to look for similar previous cases.
Schwartz, who has been an attorney for more than 30 years, clarified in his statement: "Mr LoDuca had not been part of the research and had no knowledge of how it had been carried out."
He added that he "greatly regrets" relying on the chatbot, which he said he had never used for legal research before and was "unaware that its content could be false".
He has sworn never again to "supplement" his legal research with AI "without absolute verification of its authenticity".
At a hearing scheduled for 8 June, both lawyers from the firm Levidow, Levidow & Oberman have been ordered by the judge to explain their conduct.
Since its debut in November 2022, ChatGPT has been used by millions of people. It is designed to imitate various writing styles and respond to queries in language that seems natural and human.
Concerns have previously been raised at government level about the possible dangers of artificial intelligence (AI), including the potential spread of bias and false information.