AI prompts spark warning: Your chats could be used against you
Following the latest rulings, legal experts warn against relying on chatbots for advice, especially in medical, legal and financial matters
As the world turns to artificial intelligence (AI), experts warn that people should not rely on AI chatbots for advice, especially in medical, legal or financial matters.
Some U.S. lawyers are telling their clients not to treat AI chatbots like trusted confidants when their freedom or legal liability is on the line.
These warnings became more urgent after a federal judge in New York ruled this year that the former CEO of a bankrupt financial services company could not shield his AI chats from prosecutors pursuing securities fraud charges against him.
In the wake of the ruling, attorneys have been advising that conversations with chatbots like Anthropic's Claude and OpenAI's ChatGPT could be demanded by prosecutors in criminal cases or by litigation adversaries in civil cases.
"We are telling our clients: You should proceed with caution here," said Alexandria Gutiérrez Swette, a lawyer at New York-based law firm Kobre & Kim.
People's discussions with their lawyers are almost always deemed confidential under U.S. law.
But AI chatbots are not lawyers, and attorneys are instructing clients to take steps that could keep their communications with AI tools more private.
In emails to clients and advisories posted on their websites, more than a dozen major U.S. law firms have outlined advice for people and companies to decrease the chances of AI chats winding up in court.
Similar warnings are also appearing in some firms' engagement agreements with their clients.
For instance, one New York-based firm stated in a recent client contract that sharing a lawyer's advice or communications with a chatbot could erase attorney-client privilege, the legal protection that usually shields communications between lawyers and their clients.
Voluntarily revealing information from a lawyer to any third party can jeopardize the customary legal protections for those attorney communications.
Courts are already grappling with the growing use of artificial intelligence by lawyers and by people representing themselves in legal cases, which among other things has led to legal filings citing made-up cases invented by AI.
In February, a Manhattan-based U.S. district judge ruled that clients must hand over all documents generated by Anthropic's chatbot Claude related to their cases.
No attorney-client relationship exists "or could exist, between an AI user and a platform such as Claude," the judge wrote.
Moreover, the lawyers emphasized that ChatGPT and other generative AI programs "are tools, not persons."
Representatives for OpenAI and Anthropic did not immediately respond to the claims. Meanwhile, the privacy and usage terms for both companies state that they may share user data with third parties.
Notably, both platforms also state that users should consult a qualified professional before relying on their chatbots for legal advice.
