News

What you should know before asking an AI chatbot about your health

Experts say AI health chatbots can help explain medical information but should not replace doctors

March 03, 2026

As more people turn to AI chatbots for medical information, tech giants like OpenAI and Anthropic have begun developing their own health products. OpenAI launched ChatGPT Health in January, which draws on medical records, wearable data and wellness-app data to answer health questions.

Anthropic offers similar features through its Claude chatbot. Both companies emphasise that their AI health chatbots cannot replace medical professionals and should not be used for diagnosis.

AI health chatbots

Some experts believe AI tools produce better results than random internet searches. Dr Robert Wachter of the University of California, San Francisco, says that, used responsibly, they can deliver valuable personalised information.

The chatbots can help users in several ways: explaining test results, spotting patterns in health data, and compiling lists of questions to bring to a doctor.

Answers become more precise when users share details such as their age, current medications and symptoms. Even so, experts caution that AI systems can generate false or misleading information.

Dr Lloyd Minor, dean of the Stanford University School of Medicine, recommends that users maintain a healthy level of scepticism when using AI tools. Patients with serious symptoms such as chest pain, shortness of breath or severe headaches should seek emergency medical help rather than consult a chatbot.

A large language model (LLM) should never be the sole source of information for important health decisions.

Users should also understand that medical data shared with AI companies is not protected under HIPAA, the federal health privacy law. Both OpenAI and Anthropic say they keep health data out of model training, but their privacy practices differ from the standards that govern hospitals and doctors.

A 2024 study by University of Oxford researchers found that chatbot users made no better health choices than people who relied on search engines or their own judgement. The AI recognised medical conditions in written scenarios 95% of the time but struggled in interactions with actual people.