Health was a dominant topic for Copilot mobile users in 2025, marking a significant shift as conversational AI becomes increasingly important to many people's medical journeys. Users seek answers customized to their specific circumstances, so the quality of these AI-powered medical interactions is vital to individual wellbeing. In a recent LinkedIn post, Mustafa Suleyman shared results from a new paper showing that nearly 1 in 5 conversations involve users describing symptoms, trying to understand personal test results, or seeking advice on managing ongoing conditions.
Conversational AI refers to technologies, such as chatbots and virtual assistants, that enable machines to understand, process, and respond to human text in a human-like manner. Unlike a web search, the chat interface supports multi-turn dialogue in which users can ask a question and add context, producing responses tailored to their specific situation. This marks a new era in which individuals seek assistance from chatbots and receive answers in a natural, human-like way.
Copilot is not intended to replace professional medical advice; however, it is crucial to understand its role in bridging the gap between user inquiries and professional care. As a modern channel for health information, conversational AI is seeing usage patterns that are likely to continue evolving. This paper extends the methodology of the Copilot Usage Report by including sampled conversations from January 2026 across all enterprise, educational, and commercial accounts. The sample is global, with approximately 22% of conversations originating from the United States and the remainder distributed worldwide.
According to the research team, there is a significant responsibility to provide accurate answers. Our health team focuses on grounding Copilot responses in credible medical sources and helping users find providers for real-life care. The researchers observed that the emotional wellbeing pattern deserves particular attention: emotional health queries increase in the evening, and this pattern is likely multiply determined. People often have more time for personal reflection in the evening, and the reduced availability of professional support at that hour may itself prompt queries that would otherwise be directed to a clinician.
One in seven queries asked about symptoms and conditions on behalf of someone else, such as a child, an aging parent, or a partner. This finding reframes how we should think about health AI users: the person typing is not always the person the query is about. It also carries further safety implications, as information provided about a dependent may be less accurate or complete. Future research should explore how health AI usage differs across regions and healthcare systems; in particular, comparing settings with strong primary care access to those without will be essential for responsible global deployment. Ultimately, this study identifies the categories where the consequences of conversational AI responses are highest and where investment in determining response quality must be prioritized.