Technology

AI finally learns to say ‘I don’t know’ in new way to tackle chatbot hallucinations

AI models, including OpenAI's ChatGPT, are prone to hallucinations, confidently presenting fabricated information as fact

Published May 11, 2026

South Korean researchers have developed a novel training method that enables AI models to admit unfamiliarity with a topic instead of giving wrong answers.

The method marks a major step forward amid growing concern over chatbot hallucinations. Driven by overconfidence, current AI models often make up facts because they are incentivized to provide answers rather than admit ignorance.


This overconfidence is particularly dangerous in high-stakes fields like autonomous driving and medical diagnosis, where an incorrect but "confident" answer can have dire consequences.

Warm-up training phase

At the heart of this breakthrough lies a new technique named “warm-up training”, which mirrors human brain development. In this phase, the AI’s neural network is exposed to random noise inputs before it learns from actual data.

To develop the method, the researchers used the human brain as a textbook, studying how it generates internal signals without external input to help manage uncertainty.

“While conventional models tend to give incorrect answers with high confidence even for data they have not encountered during training, models with warm-up training showed a clear improvement in their ability to lower confidence and recognise that they ‘do not know’,” researchers said.

This process is designed to teach the model a baseline state of "I don't know anything yet," effectively establishing a low confidence level and reducing overconfidence bias.
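In practice, a warm-up phase of this kind could look like the minimal sketch below, which pre-trains a classifier to give uniform, low-confidence predictions on pure noise before any real data is seen. The architecture, loss, and hyperparameters are illustrative assumptions for the sake of the example, not the published implementation.

```python
# Minimal sketch of a "warm-up training" phase, assuming a PyTorch classifier.
# The network, loss, and settings here are illustrative guesses, not the
# authors' actual recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_CLASSES),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Phase 1: warm up on random noise. The target is the uniform distribution
# over classes, so the network learns a low-confidence "I don't know" baseline.
uniform_target = torch.full((64, NUM_CLASSES), 1.0 / NUM_CLASSES)
for step in range(200):
    noise = torch.rand(64, 1, 28, 28)            # random inputs with no real structure
    log_probs = F.log_softmax(model(noise), dim=1)
    loss = F.kl_div(log_probs, uniform_target, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Phase 2: ordinary supervised training on real data would follow, e.g.:
# for images, labels in train_loader:
#     loss = F.cross_entropy(model(images), labels)
#     ...
```

A model warmed up this way should, in principle, default to low confidence on unfamiliar inputs and only become confident where real training data supports it.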

According to Se-Bum Paik, an author of the study published in the journal Nature Machine Intelligence, "This study demonstrates that by incorporating key principles of brain development, AI can recognise its own knowledge state in a way that is more similar to humans.”

"This is important because it helps AI understand when it is uncertain or might be mistaken, not just improve how often it gives the right answer.”

Aqsa Qaddus Tahir
Aqsa Qaddus Tahir is a reporter dedicated to science coverage, exploring breakthroughs, emerging research, and innovation. Her work centres on making scientific developments understandable and relevant, presenting well-researched stories that connect complex ideas with everyday life in a clear, engaging, and informative manner.