Study finds AI can expose hidden identities online
Researchers find that AI models can analyse online activity and link pseudonymous profiles to real individuals
Artificial intelligence may soon make online anonymity harder to maintain. A new study by researchers from Anthropic and ETH Zurich has shown that current AI systems, including large language models (LLMs), can uncover the real identities behind pseudonymous online accounts.
In a recent research paper titled "Large-scale online deanonymisation with LLMs", which has been released as a preprint on arXiv, the authors demonstrate how AI can analyse online text to identify personal information and then connect pseudonymous online profiles with real people.
According to the authors, AI can potentially carry out deanonymisation automatically, a task that previously required manual investigators to spend hours analysing writing style and scattered online clues.
In the study, the AI system analysed public posts and extracted identity signals such as interests, demographic hints, and writing patterns. It then searched for matching profiles online and evaluated whether those clues aligned with real people.
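The pipeline described above can be illustrated with a deliberately simplified sketch. This is not the paper's method; the signal extraction here is a crude word-overlap heuristic, and all names and profiles are hypothetical, chosen only to show the extract-then-score structure:

```python
# Illustrative sketch (not the study's actual system): extract simple
# identity signals from pseudonymous posts, then score candidate
# profiles by how much of the signal they share.
import re
from collections import Counter

def extract_signals(posts):
    """Collect lowercase word frequencies as a crude signal profile."""
    words = re.findall(r"[a-z']+", " ".join(posts).lower())
    stop = {"the", "a", "an", "and", "or", "to", "of", "in", "i", "is"}
    return Counter(w for w in words if w not in stop)

def match_score(signals, profile_text):
    """Fraction of signal mass also present in a candidate profile."""
    profile_words = set(re.findall(r"[a-z']+", profile_text.lower()))
    total = sum(signals.values())
    hit = sum(c for w, c in signals.items() if w in profile_words)
    return hit / total if total else 0.0

# Hypothetical pseudonymous posts and candidate public profiles.
posts = ["Debugging Rust lifetimes again", "Zurich hiking photos this weekend"]
signals = extract_signals(posts)
candidates = {
    "profile_a": "Software engineer in Zurich who loves Rust and hiking",
    "profile_b": "Food blogger based in Lisbon",
}
best = max(candidates, key=lambda p: match_score(signals, candidates[p]))
print(best)  # profile_a: it shares the most signals with the posts
```

A real system would use an LLM to infer richer signals (demographics, interests, stylometry) rather than raw word overlap, but the link-by-evidence-alignment structure is the same.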
The researchers also tested the method on several datasets. One test had the AI system match users of the Hacker News website to their corresponding LinkedIn profiles, even after obvious identifiers such as names and usernames had been removed. Another involved linking pseudonymous Reddit accounts across different communities.
The results showed that the LLM system far surpassed traditional methods. In some tests it achieved up to 68% recall at a precision of about 90%, while the traditional methods used on the same tasks showed almost no success.
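To make those two metrics concrete, here is the standard arithmetic with hypothetical counts (not taken from the paper) that happen to reproduce figures close to the reported ones:

```python
# Precision = TP / (TP + FP): how many proposed links are correct.
# Recall = TP / (TP + FN): how many true links the system finds.
def precision_recall(tp, fp, fn):
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical example: 100 accounts are truly linkable, the system
# proposes 76 links, and 68 of those are correct.
p, r = precision_recall(tp=68, fp=8, fn=32)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.89 recall=0.68
```

High precision with moderate recall means the system misses some accounts but is rarely wrong when it does propose a match, which is what makes such attacks practical.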
The researchers also estimated that identifying a single account with the experimental system would cost between one and four dollars.
