How AI is making the internet sound the same, study reveals
AI is changing how humans write online. A study shows writing diversity plummeted after ChatGPT's release
The internet is starting to sound unnaturally polished. LinkedIn posts read like corporate templates. Instagram captions feel overly structured. Casual emails carry strangely formal phrasing and excessive em dashes.
This is not just because machine-generated text is saturating the internet; humans themselves are learning to write like machines.
A study by the University of Southern California (USC) found a sudden drop in stylistic diversity in writing from science journals, regional newspapers, and social media after the introduction of ChatGPT.
In addition, after analysing more than 740,000 hours of spoken data, researchers at the Max Planck Institute for Human Development found that the use of words such as "delve," "boast," "meticulous," and "comprehend" had increased significantly in speech.
Are humans adopting AI tone?
Large language models optimise for clarity, structure, politeness, and safety. The result is grammatically polished but emotionally flattened text. Sentences transition smoothly. Arguments organise neatly.
Tone remains measured and diplomatic. Over time, humans consuming this polished style subconsciously begin imitating it, a phenomenon linguists call the exposure effect.
According to Professor Morteza Dehghani of USC, people start using language in what they perceive as its ideal form, believing it sounds influential or authoritative. Over time, these styles become embedded in their writing across applications such as Gmail, workplace tools, and social media posts.
These styles eventually seep into the digital spaces people use regularly, reaching even those who have never used ChatGPT in their lives.
Nowadays, individuals deliberately incorporate typos, lowercase text, and loose grammar into their content to prove their humanness. Some even run their own writing through AI detection sites to check whether it contains human-like errors, and make alterations themselves if it does not.
