Technology

Viral AI caricature trend: Is your personal data really safe?

ChatGPT-generated caricature trend sweeps social media

February 07, 2026

The viral AI-generated caricature trend has taken social media by storm, as people use generative chatbots such as ChatGPT to turn their photos into eye-catching caricatures.

Users upload a selfie and a short bio to the chatbot and receive a satirical portrait that exaggerates their physical features and weaves in details about their personality, hobbies and job.

On the surface, the ChatGPT-generated caricature trend seems harmless and amusing. In truth, it is not as harmless as users think.

Cybersecurity experts have warned that feeding a chatbot images and personal information poses privacy risks. When users upload a picture to an AI bot, they are handing over more than a picture.

According to David Grover, senior director for cyber initiatives, the bots save whatever information users upload, and that data eventually ends up in storage. No one knows what the companies will do with it.

“The more users upload personal data in the digital world, the more it is going to be difficult to protect,” Grover said.

Shuya Feng, a UAB cybersecurity researcher and assistant professor, said, “But there are some things you don’t want the model to learn. For example, you upload your image and your bio features are literally there, right? So the color of your eyes and your hair color and these kinds of bioinformations. That can be also learned by this model.”

In the event of an AI data breach, such information could be exploited to access bank accounts or medical records.

Worse, the data could be used for AI-enabled scams and deepfakes.

The surest safeguard is simply not to upload personal data to such chatbots.

“The quick suggestion is that you don’t share the information with the model. So even if you use the AI services, there is an option that I don’t want to share my data with the model, with the model training,” Feng suggested.

Grover, for his part, urged educating people about the risks of AI-powered data breaches.

As artificial intelligence evolves rapidly, privacy and data security are becoming increasingly vulnerable to malicious actors.