The price of on-line validation

Shamama Waqar
November 16, 2025

People need to start questioning what they share, where it goes and why they’re doing it



Somewhere between taking selfies and trusting filters that promise to fix our faces, we seem to have reached a collective decision that artificial intelligence deserves everything we own: our photos, our data, perhaps even our dignity. We present it on a silver platter.

We surrender it with a smile. Who wants to be left out of the latest trend? If the internet were a dinner party, artificial intelligence would be the guest eating for free, while we bring our privacy as the appetiser.

The latest obsession making the rounds online shows just that: people are now generating hyper-realistic AI images of themselves with celebrities through platforms like Google Gemini. A digital hug here, a fake selfie there, and voilà, a moment of fame, without ever meeting the celebrity in person. It’s harmless, right? After all, it’s just a picture; except that it isn’t. In the rush to avoid FOMO, we’ve traded caution for convenience and truth for digital validation.

Beneath this playful illusion lies a darker reality. According to Pew Research Center (2023), more than 55 percent of Americans worry about the spread of altered images and videos online, warning that public trust in visual evidence is crumbling. If we can no longer believe what we see, the problem goes far beyond celebrity edits; it threatens how societies understand truth.

For Pakistan, this is more than a philosophical dilemma; it’s a cultural and moral crisis in the making. Today, it’s AI-generated photos with movie stars and cricketers. Tomorrow, it could be fabricated images of ordinary women without consent, without control and without consequences. In a society where honour and reputation remain deeply gendered, this could fuel harassment and blackmail, even violence.

The Digital Rights Foundation (2022) has warned that non-consensual image-based abuse is one of the fastest-growing forms of online harassment in Pakistan. AI now makes this easier, faster and more convincing.

And yet, we keep clicking “generate.” Why? Because it’s fun; because everyone is doing it; and because the internet tells us to. The normalisation of this supposedly harmless fakery not only blurs ethical lines but desensitises us to what consent actually means. If we accept fake intimacy with public figures, it’s only a matter of time before this behaviour spills over into private spaces where the stakes are much higher.


Beyond ethics, the psychological cost is becoming impossible to ignore. Pakistan’s youth, already burdened with unemployment, inflation and social instability, are now caught in a digital storm where reality and illusion constantly collide.

The pursuit of online validation through filters, edits and now AI-generated fantasies deepens insecurity rather than confidence. A recent study published in the Journal of Mental Health Horizons revealed that excessive social media use among Pakistani university students was strongly linked to anxiety, depressive symptoms and body image distress. When self-worth becomes tied to synthetic perfection, the human mind suffers quietly but deeply.

Social media already amplifies this pressure, rewarding appearances over authenticity. The AI selfie trend simply takes it one step further, manufacturing an entirely false version of success, friendship and fame.

The result is a whole generation chasing digital ghosts while feeling increasingly disconnected from themselves. The emotional cost of this race for relevance (anxiety, isolation and burnout) is being paid silently by millions of young Pakistanis.

So, what can we do before this silver platter turns into a ticking time bomb?

The first step is awareness. People need to start questioning what they share, where it goes and why they’re doing it. Schools and universities must teach young people about AI ethics and data privacy just as they do about social responsibility. Next, companies and policymakers must step up.

Platforms should clearly label AI-generated images and handle user data with transparency, while lawmakers update cybercrime policies to address synthetic abuse before it becomes mainstream.

Perhaps most importantly, we must reclaim control as individuals. Think twice before feeding your identity into a machine for a few likes. Not every trend is worth the trade-off, and not every digital thrill is worth your peace of mind.

AI is not inherently evil. It mirrors our choices. If we keep serving our faces, emotions and privacy to it on a silver platter, we shouldn’t be surprised when it starts knowing us better than we know ourselves. The question isn’t whether AI will reshape our world; it’s whether we’ll still recognise ourselves in its reflection.

So, the next time you hand your face to a machine for a taste of digital fame, ask yourself: is this creativity or complicity? Sometimes, serving yourself on a silver platter doesn’t make you the guest; it makes you the meal.


The writer is a development practitioner working at IBA-CICT


