The AI dilemma
While less terrifying than killer robots, these effects pose risks to society’s stability
When we think of AI dangers, we often imagine dystopian scenarios like The Matrix or Terminator, where rogue machines rebel. But the real threat may lie not in physical destruction, but in how AI subtly reshapes our world.
AI on social media platforms may not be plotting humanity’s downfall, but it is altering how we consume information, interpret it, and make decisions. By personalising content in milliseconds, AI creates echo chambers, radicalises individuals, and amplifies propaganda, especially as influencers endorse products and shape consumer opinion. While less visually terrifying than killer robots, these effects pose significant risks to society’s stability and breed instant consumerism.
AI algorithms are designed to give us what we want – tailoring news feeds, videos, ads, and posts to our personal preferences while optimising for clicks, eyeballs, and engagement. But this hyper-personalisation has created information silos, where individuals are fed a steady stream of content that reinforces their existing beliefs and biases. In the quest for engagement, social media platforms inadvertently foster environments where differing viewpoints are rarely encountered.
Propaganda machines have never had it easier. AI models can deliver misinformation at an unprecedented scale, using sophisticated algorithms to target specific groups and individuals. This capability has already been exploited to influence elections, incite violence, and steer public opinion towards certain products. And while the AI driving these systems may be inanimate and devoid of malevolent intent, its impact can lead, and already has led, to dangerous consequences.
Case in point: the deepfakes of YouTuber Ducky Bhai’s wife that circulated and created havoc, raising questions about how far this technology can be abused. Speaking about deepfakes, Asad Baig, founder of Media Matters for Democracy, a Pakistan-based not-for-profit that works on media literacy and development, said that they pose significant ethical risks, especially in cases of image sabotage, defamation, and privacy violations. “We need clear, rights-based frameworks to address this. Consent must be central. No deepfake should be created or shared without the person’s explicit permission.
“Platforms must take responsibility by developing tools to detect and label deepfakes, ensuring transparency and accountability. Legal penalties should be enforced for the malicious use of this technology, and governments need to collaborate internationally to set ethical standards that protect individuals while maintaining space for innovation.”
He further added that AI-generated content in consumerism poses a significant risk of spreading misinformation, as it can manipulate perceptions and reinforce biases about products and services. “This issue is exacerbated when platforms lack accountability for the content they permit. Tech companies must take responsibility by being transparent about AI-generated content, strengthening moderation systems, and addressing misleading information swiftly. However, caution is needed with laws aimed at curbing misinformation, as overly broad regulations could suppress free expression. A balanced, rights-based approach is essential to protect expression while preventing harm,” said Baig.
AI fascinated storytellers long before it became a technological reality. In 1942, Isaac Asimov's Three Laws of Robotics aimed to prevent robots from harming humans, yet fictional machines often found loopholes, leading to disaster, as seen in 2001: A Space Odyssey and Terminator. More optimistic portrayals, like the AI doctor in Star Trek: Voyager, showcase AI as a helpful tool for humanity, aiding growth and well-being. These hopeful visions contrast with today's ad-driven AI models on social media, where the focus isn't benevolence but selling, making ethical considerations all the more urgent.
In real life, AI is not guided by utopian principles. Instead, it is largely driven by profit motives, with algorithms designed to maximise engagement, often at the expense of truth and integrity. Baig said: “Tech companies bear significant responsibility for the amplification of radical ideologies and hate speech, especially when their AI-driven algorithms prioritise engagement over safety. These platforms, through their design, often reward sensational and polarising content because it drives more clicks and interaction.
“This creates a dangerous cycle where harmful ideologies can thrive unchecked, leading to real-world consequences. They should be required to audit and address algorithmic biases, invest in better content moderation, and be transparent about how their platforms elevate certain types of content. The use of AI tools needs to be more tightly regulated, with clear guidelines on how to prevent the spread of hate speech and radical content.”
He also said that while governments need to push for better regulation and accountability for tech companies, they must be cautious not to overreach with laws that could curtail free speech. A rights-based approach that emphasises both protecting freedom of expression and preventing harm is essential.
AI is already reshaping our minds, societies, and purchase decisions through everyday platforms. The unchecked spread of AI in social media distorts reality. So, the real question is not whether AI is dangerous, but how to mitigate the dangers already present.
The writer is an independent journalist from Karachi.