
Opinion

October 14, 2018


AI: the silent observer

Think of a young man, let’s call him Jack. Jack has just hired a personal assistant, Bob. Bob’s job is to record Jack’s every movement. If Jack goes to buy groceries, Bob dutifully pulls out pen and paper and jots down the time of day, the location of the store, any stopovers along the way, any conversations Jack might have had and the people he spoke with.

In essence, Bob will record every instance, however trivial, of Jack’s life. Now imagine that Bob has been doing this job for two years straight and knows enough about Jack’s life to offer personal recommendations and life advice. So, if Jack decides to watch a movie, Bob intervenes with 10 other suggestions that might be better options given Jack’s recent movie preferences.

Better still, one day Jack comes home from a hard day at work and decides to have food delivered to his house for a change. Bob springs into action, throwing food options on the table, one after the other. He throws in other facts too – outlet ratings, customer reviews, best-case scenarios and worst-case scenarios – for good measure. Jack scratches his head. He really just wanted plain pizza, but Bob has suddenly added all these extra options to the mix, each rivalling the others in terms of cost, palate and time of delivery. And just when Jack is finally about to make what is, after all, a simple decision, Bob comes running back with a bigger list of ‘other things to consider’ in Jack’s price range. If you’re Jack, and you’re hungry, now would be the time you’d want to strangle Bob into oblivion.

But if Bob were invisible – an entity residing in the murkier streams of bits and bytes, of artificially intelligent algorithms – you would be hard pressed to notice the effect he has on you and your life choices. Yet this is the living reality for most of us now, as AI-powered algorithms observe every detail of our digital lives. And the danger is that most of us fail to internalise just how pernicious this state of affairs is, and how much worse it can get in short order. Part of this is due to a failure to understand how our brains work. In our daily lives, we labour under an illusion of cognitive control, or free will – the sense that we author every thought arising in our consciousness and enjoy full autonomy over our actions.

In reality, much of what we experience in waking consciousness is a narrative our mind constructs, based on pattern recognition and inference models running in our brains. This is not mere conjecture; it is well-established cognitive science. It explains how two of us can look at the same thing and perceive it differently. Even objects in space do not appear quite the same to different people, and the effect grows more pronounced as layers of abstraction are introduced to the world around us.

This fact alone – that our brains construct reality – renders all subjective experience, or reality as we perceive it, amenable to outside interference. Put simply, our brains are hackable. This is nothing new. Through the ages, our consciousness has been hacked by glib charlatans, demagogues and sophists. In fact, a whole industry – advertising – is built on the mind’s suggestibility and its manipulation. What is different today is the depth and scale of access that various agents – ad companies, politicians, media personalities and activists – enjoy over our thinking and imagination.

Every click we make today on our smartphones or tablets feeds some AI engine working in the background. It is almost like having a 24/7 therapist who is always by your side, and whose notes capture your deepest, darkest thoughts. Now imagine that this therapist regularly releases intimate knowledge about you to other therapists, researchers, data scientists, brain scientists, clinicians and whole organisations, who use this information in all kinds of ways – an ad agency can now pitch products to you, a lawyer could contact you offering legal aid, and a research scientist could use your problems as a data point in a thesis.

After a few days, brochures conspicuously tailored to your own life events start arriving in your mailbox. You are legitimately upset over this breach of trust, but soon realise that your therapist, or this network of therapists, is not a person or people but some faceless app on your smartphone. Doctor-patient confidentiality goes out the window. While this sounds bizarre, some flavour of this problem is what we’re already contending with today: in the attention economy of social media, we are the raw material for profit-makers everywhere. It is as though invisible probes have been planted in our frontal cortices and we’re all unsuspecting test subjects, obliging our invisible puppeteers every step of the way.

And it is easy to underestimate the impact of all this, because our minds rarely track consequences beyond the concern for one’s individual self or, at most, one’s nuclear family. Second- or third-order effects are rarely captured on any conceptual level until it is already too late.

Take Trump’s ascension to the most powerful office on earth. Who can say with a straight face that social media echo chambers exploiting white rage in America, and campaign bots (quite possibly Russian) curating and disbursing news streams to nudge fence-sitters towards Trump, had nothing to do with his election? And then there is the impact of his election on the global order – a thought we cannot fully entertain without cringing in horror.

Or think about the knock-on effects of excessive social media use on one’s psychology and self-esteem. Social comparison, the constant fear of missing out, instant gratification, perpetual distraction – this is hardly a recipe for a healthy, stable mind. If anything, we are creating human automatons, endlessly browsing virtual spaces online for the next hit of dopamine to get them through the next few minutes. Ask yourselves: is this the reality you want your children, and your children’s children, to grow up in?

The bottom line is not that the world of social media is necessarily evil and must therefore be abandoned at all costs. Even if that were the case, it could not happen. Technology will continue to advance, and we will continue to have more, not fewer, tech innovations at our disposal.

The answer is the regulated use and delivery of technology, such that it matches its users’ capacity to use it responsibly. This will not be easy, but it is the only approach that guarantees a careful stride into a future that is becoming increasingly uncertain and, dare I say, dangerous.

The writer is a freelance contributor.
