Round 1, FIGHT: Never in a million years did I think that I would actually write on an unspoken war between humans and machines. But here we are. Of course, people have been talking about artificial intelligence (AI) for a long time now, but little has been said about this technology’s subtle entry into journalism, a field that directly affects me.
And as I type this, I have this unsettling fear: how will I prove, especially to new readers, that what follows is written by me – 100-per cent ‘human’ generated? What makes my writing different from that of a highly trained AI model, which perhaps went through the work of writers I have not even heard of (and, no, this is not a subliminal dig at the model’s capability of making up source files on its own).
Previously, this was not my main fear; my identity as a writer was intact. In a way, I was (am?) privileged. I have been writing for ages now, and most people who know me know my work as well. For instance, I know my editor can tell this has me written all over it (and if I make a few mistakes with article usage, she will even put a heavy bet on it). Similarly, a friend could tell that, back when I wrote fiction regularly, I had this weird obsession with noting that a shiver ran down my character’s spine; likewise, my cousin would always wonder why my characters’ eyes would ‘shimmer with unshed tears’. This is similar to how the word ‘delve’ is a giveaway that an article is AI-generated (more on this discrimination later, in a separate article).
If these people were to mimic me, they could do so easily because they have gone through my ‘dataset’ (my work) quite a few times (thousands of iterations, if I may add). As someone who loves writing and does it both for passion and out of work obligation, I have two worries. First, I am dissatisfied with the way AI models are being perceived by people, especially with their use in journalism. Second, there is the fear of the unknown that many working millennials share: am I going to become redundant?
There is a mix of hope and alarm. According to a recent report from the Reuters Institute for the Study of Journalism, consumers too are becoming more sceptical of AI integration in newsrooms and believe that, while AI makes news cheaper, it is far less trustworthy. First things first, a round of applause for those consumers because they get us. They understand that news production involves many steps and cannot simply be mimicked by large language models (LLMs). Only a journalist can tell you how it feels when your sources leave you on read while you struggle to file an urgent story.
But all of this does not make a case against the use of AI. As a student of AI, I believe that LLMs are useful assistants. If instructions are given to them correctly, they can perform miracles – and that too in the shortest time possible. For example, a good model would take around four minutes to transcribe a 15-minute-long audio clip. I once tried to train a model to predict the value of ‘y’ in the equation ‘y=2x–1’. I had a small dataset, and I ran the model through it over a thousand times, yet its predictions were still not entirely accurate. So, when I put ‘x’ as 10 (and I know the answer should be 19), the model came up with 18.88889 – almost correct, but not quite. This does not mean that the model’s accuracy could not be improved. But, yes, we can agree on the limitations of the machine – like how an LLM told me that it could not copy text and paste it onto an image without making mistakes.
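For readers curious what that little experiment looks like under the hood, here is a minimal sketch of fitting ‘y=2x–1’ with plain gradient descent. This is an illustration, not my actual setup: the dataset, learning rate and iteration count are all assumptions, and it shows why a trained model lands near 19 rather than exactly on it.

```python
# Toy experiment: learn y = 2x - 1 from a handful of points.
# (Illustrative values throughout; not the author's original code.)
xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x - 1 for x in xs]  # true targets: -3, -1, 1, 3, 5, 7

w, b = 0.0, 0.0           # the model: y_hat = w*x + b
lr = 0.01                 # learning rate
for _ in range(1000):     # 'over a thousand' passes through the data
    # gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

pred = w * 10 + b         # ask for x = 10; the exact answer is 19
print(round(pred, 5))     # close to 19, but rarely exactly 19
```

With fewer passes, a smaller dataset or a noisier one, the prediction drifts visibly off the mark – which is how an answer like 18.88889 comes about.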
What such models are also incapable of doing is getting the proper context. They can easily miss story angles and ideas (especially models that are not fine-tuned on local data) and would perhaps be less helpful than the newspaper’s laziest staff reporter. For better, more engaging stories, we still need reporters to go out, observe, talk to people and report. Hour-long conferences require in-person or online attendance so that reporters can pick up on cues. An AI model can convert a simple press release into a publish-worthy news report, but the ‘tarka’ (the flavour) is in the hands of reporters.
Can the reporter then use the model to write the story? In Pakistan’s context, where English is not our first language, I do not see any harm in using AI tools to enhance your work (we also rely on our phone’s predictive text to write correct spellings, don’t we?). What matters is the idea (what the news is about) and the relevance of news reports. The how-it-is-produced part is secondary. Look at it this way: if you are reading this online, does it matter whether an actual person copied and pasted the text? Would you be okay with me using an automated workflow where time was the trigger and my system was instructed to upload the piece to a pre-decided page at a pre-decided time? This is the production aspect that AI makes easier – it cannot entirely replace humans/journalists in this field, and we should not view it as such.
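That time-triggered workflow fits in a few lines. Everything here is hypothetical – the `upload_to_page` function, the page path and the timings are stand-ins, since a real newsroom CMS would expose its own publishing API.

```python
# Hypothetical sketch of a time-triggered publishing workflow:
# wait until a pre-decided time, then push the piece to a pre-decided page.
import datetime

def seconds_until(publish_at, now):
    """Seconds to wait before the scheduled upload (0 if already due)."""
    return max(0.0, (publish_at - now).total_seconds())

def upload_to_page(path, page):
    # stand-in for whatever CMS or API call would actually publish
    print(f"uploaded {path} to {page}")

now = datetime.datetime(2024, 6, 1, 7, 58)       # illustrative clock time
publish_at = datetime.datetime(2024, 6, 1, 8, 0)  # the pre-decided slot
delay = seconds_until(publish_at, now)
print(delay)  # 120.0 seconds until the 8:00 slot
# In a live system one would then: time.sleep(delay); upload_to_page(...)
```

The point is how mechanical this step is: the trigger is a clock, not a judgment call, which is exactly why it is the part worth automating.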
Earlier this year, I wrote a piece on the dying art of writing, arguing that, while writing is an act of worship for most people, excessive gatekeeping will only kill good ideas. And it does not really matter how a great idea reaches us (whether through a ghostwriter or an AI chatbot). I have not yet changed my opinion on the how-to-write part. The what-to-write part is unique to every writer/reporter, and this is what sets them apart.
For now, as journalists, we can focus more on digging out stories instead of worrying about models that struggle to run after minor version updates. And, most importantly, instead of dismissing AI and treating it as some kind of foe, we can collaborate to see how such tools can be used effectively.
The writer heads the Business Desk at The News. She tweets/posts @manie_sid and can be reached at: aimen_erum@hotmail.com