Journalists and newsrooms across the world, including those in Pakistan, are currently integrating AI into their work, whether by using it for research, preparing scripts and outlines, translating, fact-checking or copyediting. Concerns regarding AI’s efficacy and the risks it poses to freedom of expression notwithstanding, many newsrooms see AI as a valuable resource. While there are conversations to be had regarding its impact on media and the information ecosystem as a whole, AI’s gendered impact on media, particularly on women journalists, is often overlooked.
As newsrooms rush to adopt AI, virtually no attention is given to how AI perpetuates gender stereotypes and biases. Newsrooms using generative AI to produce images to accompany their stories or to supplement desk research run the risk of perpetuating pre-existing biases and exclusions within media when it comes to women and marginalised communities. Generative AI, for all its hype, is severely limited by the datasets it draws on. As women and gender minorities are already excluded from existing research and data, AI is bound to reproduce these exclusions in the content it produces. Newsrooms that adopt AI, especially those which pride themselves as AI-first newsrooms, are at risk of producing news stories that perpetuate pre-existing exclusions rather than overcoming them.
While much ink has been spilt about the existential threat generative AI poses to media, the particular and gendered impact on representations of women within media and women in journalism has not been given the same attention. Further, as more newsrooms adopt AI, it is unclear if women journalists, who are already a minority in the media, are being trained to adapt to these changes and whether existing staff are provided resources to understand gender biases within AI and how to tackle them. While AI integration into media might appear to impact everyone equally, any adoption that fails to understand the unique impact on women will have an adverse impact on the overall participation of women in media.
Further, as technology-facilitated gender-based violence (TFGBV) and its impact on women journalists are better understood, the role of generative AI in further targeting women journalists should give us pause. Activists and journalists have long warned about the misuse of AI to produce deepfakes and inauthentic content targeting women. Last year, women journalists in Pakistan were targeted with doctored images created through generative artificial intelligence, as part of a concerted campaign by supporters of a political party. The AI-generated images were used to create false narratives about those targeted and to perpetuate patriarchal narratives about women in the public eye.
TFGBV has long been used as a tool to police the speech of women journalists online, often employed to punish journalists who challenge power or dominant narratives by weaponising technology and gender. AI will only enhance these campaigns, making it easier for perpetrators to launch gendered disinformation campaigns en masse. Research has shown that such TFGBV and AI-fuelled gendered disinformation campaigns have the effect of pushing women away from digital public spaces and have led many to self-censor online.
Even when AI-produced content is not abusive, the low-quality AI content, pejoratively referred to as “AI slop”, that increasingly populates our timelines poses unique challenges to women journalists. Anyone who has spent time on platforms such as TikTok or Twitter (now known as X) will have come across lazily made AI videos, which more often than not present women as overly sexualised. This content not only reinforces gender stereotypes, but also has a trickle-down effect on public-facing women online, such as women journalists, who are often on the frontlines of resisting these stereotypes.
As the internet is increasingly flooded with AI slop that represents women in stereotypical ways, the work of activists trying to diversify women’s digital representation becomes even more difficult, and the burden grows on women journalists, who are the ones challenging these stereotypes. While the impact of this might not be as direct or overt as TFGBV, it contributes to the less visible forms of sexism that women face online.
Another under-appreciated aspect of TFGBV is the ability of AI-driven algorithms to perpetuate and amplify the harassment of women, particularly those in the public eye, including journalists. Content targeting women or spreading gendered disinformation can often gain traction online not simply because it organically appeals to a large audience, but because AI-powered social media feeds push whatever content is predicted to generate the most engagement. Unfortunately, it is common to see content targeting women, especially well-known women such as journalists, being promoted by the algorithm to retain eyeballs and generate engagement. Even in cases where TFGBV content is not itself AI-generated, AI acts as a megaphone for it.
Algorithms controlling social media newsfeeds, and thus attention spans, are furthering gender harms in other ways as well. Research has found that algorithms on social media platforms such as TikTok often prioritise and promote content containing women who conform to traditional gender stereotypes, for instance, rewarding those with lighter skin tones. As digital content becomes increasingly integral to news, women journalists will find themselves being discriminated against by algorithms and AI-powered recommendation systems that conform to patriarchal parameters. Earlier, sexism and unconscious bias were embedded within newsrooms through the people who occupied them; now, these are coded into the design of platforms where journalists are forced to operate.
So where do women journalists and media professionals go from here? It is tempting to lean towards a complete rejection of AI, and there are important arguments for restricting its use, particularly given that generative AI is built on the back of the extraction of creative labour, the exploitation of cheap labour in the Global South and a deleterious impact on the environment. Rather than disengaging, however, women in media must confront these challenges head-on. Understanding how AI works and how it is deployed, grounded in a critical approach to AI and its gendered impact, must be popularised among women journalists to equip those within newsrooms to push back. For instance, if editors want to use AI-generated images of women in stories or videos that perpetuate gender stereotypes, women journalists should be able to resist these editorial decisions. For this to happen, we will need more than just greater awareness; it will require women journalists to organise and unionise around issues such as these.
While it is a tall ask, especially as most newsrooms now see AI as inevitable, it is also an opportunity. Women journalists are uniquely placed to lead the resistance against the unethical adoption of AI within the media, particularly when it undermines creativity and investigative journalism. The hope is that a gendered approach to AI within the media, one that is sensitive to the disproportionate impact on women, will have a knock-on effect that addresses wider challenges that AI poses to the future of journalism and freedom of expression.
The writer is a researcher and campaigner on human and digital rights issues.