
How to handle AI

Students, professionals, and content creators are increasingly leveraging AI to enhance productivity and generate content at unprecedented speed

By Abdul Rafay Siddiqui
July 17, 2025
This representational picture shows a metallic figure against a computer. — AFP/File

Generative artificial intelligence (AI) tools have profoundly transformed our daily lives. AI, which only a few years ago seemed confined to the realm of science fiction, is now deeply embedded in how we work, communicate and create.

Students, professionals, and content creators are increasingly leveraging AI to enhance productivity and generate content at unprecedented speeds. Today, short-form videos and reels on social media are routinely created with the help of AI tools. Some are so realistic that authentic and synthetic content are becoming nearly indistinguishable, and the line between the two grows thinner by the day.

This rapid proliferation has given rise to significant legal and ethical questions, particularly concerning intellectual property rights and the right to privacy. Two critical issues emerge. First, to what extent does existing intellectual property law in Pakistan regulate AI-generated content? Second, does such content infringe upon an individual’s right to privacy and dignity when their images or likenesses are used without consent?

In Pakistan, copyright protection is governed by the Copyright Ordinance, 1962. In this regard, Section 13 provides that the author of a work is the first owner of the copyright. The law defines an ‘author’ in Section 2(d). For instance, the author of a musical work is its composer, and the author of a photograph is the person taking the photograph. Accordingly, such a work can be registered with the Intellectual Property Organization of Pakistan as intellectual property. However, this framework was designed for human authorship and does not clearly address works generated autonomously by machines.

This raises an important question: can content created using AI tools be registered as the copyright of the user who merely prompts the AI? The issue is far from straightforward. A central difficulty is that AI systems are trained on enormous datasets comprising content sourced, often without explicit authorisation, from across the internet. There is a high likelihood that AI-generated outputs incorporate elements derived, directly or indirectly, from pre-existing copyrighted works.

Under prevailing copyright principles and internationally accepted norms, the use of a registered work by another person generally requires authorisation through a licence. A licence not only preserves the author’s intellectual property but also establishes clear terms and conditions, such as payment, duration of the licence, permitted uses and confidentiality obligations. When an AI platform generates new content without securing such permissions, it may effectively appropriate copyrights without compensating the original author. This practice can significantly undermine the rights of authors and jeopardise their long-term livelihoods.

A notable example of the controversy surrounding AI-generated works arose when ChatGPT and other image generators began producing content mimicking the distinctive artistic style of Studio Ghibli, the celebrated Japanese animation studio. Artists and legal experts criticised this practice, arguing that it unfairly appropriated the creative identity and goodwill associated with Studio Ghibli’s work, while the studio earned no revenue from the AI-generated content. This is just one of many examples showing that AI models do not create in a vacuum; their outputs are reflections and amalgamations of pre-existing human creativity expressed in existing works.

Privacy rights are also at risk. AI-generated content frequently incorporates the likenesses of public figures such as politicians and celebrities, and even of ordinary individuals whose images are publicly available online. Such content may infringe not only on their privacy but also on their dignity (protected under Article 14 of the Constitution of Pakistan), especially when the generated material depicts them in false, misleading or compromising situations.

Deepfake technology, which can create highly realistic synthetic videos, is a striking example of this risk. In many cases, tracing the source of AI-generated content is an almost impossible task for victims, who may need to involve law enforcement agencies (LEAs). Unfortunately, LEAs in Pakistan are not well equipped or well structured to deal with AI-related violations.

While there is no standalone comprehensive data protection law in force in Pakistan, the Prevention of Electronic Crimes Act, 2016, contains provisions that prohibit the unauthorised use of personal data and images, especially where such use is intended to harm, defame or intimidate. However, these provisions were not drafted with AI in mind and may prove inadequate to address the scale and sophistication of modern synthetic content.

In this regard, the establishment of the National Task Force on AI in April 2023 is a good step. The key objective of this Task Force is to develop a 10-year roadmap for the adoption of AI in the business, development, governance, education and healthcare sectors in Pakistan. Moreover, a draft National AI Policy has been prepared by the Ministry of Information Technology & Telecommunication. However, there is still much to be done regarding legislation and the monitoring of AI use in Pakistan.

AI is not simply a technological revolution; it is also a legal and ethical frontier that challenges conventional understandings of authorship, ownership, privacy, and dignity. In Pakistan, as elsewhere, lawmakers, courts and regulators are confronted with an urgent task: reconciling AI’s transformative potential with the fundamental rights of individuals and the legitimate interests of creators.

As AI tools become ever more capable, stakeholders ranging from policymakers to platform operators and users themselves must engage in thoughtful dialogue to ensure that innovation does not come at the expense of fairness, accountability and respect for human agency.

The existing legal frameworks in Pakistan were not designed for situations in which machines can autonomously create content, which is why lawmakers need to legislate on AI and define how it intersects with areas such as intellectual property and privacy.

The writer is a lawyer based in Islamabad.