The silent witness

By Danish Ahmed Soomro | August 27, 2025

In 2023, engineers at Samsung inadvertently uploaded confidential source code to ChatGPT. The aim was simple: debugging assistance. The outcome was catastrophic. Fragments of sensitive corporate code later appeared in responses to unrelated user queries.

This was not a hack. It was a failure of foresight. That same year, US federal courts began requiring lawyers to disclose AI usage in legal filings following incidents where generative AI had fabricated legal citations. These episodes have become case studies in digital due diligence.

Artificial intelligence, particularly generative AI like ChatGPT, is entering legal and corporate sectors with unprecedented speed. What is missing, however, is the regulatory scaffolding to guard professional ethics and client trust. Across courtrooms, law firms, compliance units and advisory desks, AI is becoming the silent witness to confidential information, one that neither forgets nor asks for consent. This is already unfolding in Pakistan.

In 2025, UpGuard reported a serious leak involving Workcycle Technologies, which deployed a misconfigured AI model for TPL Insurance and the Federal Board of Revenue. The breach exposed the data of 97,000 insurance clients, including nearly 1,000 politically exposed persons. The leak was not criminal sabotage; it was the outcome of unsecured AI deployment without proper human oversight. In an environment where laws trail technology, client confidentiality is the first casualty.

Lawyer-client confidentiality is the bedrock of our legal system. Article 9 of the Qanun-e-Shahadat Order, 1984, prohibits disclosure of privileged communication. The Supreme Court reaffirmed in PLD 2024 SC 337 that any compromise of this privilege infringes on Article 10A of the constitution, which guarantees the right to a fair trial.

Yet generative AI upends this trust structure. Tools operating on third-party servers, such as OpenAI's ChatGPT or Google's Gemini, may retain prompts and user content for model training, system improvement or debugging, even when users believe their sessions are private. OpenAI, for instance, confirms that it may use conversation data to refine its models unless users opt out, and Google retains Gemini interactions for internal review, even when personal or legal content is shared. The US National Institute of Standards and Technology (NIST) warns that deleting user data from AI models is rarely absolute, because information persists in learned parameters and distributed memory.

In line with these concerns, the Supreme Court in CP1010-L/2022 reiterated that the right to a fair trial under Article 10A is not merely procedural but extends to ensuring all inputs to a judicial process are transparent, verifiable and reviewable. While the case did not address AI directly, its emphasis on evidentiary integrity and procedural justice provides a compelling lens through which to assess the use of opaque or unverifiable AI tools in legal decision-making. Any AI system that affects client rights or judicial determinations without clear auditability could fall afoul of this constitutional guarantee.

Is inputting confidential case data into a chatbot tantamount to disclosing it to a third party? If so, lawyers may be in breach of their ethical duties the moment they type into a free AI tool.

Globally, judicial systems are moving toward precaution. In State v Loomis (Wisconsin, 2016), an American court permitted the use of risk-assessment software in sentencing but warned against blind trust in algorithmic tools. In Canada, lawyer Samantha Ko faced disciplinary action for submitting AI-generated briefs filled with fictional citations. In Ayinde v Haringey LBC (UK, 2025), the King's Bench Division issued specific guidance for lawyers on AI usage in court submissions. Meanwhile, the American Bar Association's Formal Opinion 512 (2024) emphasises informed client consent and technological competence when using generative AI.

In contrast, Pakistan lacks institutional clarity. Neither the Pakistan Bar Council nor provincial bar associations have issued any ethical guidelines or position papers on AI. The long-stalled Personal Data Protection Bill, pending since 2023, remains in legislative limbo. Without statutory safeguards, lawyers face ambiguous boundaries between diligence and malpractice.

The Sindh High Court, however, has expressed judicial unease. In 2025 PTD 143 (Rakesh Keshwani v CIR), it flagged suspicions over identical rulings issued by two tax benches, hinting at possible automated drafting. The court made clear that even the appearance of AI-generated judgments could undermine trust in judicial integrity. The ruling echoed broader concerns that algorithmic convenience must never eclipse human reasoning, especially in matters of justice.

One notable exception in Pakistan is the Legal Aid Society (LAS), which in 2025 adopted a comprehensive internal AI governance policy, the first of its kind in the legal sector. The policy mandates human oversight of AI outputs, requires explicit client consent and enforces data minimisation. While not legally binding, the LAS framework sets a critical precedent for how legal institutions might adopt AI responsibly.

This stands in stark contrast to the regulatory silence elsewhere. The Securities and Exchange Commission of Pakistan (SECP), despite promoting digital transformation, has issued no AI-specific compliance mandates for legal or advisory firms.

Internationally, AI ethics and confidentiality are being codified. The European Union's General Data Protection Regulation (GDPR) treats client data as sacrosanct. Article 22 prohibits fully automated decisions without meaningful human review. The EU AI Act classifies AI tools used in legal and judicial functions as "high-risk", requiring impact assessments, transparency and auditability.

If Pakistan aspires to align with global norms, it must adopt similar safeguards, not just to protect citizens but to ensure our legal system retains public confidence in a digital age. Pakistan can no longer afford to treat AI in legal practice as a grey zone. The risks are too high, and the international consensus too clear.

Several reforms are now essential. First, the Pakistan Bar Council should issue mandatory AI ethics guidelines covering confidentiality, informed consent and human oversight. Second, parliament should pass the Personal Data Protection Bill with provisions addressing AI usage by legal professionals. Third, judicial rules of procedure should require disclosure of AI assistance in legal filings. Fourth, law firms should face compliance mandates enforcing data minimisation, on-premise deployments and audit trails. Fifth, continuing legal education should include modules on AI risks and responsible usage.

AI is already embedded in our legal workflows; it sits in our offices, our courtrooms and our inboxes. The challenge is not adoption but accountability. The question is not whether we use AI, but whether we regulate it in time to preserve trust in our legal system.

Just as Samsung's engineers never imagined their confidential code would resurface in a stranger's query, Pakistan's legal fraternity cannot afford similar complacency. Let us not allow a machine to become the custodian of our clients' secrets. Let us ensure that the silent witness remains silent.

The writer is an advocate of the high court.