
When code replaces conscience

By Danish Ahmed Soomro
July 16, 2025

A representational image of scales of justice and a gavel. — Unsplash/File

In the summer of 2013, Eric Loomis walked into a Wisconsin courtroom expecting to hear a judge’s reasoning, not the verdict of a secret algorithm. But that’s exactly what determined his fate.

The software, known as COMPAS, had labelled him a high risk to reoffend. Neither he nor his lawyer could access the data or logic behind the label. The judge relied on the software and sentenced Loomis to six years in prison.

Years later, Loomis’s case continues to echo across borders. From Pakistan’s cautious AI reforms to global ethical debates, his story has become a symbol – a reminder that algorithms can obscure justice rather than serve it. As global courts embrace artificial intelligence to speed up decisions, Loomis remains a cautionary tale of what’s lost when human judgment is outsourced to code.

Loomis became one of the earliest faces of a rising global dilemma: algorithmic injustice, where technology doesn’t just assist in justice, but distorts it.

Global risks, real people: The COMPAS scandal wasn’t a glitch. Investigative reporting later revealed that the tool wrongly flagged Black defendants who did not go on to reoffend as high-risk nearly twice as often as it did white defendants. Loomis’s inability to challenge the algorithm became a global metaphor for how hidden systems quietly reshape justice.

In Brazil, a judge from Sao Paulo faced public scrutiny in 2025 after admitting to using ChatGPT to help draft a court ruling that included fabricated legal citations. The National Council of Justice (CNJ) opened an investigation, focusing on the accuracy and reliability of AI-generated content in judicial decisions, and the episode triggered a broader debate over judicial reliance on such tools in Brazil.

These cases share a common thread: when courts rely on opaque AI systems without meaningful oversight, they risk replacing reasoned judgment with unaccountable automation.

Who is accountable? Legal systems were never meant to delegate judicial discretion to algorithms. The moment we allow code to determine outcomes that affect liberty, we undermine the foundation of justice: human reasoning, empathy, and accountability.

In 2025, Pakistan’s Supreme Court, in CPLA No 1010-L/2022, issued a landmark judgment affirming that AI can assist judicial functions but must not replace them. It cautioned against ‘automation bias’, the tendency to defer blindly to machine decisions, and called for national guidelines on the ethical use of AI in law.

That same year, in 2025 PTD 143, Justices Muhammad Junaid Ghaffar and Jawad Akbar Sarwana of the Sindh High Court exposed how multiple benches of the Appellate Tribunal Inland Revenue had issued copy-paste judgments with no factual differentiation. Though those rulings were not AI-generated, the judgment warned of an emerging culture of judicial repetition that risks ‘machine-like detachment’, a cautionary tale about losing judicial conscience to bureaucratic mimicry.

Globally, the EU’s Artificial Intelligence Act classifies judicial AI systems as ‘high-risk’, requiring transparency, human oversight and data governance protocols. Unesco and the OECD have echoed similar warnings, advocating for explainability and fairness as minimum ethical standards.

When AI gets it wrong: In Ontario, Canada, lawyer Jisuh Lee faced judicial scrutiny in 2025 after submitting a legal brief in the case of Ko v Li that contained fabricated case citations produced by ChatGPT. The inaccuracies were uncovered in open court when the presiding judge could not locate the referenced precedents.

Ms Lee later admitted that the factum had been drafted in part by her staff using generative AI, and that she had failed to verify the authorities before filing. Although contempt proceedings were ultimately withdrawn following her full admission, public apology and commitment to legal ethics training, the case underscored a growing concern: lawyers who rely on AI tools without verification risk misleading the courts and compromising justice. Ko v Li has since become a cautionary tale of professional responsibility in the age of machine-assisted advocacy.

Pakistan, too, has entered the arena of AI experimentation with caution and context. A landmark initiative by EnablifyAI and the Legal Aid Society – the country’s first Generative AI Masterclass for legal professionals – was conducted under the guidance of Barrister Haya Zahid, CEO of Legal Aid Society, and facilitated by Muhammad Shahzar Ilahi of EnablifyAI. This hands-on training empowered lawyers with practical tools in prompt engineering, contract redlining, brief writing and judgment summarising, using AI strictly as an assistant.

Among the tools introduced was the Danish Legal Assistant, a locally developed AI-powered drafting support system designed strictly for non-adjudicative functions. It was field-tested during a mock examination for aspiring judges hosted by the High Court Bar Association, Hyderabad. The system facilitated registration, attendance and result compilation, but every stage remained under human oversight.

What’s at stake: Algorithmic injustice is not just a technical flaw but a democratic one. When litigants cannot understand or challenge how a decision is made, their fundamental rights are at risk. Public trust in courts depends on transparency and accountability. Governments must require open disclosure when AI tools are used. Regular audits should test these systems for bias. Judicial officers need training not just in how to use AI, but when not to use it.

AI models developed within Western legal frameworks may misinterpret the procedural nuances of Global South jurisdictions. Locally adapted systems show promise but must be evaluated rigorously to ensure equity and reliability.

The verdict: From Wisconsin to Karachi, Ontario to Sao Paulo, the lesson is clear: if we do not confront the hidden biases in our black-box systems, the next generation may inherit courts where fairness is engineered but not guaranteed.

Algorithmic tools are not neutral. They are shaped by the data and biases we feed them. Left unchecked, they will reinforce inequality. Courts may become faster, but not fairer.

Can we trust justice in the hands of algorithms we don’t fully understand? Justice must remain human. The gavel belongs to the human conscience, not computational code.


The writer is an advocate of the high court.