In a major ruling on April 11, the Supreme Court called for comprehensive guidelines to govern the use of artificial intelligence (AI) within the country’s judicial framework. This could prove a major step forward in Pakistan’s legal development, paving the way for more timely and efficient proceedings by enhancing judicial capacity. Already, the Federal Judicial Academy has introduced ‘Judge-GPT’ to assist around 1,500 district judges with case research and drafting under a regulated framework, and judges globally have begun using AI in legal research and drafting. Pakistani judges have also experimented with AI tools like ChatGPT for drafting decisions, though under strict human oversight. Delays and backlogs are among the most serious problems hindering the effective functioning of Pakistan’s legal system. According to some reports, the lower and superior judiciaries face a mammoth backlog of over 2 million cases. This not only wastes litigants’ time but leaves many without any justice at all, with cases remaining unresolved long after those involved have given up hope. In some instances, litigants pass away with their cases still pending. The case that prompted this ruling was reportedly a rent dispute that had lingered for seven years.
This is no way for a modern judiciary to function, and it is hoped that embracing AI tools in administrative functions like case allocation, research and drafting can speed up and streamline the legal process. However, despite its potential to facilitate the judiciary’s work, the apex court affirmed that AI should not become a substitute for human judges or be used in a manner that compromises constitutional fidelity and public trust in the legal system. In short, people should not show up to court to be judged by an algorithm, not least because AI algorithms, regardless of the depth and breadth of the data powering them, are still probabilistic tools prone to serious errors. Hence, it is reassuring that the SC judgment warns against ‘automation bias’ and the potential for AI to generate fabricated or incorrect information, and stresses that such tools must never be viewed as conclusive or infallible. According to the UN University, AI models are trained on historical data and predict future outcomes based on past patterns. Predictions, no matter how extensive the data they are based on, are qualitatively different from sound legal judgement and are no basis for determining right and wrong.
That being said, AI tools can still serve as useful fact-finders that help expand legal knowledge. But whatever output they generate has to be verified. Overreliance on past patterns also carries the risk of producing biased or discriminatory outputs, reinforcing existing inequalities. According to some experts, AI can also reflect the assumptions or preferences of the developers who code it. We will need to not only ensure that AI does not become a substitute for human judges and core legal principles but also develop better, less biased AI tools and platforms suited to a judicial environment. For now, one can hope for a legal system that allows judges to get their work done faster.