In a rare moment of national alignment around a long-term vision, Pakistan’s federal cabinet has approved its first National Artificial Intelligence Policy. This comprehensive and forward-looking blueprint aspires to position the country as a digitally sovereign state in the global AI ecosystem.
With artificial intelligence expected to contribute $15.7 trillion to global GDP by 2030 and reshape industries and societies alike, the timing couldn’t be more consequential. For Pakistan, a country with a youthful population, digital potential and development gaps, artificial intelligence isn’t just a technological imperative. It’s an economic, governance and societal necessity.
The policy’s ambition is impressive. It forecasts that AI adoption could increase Pakistan’s GDP by 7–12 per cent and create 3.5 million jobs by 2030. Its structured approach revolves around six policy pillars: innovation ecosystems, human capital readiness, AI security, sectoral transformation, infrastructure development and international collaboration. Each pillar is backed by dedicated funds, capacity-building initiatives, regulatory frameworks and outcome-based targets.
From the outset, the strategy is bold in vision and broad in scope. A National AI Fund (NAIF) will finance research and commercialisation, with 30 per cent of Ignite’s R&D funds allocated to AI development. Centres of Excellence in Artificial Intelligence (CoE-AI) will be set up across seven cities to drive localised innovation and industry-academia collaboration. A national skilling programme will train 200,000 individuals annually, provide 3,000 scholarships and ensure inclusivity for women and persons with disabilities. By 2027, the government intends to achieve end-to-end AI deployment across its own ministries and departments.
And yet, the boldness of the document is matched by the complexity of its execution, and therein lies the policy’s biggest challenge.
One of the most pressing concerns is the fragmented institutional ecosystem that typically hinders long-term national plans in Pakistan. The policy outlines a multi-stakeholder approach but lacks the legal and operational clarity to define how ministries, regulators, provinces, academia and the private sector will coordinate at scale. The creation of an independent AI Authority, backed by enabling legislation, could provide much-needed continuity, accountability and cross-sector alignment.
Similarly, while funding plans exist, the financing model is underdeveloped. Relying solely on Ignite’s allocations will be insufficient to build competitive AI capacity, particularly when global rivals are investing billions in compute infrastructure, talent and sovereign AI models. A blended financing strategy, combining public allocations, venture capital, diaspora-backed instruments and development finance, should be established immediately. Tax incentives for AI-focused startups and adoption credits for industries can catalyse broader private-sector participation.
Perhaps the most critical and underexplored dimension is data governance. Pakistan currently lacks a national data protection law, and its fragmented data architecture makes building large-scale AI models domestically both complex and risky. The policy gestures toward creating centralised repositories and localised large language models (LLMs), but without strict data localisation, privacy protections and clear data-sharing protocols, Pakistan will remain dependent on foreign platforms. A Pakistan Data Governance Act, aligned with global standards like the EU’s GDPR, is overdue and essential.
To its credit, the policy makes space for AI ethics and regulatory frameworks, especially concerning generative AI, cybersecurity and misinformation. It proposes sandboxes for regulatory testing, AI system transparency through public registries, and oversight for high-risk applications. But these frameworks are only as strong as their institutional backing. A Pakistan AI Ethics Council, independent of the state, should be formed to develop ethical standards, issue public audits and advise on human rights and civil liberties implications, particularly in surveillance, policing and facial recognition.
Critically, the policy must confront the growing threat to Pakistan’s creative economy and public discourse posed by unregulated generative AI. Intellectual property (IP) laws in Pakistan remain outdated and unprepared for the challenges brought by AI-generated content. Without urgent upgrades, creators risk exploitation and institutions lack the authority to enforce attribution or copyright.
The policy should compel IP bodies to recognise AI-generated works, define ownership structures, and explore generative watermarking, a crucial tool to verify authenticity and counter deepfake-driven misinformation. This becomes especially urgent in an election cycle, where manipulated audio-visual content can hijack narratives and destabilise democratic institutions. Pemra, the Election Commission of Pakistan and media watchdogs must be empowered to enforce watermarking standards, and a national AI content governance policy must be enacted to regulate synthetic media responsibly.
Equity and decentralisation are also critical concerns. The policy acknowledges the need for inclusivity, but historically, federal tech policies have favoured urban centres like Islamabad and Karachi. AI strategies must be provincialised, with each region developing its own roadmap under the broader national framework. Grassroots AI projects, such as computer vision for water management in Balochistan or NLP tools for preserving endangered languages in Gilgit-Baltistan, can bridge the digital divide and ground the AI revolution in local relevance.
Another blind spot is policy literacy. As AI transforms governance, health, education, and justice, the judiciary, legislators and regulators must be trained in its implications. The inclusion of AI Law and Ethics in legal education, judicial academies and bureaucratic training is not a luxury; it is a safeguard. Policymakers who regulate without understanding technology will be unable to anticipate its failures or realise its full potential.
Finally, monitoring and evaluation require much sharper tools. The policy outlines KPIs, including one million AI-skilled workers and 400 patented products by 2030, but past performance on tech implementation suggests that progress reports often disappear into bureaucratic ether. An AI Policy Monitoring Dashboard, open to the public, should track not just numbers but equity, gender, geography and public impact, making the system accountable to its citizens.
To be fair, this is a first-generation policy in an emerging digital state. It does not need to be perfect; it needs to be adaptive, enforceable, and nationally owned. The biggest mistake Pakistan could make is to treat this policy as a public relations document rather than as a living framework requiring constant iteration. Technology evolves fast. Policy must evolve faster.
Pakistan’s AI moment has arrived, not by accident, but by necessity. With the world becoming algorithmically governed, countries that fail to define their AI futures risk becoming digital colonies, importing models they don’t understand and data systems they can’t control. This policy, if shepherded with courage and competence, can allow Pakistan to build indigenous intelligence, not just artificial, but strategic, ethical and inclusive.
But time is short. The AI race rewards early movers, not late adopters. Pakistan must now prove that it can not only write a policy but also deliver it.
The writer is a public policy expert and leads the Country Partner Institute of the World Economic Forum in Pakistan. He tweets/posts @amirjahangir and can be reached at: aj@mishal.com.pk