Over the past decade, a remarkable shift has occurred in the world of technology. The biggest names in artificial intelligence have quietly crossed from Silicon Valley boardrooms into military command centres.
What began as hesitation, even protest, has transformed into eager collaboration. The world is now witnessing the emergence of a new ‘military-industrial AI complex’, one that is reshaping both the battlefield and the balance of power between nations.
Not long ago, major tech firms resisted the idea of supplying AI to militaries. In 2018, for instance, Google employees staged an unprecedented protest against the company’s involvement in Project Maven, an initiative using AI to analyse drone footage. The backlash forced Google to withdraw from certain contracts and issue AI principles that discouraged direct weapon applications. That moment seemed to signal a red line: Silicon Valley would innovate for consumers, but not for combat.
That red line has since blurred, and in places faded altogether. Today, Microsoft, Amazon, Meta and even OpenAI are all pitching AI solutions to militaries. From logistics and intelligence analysis to surveillance and autonomous systems, the tech world is increasingly treating war as just another market.
Governments, for their part, are eager customers. Billions are being poured into AI-driven defence projects such as swarms of autonomous drones, battlefield decision-support tools and even electronic warfare powered by machine learning. The line between civilian innovation and military research is blurring, giving way to a powerful fusion of corporate ambition and state power.
The problem? Our laws are not keeping up. The Geneva Conventions were written for human decision makers, not algorithms. There are no binding global rules on AI in warfare, only vague expectations that countries will “ensure compliance” with humanitarian law. But when AI makes a deadly mistake, who is to blame? The programmer, the military commander or the machine itself? This murky accountability serves both governments and corporations well, shielding them from responsibility.
For companies, the loophole is even bigger. International law binds states, not corporations. At best, tech giants have voluntary ‘AI principles’ and ethics charters. Nice words, but with no teeth. And in a world where billion-dollar defence contracts are on the table, profit often speaks louder than principles.
Some solutions are on the table. Unesco’s 2021 Recommendation on the Ethics of Artificial Intelligence calls for transparency and fairness, and experts are pushing for stricter rules of engagement that guarantee human oversight. But none of these are binding. It’s left to states and companies to ‘do the right thing’.
This is why voices from the Global South matter. In April 2025, Pakistan told the UN that unregulated autonomous weapons could destabilise the world. It urged a collective response, warning that countries without their own AI industries risk becoming both customers and victims of foreign-built military systems. It’s a stark reminder that AI-driven warfare won’t just deepen inequalities; it could spark new arms races in the regions least able to bear them.
We’ve been here before. The nuclear arms race forced the world to create the Nuclear Non-Proliferation Treaty, setting rules for one of the most destructive technologies ever built. But with AI, no such treaty exists. At best, the UN is still debating phrases like “meaningful human control” while research and deployment race far ahead.
So what now? If the ‘military-industrial AI complex’ is here to stay, two things are urgent. First, companies must face binding obligations: mandatory human rights due diligence, public risk assessments and penalties for ignoring them. Second, international law must evolve so that states regulate not just their militaries but also the tech corporations under their watch.
Otherwise, tomorrow’s wars will be fought with weapons designed in corporate labs, optimised for efficiency and profit but detached from human accountability. The stakes are too high to leave this future unregulated.
The pressing question is whether we can establish boundaries in time, or whether technology and the companies driving it will define them for us.
The writer is an international law expert. She can be reached at: iqrabanosohail@gmail.com