The U.S. Department of Defense and a major artificial intelligence company have become embroiled in a new dispute.
The Pentagon is at odds with Anthropic, the company behind the Claude AI models, over safety concerns.
The conflict centers on how the military should be allowed to use Anthropic’s AI tools.
Reuters, citing a report by Axios, said the Pentagon wants AI companies, including Anthropic, OpenAI, Google, and xAI, to let the U.S. military use their AI systems for "all lawful purposes," such as weapons development, intelligence gathering, battlefield operations, and other sensitive applications.
Anthropic, however, has refused to remove all usage restrictions on its AI, arguing that some safeguards are necessary, particularly limits on fully autonomous weapons and on surveillance that could infringe on privacy or safety.
An Anthropic spokesperson said the company had not discussed the use of its AI model Claude for specific operations with the Pentagon.
The spokesperson said conversations with the U.S. government so far had focused on a specific set of usage policy questions, including hard limits around fully autonomous weapons and mass domestic surveillance, none of which related to current operations.
Anthropic's Claude was used in the U.S. military's operation to capture former Venezuelan President Nicolas Maduro, deployed via Anthropic's partnership with data firm Palantir, the Wall Street Journal reported on Friday.
Reuters reported on Wednesday that the Pentagon was pushing top AI companies, including OpenAI and Anthropic, to make their artificial intelligence tools available on classified networks without many of the standard restrictions the companies apply to users.
This disagreement comes amid broader discussions in Washington over AI safety, ethical boundaries, and how commercial AI should be integrated into government and military systems.