New Delhi: The Pentagon’s dispute with artificial intelligence company Anthropic stems from a deeper disagreement over how AI should be used in future U.S. weapons systems, according to a senior defence official. The clash emerged during internal discussions about integrating AI into President Donald Trump’s proposed Golden Dome missile defence programme, a project that could include space-based weapons and advanced autonomous systems.
U.S. Defence Undersecretary and Pentagon Chief Technology Officer Emil Michael said Anthropic’s restrictions on the military use of its AI models created friction with the Defence Department’s push to expand autonomous capabilities. The U.S. military is increasingly exploring the use of AI in drone swarms, underwater vehicles and other automated defence technologies to keep pace with strategic competitors such as China.
AI restrictions spark Pentagon concerns
Michael said Anthropic’s policies limiting the use of its chatbot Claude in fully autonomous weapons systems made the company a difficult partner for the Pentagon. He argued that the military needs technology providers willing to support long-term AI autonomy in defence systems.
Speaking on a podcast aired Friday, Michael said the Pentagon requires partners that can reliably support the development of autonomous technologies. He noted that the military is already seeing early versions of these systems and expects them to become more important in future conflicts.
Anthropic labelled a supply chain risk
The disagreement escalated when the Pentagon officially designated the San Francisco-based AI company as a “supply chain risk”. The designation effectively blocks Anthropic from participating in defence projects and limits its ability to collaborate with other military contractors.
The move was made under a rule intended to protect national security systems from potential vulnerabilities that could be exploited by foreign adversaries.
Anthropic plans legal challenge
Anthropic has strongly opposed the decision and indicated it will challenge the U.S. Department of Defence in court over the designation. The company argues the action unfairly damages its relationships within the defence sector and restricts its operations.
The controversy has also extended to broader use of Anthropic's AI technology across government agencies.
Trump orders phase-out of Claude
President Donald Trump has ordered federal agencies to promptly stop using Claude, Anthropic's AI assistant. The Pentagon, however, has been given six months to phase out the system, as it is already embedded in some classified military systems.
Reports indicate that some of these systems were used in operations related to the war in Iran, underscoring how deeply AI tools have become embedded in modern military infrastructure.
The dispute between the Pentagon and Anthropic highlights an increasingly contentious question: how artificial intelligence should be used in warfare. As governments accelerate the development of autonomous defence technologies, clashes between ethical AI policies and military strategy are likely to grow.