US Pentagon declares Anthropic a national security supply chain risk amid AI dispute

New Delhi: Anthropic has officially been designated a supply chain risk by the U.S. Department of Defence, a move that could bar the startup from doing business with the U.S. government. The action follows a mounting conflict between the Pentagon and the company over the use of its AI technology in military and national security missions, The New York Times reported.

Anthropic CEO Dario Amodei confirmed that the Pentagon formally notified the company on Thursday. The company said it intends to appeal the decision in court, arguing that the designation is not legally warranted.

Pentagon labels Anthropic a national security risk

The Pentagon’s designation means that contractors, suppliers, and partners of the US military may be barred from conducting commercial activity with Anthropic. Defence Secretary Pete Hegseth announced the decision after talks between the two parties collapsed ahead of a government deadline.

‘Supply chain risk’ is a label usually applied to companies suspected of ties to foreign adversaries, particularly China. Applying it to a US-based AI company is unusual and signals how seriously the Pentagon is taking the dispute.

Dispute over military use of AI technology

The conflict began over how Anthropic’s AI systems could be used on classified defence networks. The Defence Department wanted the freedom to use the technology for any lawful purpose related to national security operations.

Anthropic, however, sought restrictions to prevent its systems from being used for domestic surveillance of Americans or in autonomous lethal weapons. Pentagon officials reportedly rejected those limitations, arguing that private companies cannot dictate how military tools are used once deployed in national security environments.

AI already used in military operations

According to people familiar with the matter, Anthropic’s technology has been used by the U.S. military to analyse large amounts of data and imagery. The system helps military planners determine where to deploy forces or conduct strikes.

Despite the dispute, Anthropic said it would continue supporting the Pentagon’s transition away from its technology. The company stated it would provide its models and engineering assistance at a nominal cost while the Defence Department moves to alternative systems.

Other AI firms move in

Several AI companies are already positioning themselves to replace Anthropic in defence contracts. Both OpenAI and Elon Musk’s xAI have signed agreements with the Pentagon to provide AI systems for classified government use.

OpenAI recently reached a deal allowing the Pentagon to use its AI models for any lawful purpose. Following criticism from civil liberties advocates, the company later updated the agreement to include additional safeguards aimed at preventing mass surveillance of Americans.

Anthropic said it will continue discussions with the Defence Department but is prepared to take legal action over the supply chain risk designation. The case could become a major test of how AI companies and governments negotiate the rules around military use of advanced technology.