- The OpenAI co-founder supported Anthropic’s position in principle while acknowledging the government’s concerns about a private company controlling significant national security matters.
- Altman also said that OpenAI is discussing the Department of War potentially using the AI startup’s models while maintaining the safety guardrails that resulted in a standoff between Anthropic and the Pentagon.
- He stated that he would seek a deal covering any use of OpenAI’s models except unlawful ones, such as domestic surveillance and autonomous offensive weapons.
- Altman also said he would like to try to de-escalate the tensions between Anthropic and the Pentagon to prevent the setting of dangerous precedents for the rest of the industry.
OpenAI co-founder and CEO Sam Altman reportedly expressed support for rival Anthropic’s position on military AI safeguards despite tensions with Anthropic co-founder Dario Amodei, while saying OpenAI is discussing a potential agreement with the Pentagon to deploy its models.
According to a report by The Wall Street Journal, Altman supported Anthropic’s position in principle while acknowledging the government’s concerns about a private company controlling significant national security matters.
“We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines,” he wrote in a memo to staff, according to the report.
This comes amid OpenAI’s latest funding round of $110 billion that values the company at $730 billion pre-money. The funding saw participation from Amazon.com Inc. (AMZN), Nvidia Corp. (NVDA), and SoftBank Group.
Retail sentiment on Stocktwits around OpenAI trended in the ‘neutral’ territory at the time of writing.
Altman Says OpenAI Discussing Pentagon’s Use Of Its Models
Altman also said that OpenAI is discussing the Department of War potentially using the AI startup’s models while maintaining the safety guardrails that resulted in a standoff between Anthropic and the Pentagon.
“We are going to see if there is a deal with the DoW that allows our models to be deployed in classified environments and that fits with our principles,” he said, according to the report.
Altman stated that he would seek a deal covering any use of OpenAI’s models except unlawful ones, such as domestic surveillance and autonomous offensive weapons.
He also said he would like to try to de-escalate the tensions between Anthropic and the Pentagon to prevent the setting of dangerous precedents for the rest of the industry.
This is despite an ongoing rift between Altman and Anthropic, with the OpenAI CEO slamming Anthropic’s Super Bowl ads earlier this month, calling them “dishonest” in a post on X.
“I guess it’s on brand for Anthropic doublespeak to use a deceptive ad to critique theoretical deceptive ads that aren’t real, but a Super Bowl ad is not where I would expect it,” he said.
The Anthropic-Pentagon Standoff
Anthropic announced on Thursday that it would not accede to the Pentagon’s demand that the company’s AI models be made available for all lawful uses.
“They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal,” the company said.
Anthropic stated that if the Pentagon chooses to offboard the company, it will work to enable a smooth transition to another provider.
Altman Calls For ‘Urgent’ Regulation Of AI
Altman called for “urgent” regulation of artificial intelligence technology at the India AI Impact Summit 2026 earlier this month, according to a Hindustan Times report.
“Democratisation of AI is the best way to ensure humanity flourishes,” he said, while adding that the centralization of the technology in one country or company could lead to ruin.
For updates and corrections, email newsroom[at]stocktwits[dot]com.