Donald Trump said in a post on Truth Social that government agencies, including the Pentagon, have six months to stop using Anthropic's products. Defense Secretary Pete Hegseth later clarified on X that Anthropic will be designated a "supply chain risk," a label typically reserved for companies viewed as extensions of foreign adversaries.
The move caps a week of negotiations between the government and Anthropic, a standoff that could shape how artificial intelligence technologies are deployed in government. The dispute between the company and the Pentagon arose over restrictions Anthropic places on the use of its popular AI model.
On Friday, Anthropic released a statement calling Trump's decision legally unfounded and a dangerous precedent for American companies that work with the government. The company stressed that neither threats nor sanctions from the Department of Defense would change its position on the use of its technology for mass surveillance or fully autonomous weapons, and that it is prepared to challenge any supply chain risk designation in court.
The Pentagon, which runs Anthropic's AI system Claude on its classified networks, wants to be able to use it for all lawful purposes. Anthropic, however, has drawn two "red lines": Claude must not be used in autonomous weapons or for mass surveillance of U.S. citizens.
The Pentagon argues that it needs the freedom to use the technology it licenses and will not accept company-imposed restrictions. The conflict came to a head on Tuesday at a high-level meeting at the Pentagon between Hegseth and Anthropic CEO Dario Amodei. Although the meeting was reportedly cordial, Trump's comments on Friday signal that the situation has shifted.
On Thursday, Anthropic confirmed that it would not concede to the Pentagon's demands, saying that "threats do not change our position: we cannot in good conscience comply with their request."
Emil Michael, the Under Secretary of Defense for Research and Engineering, told Bloomberg in an interview that the two sides were "in the final stages" of a deal that met the Pentagon's requirements when the company made its statement on Thursday.
Pentagon spokesperson Sean Parnell said the department's request is a simple, sensible measure that will prevent threats to critical military operations and risks to service members. "We will not allow any company to dictate the terms of operational decisions," he added.
On Friday, Trump accused Anthropic of making a "catastrophic mistake" and of trying to dictate how the military should operate. Shortly afterward, the General Services Administration announced that it would remove Anthropic from USAi.gov, the government's platform for testing artificial intelligence tools.
Hegseth confirmed that "no contractor, supplier, or partner working with the U.S. armed forces" will be allowed to interact with Anthropic.
Throughout the week, the artificial intelligence industry largely backed Anthropic; OpenAI CEO Sam Altman said he shares the company's concerns about cooperation with the Pentagon.
Anthropic and OpenAI did not respond to CNN's requests for comment.
What work was Anthropic doing with the Pentagon?
Anthropic's AI model Claude was the first to be deployed on classified military networks. Last summer the company signed a Pentagon contract worth up to $200 million, at a time when other major AI developers, such as OpenAI, were working only with unclassified networks.
The contract with Anthropic included an acceptable use policy prohibiting the use of Claude for mass surveillance and autonomous weapons.
"This conflict has arisen at an inopportune time, as, to my knowledge, users in the Department of Defense are satisfied with Anthropic and its product Claude, and the usage restrictions have never been applied," said Gregory Allen, a senior advisor at the Center for Strategic and International Studies, on Bloomberg Radio.
However, the Pentagon does not want to be constrained by the terms of any company. A department representative emphasized that "tactical operations cannot be managed based on exceptions," and "the responsibility for compliance with the law rests with the Pentagon as the end user."
The Pentagon wants to avoid a situation in which, during a national security crisis, it must either seek a company's permission or violate the terms it agreed to.
Severing ties with Anthropic could pose a problem for the Pentagon, which would need to replace every internal system that uses Claude. A Pentagon representative said that Elon Musk's AI system Grok "could be used in classified settings," but it is not considered as capable as Claude.
How will this affect Anthropic's business?
Losing the $200 million contract is not an existential threat to Anthropic, which is valued at around $380 billion. The greater risk is that any company working with the U.S. military may have to prove it has no ties to Anthropic.
Anthropic's success largely depends on contracts with large corporations, many of which may have ties to the Pentagon.
"This means that a significant portion of Anthropic's customer base could disappear if they have government contracts or wish to enter into such contracts in the future," explained Adam Conner, vice president of technology policy at the Center for American Progress, a think tank in Washington.
Jensen Huang, CEO of Nvidia, the largest maker of AI chips, expressed hope that the Pentagon and Anthropic could still reach an agreement, but added that "this is not the end of the world," since there are other AI companies the Pentagon can work with.
The Pentagon is also weighing whether to compel Anthropic's cooperation under the Defense Production Act of 1950, which grants the president broad powers over industry. It is unclear, however, how the Pentagon could invoke that law while simultaneously designating Anthropic a supply chain threat.
Trump did not specify whether that authority would be invoked.
"Anthropic is not the only company under threat," noted Conner. "The Pentagon's actions send a signal to other companies that want to profit by selling their services to the government."
"This also implies that other AI companies in negotiations should be cautious about restricting the use of their technologies," he added.
If the Pentagon were simply dissatisfied with Anthropic's terms, it could terminate the contract and sign with another company, said Alan Rozenshtein, a law professor at the University of Minnesota.
"The government really wants to keep using Anthropic's technology, so it is pulling every lever it has," he added. "And this is a very powerful lever."
It is unclear how the military will replace systems developed by Anthropic and whether the administration plans to take further steps in this direction.
"Weigh a move against a domestic AI leader against the White House's own talk of an AI race with China, which it compares to the Cold War space race with the Soviet Union. You wouldn't want to lose one of the jewels of your industry over disagreements like these," Allen concluded on Bloomberg.
"There is a more sensible way to resolve this conflict than the hardline position the administration has taken."