As AI continues to advance and become more prevalent in the inner workings of government, so do ethical concerns about how, and for what ends, it is used. Last year, Anthropic—an AI company focused on building reliable, interpretable, and steerable AI systems—signed a $200 million contract with the Department of “War” (formerly the Department of Defense) to integrate its models into mission workflows on classified networks. As negotiations dragged on later in the year and into 2026, however, the two sides disagreed over how Anthropic’s AI model, Claude, could be used in defense and surveillance operations. After months of continued negotiations, Secretary of War Pete Hegseth declared Anthropic a supply chain risk, asserting that the DoW must have “full, unrestricted access” to Anthropic’s models for “every lawful purpose in defense of the Republic.”
The move comes after Anthropic refused to waver on the two usage restrictions it required for Claude: no mass domestic surveillance and no fully autonomous weapons. Anthropic argues that powerful AI such as Claude makes it possible to automatically assemble widely dispersed and scattered data into a comprehensive profile of any individual’s life, at massive scale. Such surveillance, they rightfully assert, is a massive violation of fundamental rights—and one considered “lawful” only because the law has not caught up with the ever-expanding capabilities of AI. Anthropic further argues that current AI systems are not reliable enough to power fully autonomous weapons, and that deploying such technology, especially without proper guardrails in place, would put both civilians and U.S. soldiers at risk. Notably, Anthropic says it has offered to work with the DoW to improve the reliability of such systems, an offer that has gone unaccepted.
Hegseth claimed that Anthropic’s true objective was to seize veto power over the military’s operational decisions, and that Anthropic’s stance is “fundamentally incompatible” with American principles. At the end of the post, he formally directed that no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic. In an interview with POLITICO, Trump said that “Anthropic is in trouble because I fired [them] like dogs, because they shouldn’t have done that.” Framed this way, the supply chain risk designation appears blatantly retaliatory rather than grounded in any legitimate safety concern.
Anthropic contends that such a blanket ban exceeds Hegseth’s legal authority. A supply chain risk designation under 10 USC 3252, the company argues, can only prohibit the use of Claude within DoW contracts; it cannot dictate how contractors use Claude with other customers. Hegseth is also required to use the least restrictive means necessary, which limits how broadly the designation can be applied even if it is found lawful. Anthropic CEO Dario Amodei has confirmed that the company has no choice but to challenge the designation in court.
“As we stated last Friday, we do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision-making—that is the role of the military,” Amodei wrote. “Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas, and not operational decision-making.”
Anthropic is now the only American company ever to be named a supply chain risk; the clause was originally intended for companies and organizations with ties to foreign adversaries, such as the Chinese tech company Huawei. The declaration has sent shockwaves through the tech industry, and on March 4th, the Computer & Communications Industry Association issued a formal letter to Trump, titled “Advancing American AI Innovation.” The organization, backed by the four top tech lobbying groups, urges Trump to reconsider the designation.
The letter opens by acknowledging the DoW’s “unique requirements” for military integration and applauding the administration’s decisions to advance AI innovation through data center development and federal procurement guidance. It then warns Trump of the unintended consequences the designation may bring. By treating an American tech company as a foreign adversary rather than an asset, CCIA states, the action will chill U.S. innovation, undermine the administration’s efforts to promote American AI abroad, and embolden China’s efforts to export its own AI to fuel its military and economic ambitions. The group further expresses concern over the precedent this sets for the technology sector as a whole, and over how it will affect the U.S. position as a global leader in AI.
Following the blacklisting, major government contractors, including Lockheed Martin, have already begun removing Claude from their networks, and many defense tech companies have told their employees to switch to other AI models and assistants. The move is almost certain to hurt Anthropic significantly, as the company derives about 80% of its revenue from such large-scale enterprise customers.
Within hours of Hegseth designating Anthropic a supply chain risk, OpenAI CEO Sam Altman announced a deal with the DoW. In an announcement updated March 2nd, OpenAI asserts that the agreement has “more guardrails than any previous agreement … including Anthropic’s,” and will include:
- No use of OpenAI technology for mass domestic surveillance
- No use of OpenAI technology to direct autonomous weapons systems
- No use of OpenAI technology for high-stakes automated decisions
The first two clauses may seem familiar: they are the exact two clauses the DoW rejected in Anthropic’s case. This makes clear that one of two things must be true: either the government had some unstated issue with Anthropic and used those two clauses as cover for refusing the deal, or the OpenAI contract contains loopholes that would still allow the government to pursue mass surveillance or autonomous weapons systems.
Taking a closer look at the available language, multiple critics have already argued that the contract does indeed contain significant loopholes. The autonomous weapons restriction, for instance, states that the DoW may use the AI system for all lawful purposes, and that it will not be used to independently direct autonomous weapons in any case where “law, regulation, or Department policy requires human control.” The relevant policy here is Department of Defense Directive (DODD) 3000.09, initially issued in 2012 and updated in 2023, which requires that autonomous weapon systems allow commanders and operators to exercise appropriate levels of human judgment over the use of force. But DODDs establish policy and delegate authority with the approval of the Secretary or Deputy Secretary of Defense. At any point in the near or far future, then, it would be easy for Hegseth to change which military actions require human oversight, immediately legalizing autonomous kills under the OpenAI agreement.
This, though, concerns only the prospective future of the new OpenAI deal. Claude has not yet been phased out of U.S. military operations, and will not be until a replacement is found for its specific roles. Even now, the U.S. military is leveraging its AI targeting tools, including Claude, in the conflict with Iran; in the first 24 hours, those tools struck over 1,000 targets. Anthropic’s partnership with Palantir, a software company heavily utilized by the government for defense and military operations, provides real-time targeting for missions in Iran, and Claude has been used to propose hundreds of strike targets, prioritize them by importance, and provide their location coordinates.
Such use of the technology raises serious concerns about autonomous killing and its ethics. It is clear that, ethical questions aside, the technology is not yet logistically ready to operate fully autonomously. Moreover, the designation of Anthropic as a supply chain risk appears clearly motivated by political retaliation rather than genuine concern, and classifying American companies this way sets a dangerous precedent, sending a clear message to any other company that does not completely succumb to the demands of the Trump administration.