Department of Defense photo.

Is the Pentagon Overstepping with its Threats to Anthropic?


A battle over the ethical use of artificial intelligence in the military is underway between the U.S. Department of Defense and AI firm and contractor Anthropic. The Pentagon has threatened multiple coercive measures against the Claude provider, attempting to force it to drop its guardrails over the military’s use of its AI models. Such a strategy is unwise, eroding essential norms for the use of tools like the Defense Production Act and the supply chain risk designation while undermining trust between AI executives, talent, and the Department of Defense.

Anthropic has cast itself as an AI firm with ethics at the heart of its work. It was founded by former OpenAI researchers concerned that AI development was outpacing safety guardrails. At the same time, it is the only frontier AI firm with models deployed on the Pentagon’s classified networks. In a February 26 statement, Anthropic CEO Dario Amodei outlined his belief that AI should be used to defend national security and combat autocratic adversaries, but advocated restricting its use in missions that undermine democratic values, namely mass surveillance and autonomous weapons. Amodei stated that these use cases were not included in Anthropic’s contracts with the DoD.

The Department of Defense’s use of Anthropic products came under scrutiny after reports surfaced that Claude was used in its January operation to capture Nicolas Maduro. Soon after, Defense Secretary Pete Hegseth issued a memo instructing high-level procurement officials to incorporate an “all legal use” clause into current and future AI contracts. Anthropic allegedly responded by asking the DoD to impose limits on the use of its models. The Pentagon then gave Anthropic an ultimatum: drop its guardrails on product use or have its $200 million contract canceled.

Going further, the DoD threatened to invoke the Defense Production Act (DPA) to compel Anthropic to share its products with the Pentagon. The DoD also told Anthropic that it could be labeled a supply chain threat under 10 U.S.C. § 3252, a designation that would bar other defense contractors from doing business with the company. It has previously been applied only to firms based in foreign adversary countries, such as Russia’s Kaspersky Lab and China’s Huawei. On February 26, Amodei responded in a statement that Anthropic would not concede to the DoD’s demands.

The Pentagon is not using these policy tools for their intended purposes, but as coercive weapons to force Anthropic to change its behavior. The misuse of the DPA and the supply chain risk designation erodes norms for proper implementation of the Pentagon’s policy tools, risking excessive use of the DoD’s power over private industry. Such an approach contradicts the United States’ free market values.

The Pentagon’s threats also undermine broader trust between AI executives, AI talent, and the DoD. Amodei is not alone in worrying about how the Pentagon might use AI; Jeff Dean, lead of Google’s AI division, raised similar concerns this past week about AI’s use for mass surveillance and autonomous weapons. OpenAI CEO Sam Altman stated in a staff memo yesterday that OpenAI would share Anthropic’s red lines in contract negotiations with the DoD. The Department’s pressure may also alienate the talent generating leading AI products. Hundreds of Google and OpenAI staff members signed a public letter on February 26 expressing solidarity with Anthropic, and over 100 Google employees sent a similar letter to company executives requesting that Gemini be barred from use in the surveillance of U.S. citizens or in autonomous weapons.

If the Pentagon forces Anthropic to share its technology through the DPA or the supply chain risk designation, it may see short-term strategic success in military operations. One Pentagon official told Axios that “the only reason we’re still talking to [Anthropic] is we need them and we need them now. The problem for these guys is they are that good.” However, coercing Anthropic erodes important norms governing the DoD’s policy tools and undermines trust between the DoD and Silicon Valley. The Pentagon should instead refrain from weaponizing its policy levers against private industry partners and return to constructive dialogue with its contractors.