Anthropic has filed two federal lawsuits against the Trump administration, challenging the Pentagon’s decision to designate the San Francisco-based artificial intelligence company a “supply-chain risk,” a move the company calls an “unlawful campaign of retaliation.”
Anthropic filed the lawsuits on Monday (March 9), one in a California federal court and another in the federal appeals court in Washington, D.C. Each suit challenges a different aspect of the Pentagon’s actions against the company.
The dispute centers on Anthropic’s refusal to allow unrestricted military use of its AI chatbot, Claude. The company says it has always prohibited Claude from being used for two things: mass surveillance of Americans and fully autonomous weapons — that is, weapons that can select and attack targets without a human making the call. Secretary of War Pete Hegseth and other administration officials publicly demanded that Anthropic accept “all lawful uses” of Claude, and threatened consequences if the company refused.
President Donald Trump also directed federal agencies to stop using Claude, though he gave the Pentagon six months to phase out the product. That timeline reflects how deeply embedded Claude is in classified military systems, including those used in the Iran war.
“These actions are unprecedented and unlawful,” Anthropic’s lawsuit states. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”
Anthropic is seeking a temporary restraining order to continue its government sales while the lawsuit plays out. The company proposed that the government respond to that request by 9 p.m. PT on Wednesday, with a hearing before a judge on Friday.
The supply-chain-risk designation is a serious blow to Anthropic’s business. Over 500 customers pay Anthropic at least $1 million annually for Claude, and the company projects $14 billion in revenue this year — most of it from businesses and government agencies using Claude for tasks like computer coding. The company, which is privately held, was recently valued at $380 billion.
Historically, supply-chain-risk designations have been used to keep foreign — especially Chinese — technology out of U.S. military systems. This marks the first time the federal government is known to have applied the label to an American company. A coalition of major tech industry groups, including those representing Apple, Google, Nvidia, and Microsoft, urged the Trump administration to reconsider, warning the move would chill American innovation.
Anthropic said in a statement that “seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”