WASHINGTON (AP) — A federal appeals court on Wednesday refused to block the Pentagon from blacklisting artificial intelligence laboratory Anthropic in a decision that differed from the conclusions reached in another judge’s ruling on the same issues.
The U.S. Court of Appeals in Washington, D.C., rejected Anthropic’s request for an order that would shield the San Francisco company from the fallout of a dispute over whether the Pentagon could deploy its Claude chatbot in fully autonomous weapons and in potential surveillance of Americans, while the panel continues to collect evidence in the case.
But the setback in Washington came after Anthropic already had prevailed in a separate case focused on the same issues in San Francisco federal court. In that case, a judge forced President Donald Trump’s administration to remove a label tainting the company as a national security risk.
Anthropic filed the two separate lawsuits in San Francisco and the Washington appeals court last month, asserting the Trump administration was engaging in an “unlawful campaign of retaliation” over the company’s attempt to impose limits on how its AI technology can be deployed. The Trump administration blasted Anthropic as a liberal-leaning company trying to dictate U.S. military policy.
In the San Francisco case, U.S. District Judge Rita Lin ruled that the Trump administration had overstepped its bounds by labeling Anthropic a supply chain risk unqualified to work with military contractors and issuing other directives that could cripple a company locked in a race for AI supremacy against rivals such as ChatGPT maker OpenAI and Google.
That decision prompted the Trump administration to remove the stigmatizing labels from Anthropic and take other steps clearing the way for government employees and contractors to continue using Claude and other chatbots, according to a court filing made in San Francisco earlier this week.
