A US federal judge in San Francisco has temporarily blocked the Pentagon from taking action against Anthropic, giving the AI company short-term relief in its dispute with the Trump administration. The ruling bars enforcement of a stop-use order against Anthropic’s chatbot, Claude, while the case proceeds. Anthropic has challenged the government’s designation of the company as a supply chain risk, along with the accompanying restrictions on military and surveillance use. The judge criticized the government’s actions as arbitrary and an abuse of discretion.

The case follows a breakdown in contract talks between Anthropic and the Pentagon, after which the government labeled the company a national security supply chain risk. Anthropic argued that the designation was illegal First Amendment retaliation for criticizing the government’s position. The ruling allows Anthropic to continue defending itself in court and to maintain its strong footing in the enterprise AI market.