Anthropic wins reprieve against US DoD ban, buying time for contractors to assess AI supply chains
Computerworld NZ

The Pentagon’s attempt to brand Anthropic a supply chain risk was “likely both contrary to law and arbitrary and capricious,” a US federal judge wrote in a ruling halting a ban on the use of Anthropic’s products in defense contracts. In granting Anthropic a preliminary injunction against the ban, US District Judge Rita Lin of the US District Court for the Northern District of California delivered a legal setback for the Department of Defense and complicated plans by other agencies to remove Anthropic from federal systems.

She also took aim at the scope of the Pentagon’s directive, which effectively sought to extend beyond internal procurement decisions and into the wider private-sector contractor ecosystem. At issue was a sweeping restriction under which “no contractor… may conduct any commercial activity with Anthropic,” a move that would have forced companies doing business with the DoD — and with other government agencies that have announced their own Anthropic bans — to unwind relationships or risk federal revenue. Judge Lin made clear that this kind of ecosystem-wide restriction raised serious legal concerns, particularly where it would force contractors to sever commercial ties beyond the scope of federal work.

Following the ruling, Anthropic issued a statement saying, “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”

Court questions rationale behind ‘supply chain risk’ label

The Pentagon banned Anthropic in late February, after the company resisted allowing its Claude AI model to be used for certain military applications, including autonomous weapons and domestic surveillance.
The Department of Defense took the unprecedented step of formally designating the company a “supply chain risk,” a categorization that had previously been applied only to foreign companies. The designation triggered a cascade of consequences for Anthropic: federal agencies halted usage, contractors distanced themselves, and enterprise customers began reassessing their exposure.

Judge Lin was openly skeptical of the Pentagon’s approach, suggesting it looked less like a narrowly tailored national security measure and more like an “attempted corporate murder.”

“It is the Department of War’s prerogative to decide what AI product it uses,” Lin wrote, using the Department of Defense’s secondary name. “Everyone, including Anthropic, agrees that the Department of War may permissibly stop using Claude and look for a new AI vendor who will allow ‘all lawful uses’ of its technology.”

But the court took issue with the government’s reasoning, rejecting the idea that Anthropic’s restrictions on uses such as mass surveillance or autonomous weapons justified branding it a supply chain threat.

“Defendants’ designation of Anthropic as a ‘supply chain risk’ is likely both contrary to law and arbitrary and capricious,” Lin wrote. “The Department of War provides no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur.”

She added, “At oral argument, government counsel suggested that Anthropic showed its subversive tendencies by ‘questioning’ the use of its technology, ‘raising concerns’ about it, and criticizing the government’s position in the press. Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government.”

What the ruling means for the private sector

The ruling interrupts an active compliance timeline already imposed on both federal agencies and their contractors.
A March Pentagon memo had already directed companies doing business with the DoD to begin removing Anthropic technology, with a full phase-out expected within months. The injunction halts that process — but only temporarily.

For federal contractors, the pause is operationally significant, but it functions more as a buffer than as relief. Many had already begun auditing systems, mapping dependencies, and preparing attestations tied to contract obligations. That work should continue. If a higher court reverses the injunction, or if the Pentagon revises its designation to address the court’s concerns, the same obligations could return quickly — potentially with clearer legal footing and less room to challenge them.

More broadly, the case signals that supply chain risk in the AI era is no longer limited to compromised code or foreign ownership. It now extends to how a vendor governs the use of its own technology — and whether that governance aligns with federal priorities.

The next move, whether by an appellate court or a revised Pentagon directive, will determine how far federal authority over AI contracts ultimately reaches. In the meantime, contractors are left operating in a narrow window of uncertainty, where the safest course is to assume the pressure will return and prepare accordingly.

This article first appeared on CIO.com.