OpenAI says its US defense deal is safer than Anthropic’s, but is it?

OpenAI has struck a deal to supply the US government with AI services, announcing it hours after US President Donald Trump's decision on Friday to ban its AI rival Anthropic from all US government contracts. "It was definitely rushed, and the optics don't look good," Sam Altman, CEO of OpenAI, said of the negotiation in a post on X on Sunday.

Anthropic was banned for its refusal to let its product be used for mass surveillance of US citizens or in fully autonomous weapons. OpenAI said its deal contained the same limitations, raising questions about how it had managed to secure such concessions so quickly, or whether it had secured them at all.

The US government wanted Anthropic to allow the use of its technology for all lawful purposes. Anthropic agreed, save for those two exceptions, and on Friday its intransigence got it banned.

On Saturday, OpenAI called for the government to offer deals to other AI companies on the same terms it had agreed to, saying in a blog post: "We think our agreement has more guardrails than any previous agreement for classified AI deployments, including Anthropic's."

Redlining

"We have three main red lines that guide our work with the DoW, which are generally shared by several other frontier labs," the company said, using the Department of Defense's secondary name, the Department of War. Those red lines, it said, are no use of its technology for mass domestic surveillance; no use of its technology to direct autonomous weapons systems; and no use of its technology for high-stakes automated decisions such as "social credit" systems.

So did OpenAI really succeed where Anthropic failed? And if so, how?

OpenAI protects its red lines through "a more expansive, multi-layered approach," it said in the Saturday blog post. "We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections. This is all in addition to the strong existing protections in U.S. law."

It cited the part of the contract offering those protections: "The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols. The AI System will not be used to independently direct autonomous weapons in any case where law, regulation, or Department policy requires human control, nor will it be used to assume other high-stakes decisions that require approval by a human decisionmaker under the same authorities."

That bars the use of OpenAI technology to direct autonomous weapons only where law, regulation, or Department policy already requires human control; where those authorities are silent, it offers no protection.

"I'm not surprised that OpenAI has given DoW what DoW wanted. I am a little surprised that so many observers appear to be under the impression that the contract snippet OpenAI has published guarantees something significantly different from 'all lawful use,'" wrote Charlie Bullock, a senior research fellow at the independent think tank Institute for Law and AI, in a post on X.

As for mass domestic surveillance, the contract terms cited by OpenAI said, "For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose.
The AI System shall not be used for unconstrained monitoring of U.S. persons' private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law."

OpenAI's guardrails may not be enough

However, lawyers are not convinced that the contractual language OpenAI quotes in the blog post is enough to stop the DoD from actually carrying out domestic mass surveillance in the US. The contractual language that OpenAI revealed allows agencies like the NSA, which is part of the DoD, to engage in bulk domestic metadata collection under existing legal powers, said Pranesh Prakash, principal consultant at law and policy advisory firm Anekaanta.

Bullock wrote that he was unaware of a precise legal definition of mass domestic surveillance, saying there are things that a sufficiently advanced AI system could do that would be legal but would also qualify as mass domestic surveillance under at least some reasonable definitions of that term. There are other ambiguities as well, Bullock noted, saying that the legal picture remains fuzzy, largely because OpenAI hasn't published the full contract.

If the contract expressly limits surveillance or weapons-related deployment, said Anandaday Misshra, founder of Amlegals, a law firm specializing in AI regulatory intelligence and data protection, "OpenAI can rely on standard contract law principles to enforce those limits, at least as between the parties."

However, said Misshra, once the US government is involved, particularly through an entity like the DoD, OpenAI's ability to prevent certain uses becomes more fragile: "National security carve-outs, classified programs, and sovereign immunity doctrines significantly weaken any attempt to challenge government use purely on ethical grounds. Courts have historically shown deference where national security is invoked, even when private contractors object," he said.

There have been "similar tensions" in past disputes involving telecom surveillance and defense contracting, where companies relied on contractual language and internal governance but "ultimately had limited leverage once federal authorities asserted statutory powers," he said.

Unfavorably for OpenAI, there is no clean precedent where a technology provider successfully blocked the federal government from using a tool for security or defense purposes once access was contractually granted, Misshra said, adding that national security exceptions would likely override softer commitments around ethical AI use in the case of a direct conflict.

The regulations around data protection don't favor OpenAI's position either, at least in their current form in the US, said Misshra. "Unlike data protection and AI regimes in Europe, the United States does not yet have a binding federal AI law that would clearly prohibit military or intelligence applications, which means OpenAI is operating more on self-imposed policy than statutory protection," Misshra said.

A silver lining in the cloud deployment

There are some technical safeguards, OpenAI said. "This is a cloud-only deployment, with a safety stack that we run that includes these principles and others. We are not providing the DoW with 'guardrails off' or non-safety trained models, nor are we deploying our models on edge devices," the company wrote in Saturday's blog post. "Our deployment architecture will enable us to independently verify that these red lines are not crossed, including running and updating classifiers."
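OpenAI has not published details of that safety stack, but the pattern it describes, classifiers screening traffic on provider-controlled servers with cleared personnel handling escalations, is a standard guardrail design for hosted AI. The Python sketch below is a hypothetical illustration of that control flow, not OpenAI's code; every name in it (RedLine, PolicyClassifier, review_queue) is invented for the example, and the keyword rules merely stand in for a trained safety classifier.

```python
# Hypothetical sketch of a server-side guardrail of the kind OpenAI
# describes: each request passes through a policy classifier before the
# model runs, and flagged traffic is held for a cleared human reviewer
# instead of being served. None of these names are OpenAI's.

from dataclasses import dataclass
from enum import Enum, auto
from queue import Queue


class RedLine(Enum):
    MASS_SURVEILLANCE = auto()
    AUTONOMOUS_WEAPONS = auto()
    HIGH_STAKES_AUTOMATION = auto()


@dataclass
class Verdict:
    allowed: bool
    violation: RedLine | None = None


class PolicyClassifier:
    """Stand-in for a trained safety classifier.

    A real deployment would run a model here; this sketch uses keyword
    matching purely so the control flow is runnable end to end.
    """

    RULES = {
        RedLine.MASS_SURVEILLANCE: ("bulk collection", "track all citizens"),
        RedLine.AUTONOMOUS_WEAPONS: ("fire without approval", "autonomous strike"),
        RedLine.HIGH_STAKES_AUTOMATION: ("social credit score",),
    }

    def classify(self, prompt: str) -> Verdict:
        text = prompt.lower()
        for red_line, phrases in self.RULES.items():
            if any(phrase in text for phrase in phrases):
                return Verdict(allowed=False, violation=red_line)
        return Verdict(allowed=True)


# Blocked requests land here for a cleared human reviewer, which is one
# way to keep "personnel in the loop" on the provider's side of the API.
review_queue: Queue[tuple[str, RedLine]] = Queue()


def handle_request(prompt: str, classifier: PolicyClassifier) -> str:
    verdict = classifier.classify(prompt)
    if not verdict.allowed:
        review_queue.put((prompt, verdict.violation))
        return "Request held for human review."
    return f"(model output for: {prompt!r})"  # placeholder for the model call


if __name__ == "__main__":
    clf = PolicyClassifier()
    print(handle_request("Summarize this logistics report", clf))
    print(handle_request("Plan bulk collection of citizens' metadata", clf))
```

The design point, and the reason cloud-only deployment matters in OpenAI's argument, is that this check runs on infrastructure the provider controls: because the model never leaves OpenAI's servers, the classifier cannot simply be stripped out by the customer, and the provider can update its rules without the customer's cooperation.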
Bullock wrote that, given OpenAI's emphasis on the technical and operational protections, "It makes sense to base your opinion of their decision on how much you trust the company, their technical safeguards, and their in-the-loop personnel rather than on what the two paragraphs of their contract that they chose to publish say."

Despite the promise of architectural control, Misshra said disagreements like this between AI firms and the US government may set a precedent not for resistance, but for how tech companies negotiate guardrails. "Expect future agreements to be more explicit on permitted use, audit rights, model versioning, and liability allocation. Ethical commitments will increasingly be embedded as contractual risk management tools rather than moral vetoes," Misshra said.

This article first appeared on CIO.