In recent weeks, AI giant Anthropic has been locked in a high-stakes confrontation with the Trump administration's Department of Defense (DoD) over new standard terms the Pentagon wants to impose on AI vendors. Defense Secretary Pete Hegseth demanded contract language that would give the military "any lawful use" of Anthropic's models, effectively stripping out the company's long-standing limits on certain battlefield and domestic applications. Lawful, in Hegseth's mind, means the DoD could do practically whatever it wanted, up to and including domestic mass surveillance and AI-controlled weapons.

If that sounds like the premise for how a war between Terminators and humans might begin, you're not the only one to think so. Caution, however, is not a word Hegseth seems to know. Anthropic CEO Dario Amodei, by contrast, is well aware of the real-world risks of AI, and not just the ones torn from science-fiction horror movies.

Be that as it may, Hegseth summoned Amodei and demanded that Anthropic's AI be usable any way he wants; otherwise, he said, he'd cancel the company's existing $200 million contract and blacklist it from any further AI pacts. Hegseth gave Anthropic until 5 p.m. yesterday to bend the knee.

Amodei didn't bend. He publicly stated the company would rather walk away from work with the DoD than drop contractual safeguards meant to keep its AI from being used for mass surveillance of Americans or for fully autonomous weapons. It's not that he objects to using AI to defend the US; Amodei favors that. But "using these systems for mass domestic surveillance is incompatible with democratic values," he said. "AI-driven mass surveillance presents serious, novel risks to our fundamental liberties." In addition, "frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America's warfighters and civilians at risk.
We have offered to work directly with the Department of [Defense] on R&D to improve the reliability of these systems, but they have not accepted this offer." Oh, and by the way, Amodei said those use cases "have never been included in our contracts with the Department of [Defense], and we believe they should not be included now."

The Pentagon kept the pressure on, describing its strong-arm tactics as "my way or the highway" and telling Anthropic to pitch its "final offer" yesterday. Still, Anthropic rejected the DoD proposal, saying it "cannot, in good conscience," agree to these overbroad terms.

It's not, by the way, that Anthropic is some woke, liberal company, as it's now being painted in some pro-Trump circles. Far from it! As the National Review pointed out, "Amodei is just about the opposite of a dove when it comes to military applications of AI." For example, Anthropic's Claude was used by the Trump administration to capture former Venezuelan President Nicolás Maduro in January. Anthropic's stance against using AI for domestic surveillance and self-guided weapon systems is less about political ideology and more about a rational recognition of the dangers of trusting early-stage, unfettered AI.

Civil liberties groups, including the Electronic Frontier Foundation (EFF), have urged Anthropic to hold the line, casting the Pentagon's push as an attempt to bully tech firms into building tools for bulk spying and automated warfare. Within Anthropic, employees have posted public messages backing leadership's stance, describing the showdown as a visible test of the company's founding commitment to steer frontier AI away from the most destabilizing military uses. These workers are not alone in supporting Anthropic's stance. Alphabet, Amazon, and Microsoft employees announced they were behind Anthropic.
Simultaneously, hundreds of Google and OpenAI employees signed an open letter calling on their companies to maintain Anthropic's red lines against mass surveillance and fully automated weaponry. They said they "hope our leaders will stand together" to reject the current Pentagon terms.

Donald Trump, on the other hand, late yesterday threw a fit: "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY. Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic's technology." Government agencies now have six months to transition to alternative tools.

Some people on the political right backed Anthropic's position. Retired General Jack Shanahan, for instance, who was in the middle of an earlier military-vs.-AI conflict between Project Maven and Google, did not take Trump's side. He wrote: "Despite the hype, frontier models are not ready for prime time in national security settings. Over-reliance on them at this stage is a recipe for catastrophe. Mass surveillance of US citizens? No thanks."

None of this stopped other AI companies from flirting with the Defense Department. In an internal memo, OpenAI CEO Sam Altman wrote: "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." He went on to say OpenAI was still open to making "a deal with the DoW that allows our models to be deployed in classified environments." That sounded like classic waffling to me, and sure enough, last night OpenAI agreed to work with the Defense Department.
Let's face it: OpenAI has a bottomless need for revenue to cover its endless capital expenses, so the execs were willing to make a deal with the devil. (Yeah, yeah, I know Altman talked about guardrails and protections. One word for you: hallucinations.) Sadly, if OpenAI hadn't made that deal, someone else surely would have.

So, if in 2028, AI-driven autonomous drones drop bombs on suspected illegal foreigners' homes in Minneapolis or anywhere else in the world, we'll know who to blame, much good that will do us then. This insane adoption of out-of-control AI for military purposes must be stopped now, lest the Terminator wars become fact rather than science fiction.