Anthropic CEO says it 'cannot in good conscience accede' to Pentagon's demands for AI use

WASHINGTON — Anthropic CEO Dario Amodei said Thursday that the artificial intelligence company “cannot in good conscience accede” to the Pentagon’s demands to allow unrestricted use of its technology, deepening a public clash with the Trump administration, which has threatened to pull the company’s contract and take other drastic steps by Friday.

The maker of the AI chatbot Claude said in a statement that it is not walking away from negotiations, but that new contract language received from the Defense Department “made virtually no progress on preventing Claude’s use for mass surveillance of Americans or in fully autonomous weapons.”

Sean Parnell, the Pentagon’s top spokesman, said earlier on social media that the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”

Anthropic’s policies prevent its models from being used for those purposes. It’s the last of its peers — the Pentagon also has contracts with Google, OpenAI and Elon Musk