Anthropic's $200M Pentagon Contract at Risk Over Objections to Domestic Surveillance, Autonomous Deployments (reuters.com)
Talks "are at a standstill" for Anthropic's potential $200 million contract with America's Defense Department, reports Reuters (citing several people familiar with the discussions.") The two issues?
- Using AI to surveil Americans
- Safeguards against deploying AI autonomously
The company's position on how its AI tools can be used has intensified disagreements between it and the Trump administration, the details of which have not been previously reported... Anthropic said its AI is "extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work..."
In an essay on his personal blog, Anthropic CEO Dario Amodei warned this week that AI should support national defense "in all ways except those which would make us more like our autocratic adversaries."
A person "familiar with the matter" told the Wall Street Journal this could lead to the cancellation of Anthropic's contract: Tensions with the administration began almost immediately after it was awarded, in part because Anthropic's terms and conditions dictate that Claude can't be used for any actions related to domestic surveillance. That limits how many law-enforcement agencies such as Immigration and Customs Enforcement and the Federal Bureau of Investigation could deploy it, people familiar with the matter said. Anthropic's focus on safe applications of AI — and its objection to having its technology used in autonomous lethal operations — have continued to cause problems, they said.
Amodei's essay calls for "courage, for enough people to buck the prevailing trends and stand on principle, even in the face of threats to their economic interests and personal safety..."