US bans Anthropic's AI tools over weapons and surveillance concerns

A bold move against AI risks shakes federal tech contracts. Will this ban reshape how government uses artificial intelligence, or spark a legal backlash?

The image shows a cartoon of a man in a police uniform holding a sign that reads "I suspect our AI is plotting something against us" while two robots stand in front of him, one of them holding a paper with text on it. In the background, there is a wall with a screen and buttons.

The US government has ordered federal agencies to stop using technology from AI company Anthropic. The decision follows a dispute over the firm's refusal to allow its AI, Claude, to be used for autonomous weapons and mass surveillance. Agencies now have six months to phase out the software and switch to alternatives like ChatGPT or Gemini.

The ban was announced last Friday after President Donald Trump's administration classified Anthropic as a supply-chain security risk. Defense Secretary Lloyd Austin issued the designation, citing concerns over the company's AI safety policies. As a result, the Pentagon immediately barred contractors from using Anthropic tools in military projects.

Several government departments, including State and Health, have already begun replacing Anthropic systems. Agencies must now inventory their tech stacks, halt new contracts involving Anthropic, and modify existing agreements to remove dependencies. Contractors are also required to review subcontracts and prepare migration plans to avoid prohibited tools.

The Treasury Department clarified that the action does not involve financial sanctions. Treasury Secretary Janet Yellen confirmed no orders or predictions were made about an 'Anthropic exit from the financial system'. Meanwhile, commercial customers outside government remain unaffected, though some may reassess vendor risks independently.

The ban could still face challenges. Congress or judicial review might overturn the designation if risk assessments change. Policy reversals could also emerge from updated executive guidance or interagency reviews.

Federal agencies must fully transition away from Anthropic within six months. The move reflects broader tensions over AI governance, particularly around military and surveillance applications. For now, the ban remains in place unless legal or legislative action intervenes.
