Anthropic's Pentagon deal collapse reshapes trust in ethical AI

One CEO's stand against military AI sent shockwaves through the industry. Now users are abandoning ChatGPT for Claude—but is ethics the real winner?

A failed deal between Anthropic and the Pentagon has triggered a shift in public trust towards AI companies. The breakdown came after Anthropic's CEO refused to allow military use of its AI for autonomous weapons and mass surveillance. Meanwhile, OpenAI faced criticism for capitalising on the situation—leading to a surge in uninstalls of its ChatGPT app.

Anthropic's negotiations with the Pentagon collapsed when CEO Dario Amodei insisted on strict limits. The company blocked applications of its AI in autonomous weapons and large-scale domestic surveillance. This stance, though principled, ended the potential partnership.

Shortly after the collapse, OpenAI moved to capitalise on the fallout; CEO Sam Altman later admitted the company had handled the situation poorly. Many users saw the move as opportunistic, and ChatGPT uninstalls spiked 295% the following day.

In contrast, Anthropic's AI model, Claude, experienced a sharp rise in downloads. The incident highlighted differing approaches to AI ethics between the two firms.

The failed Pentagon deal exposed the tension between commercial AI development and ethical boundaries. OpenAI's response translated into a measurable drop in user trust, reflected in app removals, while Anthropic gained traction among users seeking an alternative with stricter ethical policies.
