+295%
Spike in ChatGPT uninstalls after OpenAI took the deal
#1
Claude hit the top of the U.S. App Store for the first time
~150
Retired judges backing Anthropic in court
2.5M+
People pledged to cancel ChatGPT via QuitGPT
The two redlines
What Anthropic refused to drop
1
No autonomous weapons
Anthropic insists Claude must not power weapons systems that select and engage targets without human oversight. Amodei argues current AI is not reliable enough for lethal autonomous decisions.
2
No mass domestic surveillance
Anthropic refuses to allow Claude to be used for bulk surveillance of American citizens. Amodei argues existing law has not caught up with AI’s ability to supercharge data collection at scale.
The Pentagon says it has no interest in either use case. Its objection is to the principle: no private company should constrain how the military uses contracted technology. It demands “any lawful purpose” language with no vendor-imposed limits.
How it escalated
Timeline of the crisis
Jul 2025
Anthropic wins $200M contract
Claude becomes the first AI model deployed on the Pentagon’s classified networks via Palantir. Defense officials call it “the most advanced model for sensitive military applications.”
Jan 2026
Hegseth demands “any lawful purpose”
The Defense Secretary’s AI memo requires all DoD contracts to adopt unrestricted usage language. Tensions escalate after Anthropic inquires about Claude’s role in the Maduro capture.
Feb 24
Face-to-face ultimatum
Hegseth meets Amodei at the Pentagon. Deadline: 5:01 PM Friday. Accept unrestricted terms or face contract termination, supply chain risk designation, and possible Defense Production Act invocation.
Feb 26
Amodei publishes his stand
“We cannot in good conscience accede to their request.” Blog post reaffirms both redlines. Pentagon calls Amodei a “liar” with a “God-complex.” Anthropic employees rally behind leadership.
Feb 27
The hammer falls
Trump orders all agencies to cease using Anthropic. Hegseth designates it a supply chain risk. OpenAI signs a classified deal with the Pentagon hours later. ChatGPT uninstalls spike 295%. Claude hits #1 on the App Store.
Mar 9
Anthropic files lawsuit
Sues in California federal court. Claims the designation violates the First Amendment and that the supply chain risk label exceeds the authority Congress granted. Seeks preliminary injunction.
Mar 13–17
Coalition rallies behind Anthropic
~150 retired judges, tech industry groups, Microsoft, former national security officials, and researchers from OpenAI and Google DeepMind all file amicus briefs supporting Anthropic.
Mar 18
DOJ fires back
Files brief urging court to reject injunction. Argues Anthropic’s refusal is “conduct, not speech.” Warns Anthropic could disable its AI mid-operation. Pentagon confirms replacement LLMs are in development.
Mar 25
Injunction hearing (upcoming)
Court will decide whether to pause the supply chain risk designation. The ruling will set the trajectory for the entire case.
The two sides
Anthropic vs. the Pentagon
Anthropic’s position
AI is not yet reliable enough for autonomous weapons or mass surveillance
Current law hasn’t caught up with AI’s surveillance capabilities
Not trying to force the government to use Claude; asking not to be punished for having terms
Supply chain risk label is meant for foreign adversaries, not domestic disagreements
These two redlines have never affected a single government mission
vs
Pentagon’s position
No private company should dictate how the military uses contracted technology
Anthropic could theoretically disable AI or alter behavior mid-operation
Vendor restrictions create unacceptable dependency risk for national security
Anthropic’s refusal is conduct (commercial decision), not protected speech
Pentagon says it has no interest in the use cases Anthropic objects to anyway
Public opinion
Consumer and industry reaction
ChatGPT uninstalls
+295% spike
ChatGPT 1-star reviews
+775% surge
Claude downloads
+51% day-over-day
Claude App Store
#1 in U.S. (first time ever)
QuitGPT pledges
2.5 million+ users
Support coalition
Who’s backing Anthropic
~150 retired judges
Bipartisan group says Pentagon misinterpreted the statute and violated proper procedures
Tech industry groups
Groups representing hundreds of Pentagon contractors warn of chilling effect on innovation
Microsoft
Says Claude products can remain available through its platforms for non-DoD customers
OpenAI + Google staff
Dozens of researchers from rival labs filed personal-capacity brief supporting Anthropic
Former nat’l security officials
Senior former defense and intelligence officials weigh in on behalf of Anthropic
Bipartisan lawmakers
Senate bill introduced to limit Pentagon AI use; both parties express precedent concerns
Why it matters
Four things this changes
1
AI is now a values-based consumer choice. The QuitGPT revolt proved users will reward companies that hold ethical lines, even at commercial cost. Model selection now carries brand risk.
2
There is no legal framework for military AI governance. Both sides are improvising. The Pentagon repurposes tools designed for foreign adversaries; Anthropic writes terms of service for risks no statute covers.
3
Rapid government AI adoption created fragile dependencies. Agencies got directives via social media with no formal transition plan. Claude is still being used in Iran operations weeks after the ban.
4
Anthropic’s commercial position got stronger, not weaker. The company gained market share by declining business. Safety commitments became the most powerful brand differentiator in AI.