Why 1.5M Users Canceled ChatGPT: The ‘QuitGPT’ Movement and the Rise of Ethical AI
【This article may contain promotional content】

Key Takeaways
- The “QuitGPT” Exodus: In late February 2026, 1.5 million users canceled their paid ChatGPT subscriptions within 48 hours after OpenAI signed a contract to deploy AI models on the Pentagon’s classified networks.
- Anthropic’s Ethical Stand: Anthropic CEO Dario Amodei explicitly refused the Pentagon’s demands for mass surveillance and autonomous weapons, resulting in the company being excluded from government systems and labeled a “supply chain risk”.
- The Rise of Claude: Following the controversy, Anthropic’s Claude surpassed ChatGPT to become the #1 app on the US App Store, doubling its paid subscriber base.
- A Broader Industry Shift: Major tech giants, including Google (Gemini), Meta, and xAI, have quietly removed military use bans to pursue lucrative national defense contracts.
What is the QuitGPT Movement? Why Are Users Leaving ChatGPT?

A massive wave of cancellations, dubbed the “QuitGPT” movement, is currently shaking the AI industry. But why are users abandoning one of the most popular tech tools in history?
The primary catalyst for the QuitGPT movement is OpenAI’s controversial agreement to allow the US Department of Defense (the Pentagon) to use its AI models for military and surveillance purposes.
In late February 2026, news broke that OpenAI had signed a contract to integrate its AI into the Pentagon’s classified networks. The public backlash was immediate and unprecedented: within just 48 hours, 1.5 million users canceled their paid ChatGPT subscriptions, and the app’s daily uninstall rate skyrocketed by 295%.
Anthropic’s Refusal vs. OpenAI’s Immediate Signing
The episode stands in stark contrast to the response of another leading AI lab, Anthropic. When the Pentagon requested the use of AI for “mass surveillance of American citizens” and “autonomous weapons systems,” Anthropic CEO Dario Amodei refused on ethical grounds. In response, Anthropic was banned from government systems and, in an unprecedented move, designated a “supply chain risk” to national security.
On the same night Anthropic was excluded for holding its ethical line, OpenAI CEO Sam Altman signed the Pentagon contract. The move capped a longer retreat from the company’s original safety commitments: OpenAI had quietly deleted the clauses banning military and warfare uses of its AI two years earlier. By bending its own safety standards for a lucrative government deal, OpenAI sparked deep distrust among users who prioritize corporate transparency and governance.
The “Lawful Purpose” Loophole and Surveillance Concerns

How is OpenAI addressing the massive backlash? The company insists that its safety principles remain intact and that the technology will be used only for “lawful purposes,” but experts and users remain deeply skeptical, pointing to legal loopholes.
Sam Altman took to social media to manage the crisis, stating that prohibitions against mass surveillance and autonomous weapons are embedded in the contract. However, under existing US frameworks such as Executive Order 12333, signed in 1981, collecting Americans’ data without a court warrant can be interpreted as “lawful” if the communications are routed through foreign servers.
Because OpenAI has effectively delegated the interpretation of “lawful” to the government, critics argue the company has opened a backdoor to mass surveillance. For now, OpenAI shows no intention of withdrawing from the contract, saying only that it will implement technical safeguards.
ChatGPT vs. Anthropic Claude: Which AI Should You Choose?

In the wake of the QuitGPT movement, users looking for a secure and ethical alternative have flocked to Anthropic’s Claude. Following OpenAI’s Pentagon deal, the Claude app dethroned ChatGPT to claim the #1 spot on the US App Store. Anthropic saw its free user base jump by 60%, while its paid subscriber count doubled.
Beyond corporate ethics, Claude is highly regarded for its enterprise-grade performance:
- Massive Context Window: Claude’s latest models can process up to 750,000 words at once (the equivalent of several novels), allowing it to extract precise details from long inputs without losing context, a common weakness in ChatGPT.
- Advanced File Processing: Features like “Claude co-work” enable users to automate repetitive document tasks and process large batches of files, making it a powerful tool for business environments (a minimal API sketch follows this list).
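To make the long-document workflow concrete, here is a minimal sketch of feeding an entire report to Claude in a single request through the Anthropic API. This is an illustration, not official sample code: it assumes the anthropic Python SDK is installed, an ANTHROPIC_API_KEY environment variable is set, and the model name is a placeholder.

```python
# Minimal sketch: send one long document to Claude in a single request.
# Assumes the `anthropic` Python SDK is installed and ANTHROPIC_API_KEY is set;
# the model name below is a placeholder, not a confirmed release.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A large context window means the whole report can go in one request,
# instead of being chunked and summarized piece by piece.
with open("annual_report.txt", encoding="utf-8") as f:
    document = f.read()

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder; use whatever model is current
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Summarize the key findings of this report, citing exact "
                   "figures where they appear:\n\n" + document,
    }],
)
print(message.content[0].text)
```

Keeping the document in one request is what lets the model quote exact figures from the first page and the last page in the same answer.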
Side-by-Side Comparison: OpenAI vs. Anthropic
| Feature / Stance | OpenAI (ChatGPT) | Anthropic (Claude) |
|---|---|---|
| Military & Surveillance Use | Contracted with the Pentagon (Allowed for “lawful purposes”) | Strictly refuses use for mass surveillance and autonomous weapons |
| Corporate Priority | Performance, market share, and practical utility | Safety, ethics, and transparency |
| Government Status | Deployed on classified defense networks | Designated a “supply chain risk” / Banned from use |
| Core Strengths | General-purpose dialogue, fast execution of broad tasks | Deep context understanding, complex data analysis of massive documents |
Today, choosing an AI is no longer just about which model is “smarter.” It’s about aligning with the underlying philosophy of how AI should be integrated into society.
> Download the “Claude by Anthropic” app from Google Play here (for Android)
> Download the “Claude by Anthropic” app from the App Store here (for iPhone)
> Download the “Claude” app for Windows and macOS here
Where Do Google Gemini, Meta, and xAI Stand on Military AI?

As the divide between OpenAI and Anthropic deepens, other Big Tech companies are quietly shifting their allegiances toward national security and military cooperation.
In 2018, Google faced intense employee protests over “Project Maven,” a military AI initiative, and was forced to pull out of the defense contract. Today, however, Google has quietly removed military-use bans from its AI guidelines and is actively positioning Google Gemini to support the national security of democratic governments.
Other major players are following suit:
- Meta: Actively pursuing military contracts with the Department of Defense.
- xAI (Elon Musk): Has accepted Pentagon conditions and is integrating its AI into national security networks.
With massive funding available in the defense sector, Anthropic remains a rare outlier in the AI industry, drawing a hard red line against profiting from military applications.
Next Steps for Users in the AI Era
The criteria for selecting AI tools are rapidly shifting from mere “convenience” to “corporate ethics and transparency.” As AI becomes woven into daily life, understanding how your data is handled, and the philosophy of the companies behind the tech, is essential.
Here are three actionable steps you can take today:
- Review Terms and Policies: Check the privacy policies and data handling practices of the AI tools you use daily.
- Test Alternatives: Don’t default to ChatGPT. Try models like Claude or Gemini to experience differences in output accuracy and specialized features firsthand; a simple side-by-side script follows this list.
- Vote with Your Wallet: Look beyond performance. Ask yourself, “Do I agree with this company’s ethics?” Choose to pay your monthly subscription to companies that align with your personal values.
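For readers who want to try that comparison hands-on, here is a minimal sketch that sends the same prompt to both Claude and Gemini. It is an illustration under stated assumptions: the anthropic and google-generativeai Python SDKs are installed, API keys sit in ANTHROPIC_API_KEY and GOOGLE_API_KEY, and the model names are placeholders to swap for current versions.

```python
# Minimal sketch: one prompt, two models, compare the answers yourself.
# Assumes the `anthropic` and `google-generativeai` SDKs are installed, with
# keys in ANTHROPIC_API_KEY / GOOGLE_API_KEY; model names are placeholders.
import os

import anthropic
import google.generativeai as genai

PROMPT = "In three sentences, explain how my chat data may be used for training."

# Claude (Anthropic)
claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
claude_reply = claude.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=300,
    messages=[{"role": "user", "content": PROMPT}],
)
print("Claude:\n" + claude_reply.content[0].text)

# Gemini (Google)
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini_reply = genai.GenerativeModel("gemini-1.5-pro").generate_content(PROMPT)
print("Gemini:\n" + gemini_reply.text)
```

Running your own everyday prompts through both models says more about fit than any benchmark table.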
As technology evolves, the responsibility falls on individual users to critically evaluate information and proactively choose the AI tools that shape our future.