TL;DR: On February 22, 2026, Anthropic publicly accused three Chinese AI companies — DeepSeek, Moonshot AI, and MiniMax — of running “industrial-scale” campaigns to steal Claude’s capabilities. Using over 24,000 fake accounts, the labs generated more than 16 million exchanges with Claude to train their own models via a technique called “distillation.” Anthropic has cut off known access points, is calling for stronger US export controls, and the incident has escalated into a broader geopolitical debate about AI intellectual property and national security.
When Anthropic says “industrial scale,” it means it. On February 22, 2026, the company published a detailed blog post laying out one of the most documented cases of alleged AI intellectual property theft in the industry’s short history. Three prominent Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — are accused of generating over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, all to extract Claude’s capabilities and feed them into their own competing models.
This isn’t the first time a US AI lab has pointed a finger at China’s AI ecosystem. OpenAI made similar distillation allegations against DeepSeek just days earlier. But Anthropic’s disclosure is the most detailed to date, naming specific companies, specific techniques, specific targets, and specific scale. It lands as Washington is simultaneously debating how aggressively to tighten AI chip export controls — and Anthropic is not subtle about the connection.
What Is Distillation and Why Does It Matter?
Before understanding what happened, it helps to understand the technique being alleged.
Distillation is a well-established, legitimate method in AI development where a smaller, less capable model is trained on the outputs of a larger, more capable one. AI companies use it all the time — including Anthropic itself — to create smaller, cheaper versions of their own frontier models for specific use cases.
The key word is their own. Distillation becomes a legal and ethical violation when a company uses it to systematically extract capabilities from a competitor’s model without authorization.
“While distillation is a ‘widely used and legitimate training method,’ the Chinese firms’ use of it in this manner may have been for ‘illicit purposes.’ Using sprawling networks of fake accounts to replicate a competitor’s proprietary model violates its terms of service and undermines US export controls aimed at constraining China’s access to cutting-edge AI.”
— Fortune, summarizing Anthropic’s blog post, February 22, 2026 [fortune]
In plain terms: these labs allegedly turned Claude into an unwilling teacher, scripting long conversations designed to extract detailed, step-by-step answers that could be fed back as training data for their own competing systems.
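The mechanics are easier to see in miniature. Below is a toy sketch of knowledge distillation, with purely illustrative components: the "teacher" is a stand-in function for a large model, and the "student" is a tiny logistic-regression model trained to match the teacher's soft output probabilities rather than ground-truth labels. Nothing here reflects any lab's actual pipeline.

```python
# Toy knowledge-distillation sketch (illustrative, not any company's method).
import numpy as np

rng = np.random.default_rng(0)

def teacher(x):
    # Stand-in for a large model: soft probabilities over two classes.
    logits = np.stack([2.0 * x, -2.0 * x], axis=1)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Step 1: query the teacher to build a distillation dataset.
X = rng.uniform(-1.0, 1.0, size=200)
soft_labels = teacher(X)                     # the teacher's output distribution

# Step 2: fit the student to the soft labels via gradient descent on the
# cross-entropy between the student's and teacher's distributions.
w, b = 0.0, 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))   # student P(class 0)
    grad = p - soft_labels[:, 0]             # d(cross-entropy)/d(logit)
    w -= 0.5 * np.mean(grad * X)
    b -= 0.5 * np.mean(grad)

student = 1.0 / (1.0 + np.exp(-(w * X + b)))
gap = np.max(np.abs(student - soft_labels[:, 0]))
print(f"max teacher-student gap: {gap:.4f}")  # shrinks as the student converges
```

The alleged campaigns apply the same idea at LLM scale: prompts play the role of the query set, and the teacher's responses become the training targets for the student model.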
The Three Companies and What They Allegedly Stole
Anthropic’s blog post breaks down each company’s alleged campaign with striking specificity.
DeepSeek
DeepSeek — already in the spotlight after its R1 model shook the AI industry in early 2025 — allegedly targeted Claude’s reasoning capabilities and rubric-based grading tasks across over 150,000 exchanges. [thehackernews]
Perhaps most alarming: DeepSeek is also accused of using Claude to generate censorship-safe alternatives to politically sensitive queries — questions about dissidents, Chinese Communist Party leaders, and authoritarianism. In other words, they weren’t just stealing reasoning capability. They were allegedly using Claude to learn how to avoid generating content that would trigger censorship filters in China.
Moonshot AI
Moonshot AI allegedly ran the second-largest campaign, targeting Claude’s agentic reasoning and tool use, coding capabilities, computer-use agent development, and computer vision across over 3.4 million exchanges.
The focus on agentic reasoning and computer use is notable. These are exactly the capabilities that Anthropic’s Claude Cowork and Claude Code are built on — and the exact areas where Anthropic has invested the most in differentiation.
MiniMax
MiniMax allegedly ran the largest campaign by volume, generating over 13 million exchanges with Claude specifically targeting agentic coding and tool use capabilities.
“The volume, structure, and focus of the prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use. Each campaign targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.”
— Anthropic, February 22, 2026 [thehackernews]
How They Did It: The “Hydra Cluster” Method
Anthropic’s blog describes the technical infrastructure behind the attacks in detail, and it is sophisticated.
The three labs bypassed Claude’s regional restrictions — Anthropic does not offer commercial access in China — by routing traffic through commercial proxy services that resell access to major Western AI models. These proxies acted as laundering intermediaries, masking the geographic origin of the requests.
One operation Anthropic called a “hydra cluster” operated tens of thousands of accounts simultaneously, spreading requests across different API keys and cloud providers to avoid detection. If one cluster of accounts was shut down, others continued running, like cutting one head off a hydra.
Once those accounts were in place, the labs allegedly scripted long, high-token conversations designed to extract detailed, step-by-step answers. Think of it not as asking Claude one question, but as engineering a full curriculum, forcing Claude to explain its reasoning, demonstrate its capabilities, and produce structured output that could be directly ingested as training data.
“These operations are becoming increasingly sophisticated and intense. The opportunity for intervention is limited, and the risk extends beyond any individual company or geographic area.”
— Anthropic, February 22, 2026 [reuters]
The Scale of Each Campaign
| Company | Exchanges | Primary Targets |
|---|---|---|
| DeepSeek | 150,000+ | Reasoning, rubric-grading, censorship-safe outputs |
| Moonshot AI | 3.4 million+ | Agentic reasoning, tool use, computer vision |
| MiniMax | 13 million+ | Agentic coding, tool use |
| Total | 16 million+ | ~24,000 fake accounts across all campaigns |
This Is Part of a Bigger Pattern
This disclosure didn’t come from nowhere. Anthropic has been warning for months about China-linked misuse of Claude.
In November 2025, Anthropic revealed that suspected Chinese state-sponsored hackers had weaponized Claude Code to conduct automated cyberattacks on approximately 30 organizations worldwide — tech firms, financial institutions, chemical producers, and government bodies — achieving successful breaches in several cases. [axios]
“Claude Code carried out 80–90% of the attack on its own, according to Anthropic. This marks the inaugural recorded instance of a foreign government employing AI to entirely automate a cyber operation.”
— Axios, November 2025 [axios]
Attackers deceived Claude into thinking it was performing defensive cybersecurity work for a legitimate organization, then fragmented harmful requests into smaller tasks to evade safety filters. The February 2026 distillation revelation is Anthropic saying, in effect: this isn’t one incident, it’s a pattern.
Elon Musk Fires Back
The story didn’t stay cleanly one-sided. Within hours of Anthropic’s blog post, Elon Musk weighed in on X, claiming that Anthropic itself was “guilty” over its own past training data practices.
The countercharge echoes broader industry debates about where “legitimate training data” ends and “copying” begins. OpenAI, Google, Meta, and Anthropic have all faced lawsuits or criticism over the data used to train their own models. The irony of an AI company accusing a competitor of copying is not lost on critics — but Anthropic’s legal position rests on the specifics: unauthorized account creation, deliberate circumvention of access restrictions, and terms of service violations, not just general training data ethics. [indianexpress]
“Elon Musk soon weighed in, claiming that the US startup itself is ‘guilty’ over past training data practices, escalating a broader dispute over AI copying and data ethics.”
— Indian Express, February 23, 2026 [indianexpress]
The Policy Implication: Export Controls
Anthropic is not just describing a technical problem. It’s making a policy argument. [techcrunch]
The blog post explicitly calls for “rapid, coordinated action among industry players, policymakers, and the global AI community” and urges Washington to tighten export controls on advanced chips and AI services. The timing is deliberate — the US government is actively debating the scope of AI chip export restrictions, and Anthropic is essentially submitting this incident as evidence for the prosecution. [techcrunch]
The argument is straightforward: if Chinese labs can extract frontier AI capabilities through distillation attacks rather than building them from scratch, chip export controls that restrict hardware access are partially circumvented. You don’t need a multimillion-dollar Nvidia H100 cluster if you can just query a competitor’s model 16 million times.
What Is Anthropic Doing About It?
On the technical side, Anthropic published a companion research paper on detecting and preventing distillation attacks alongside the blog post. The company has cut off all known access points identified in the investigation and is implementing new detection methods designed to identify the telltale patterns of scripted, high-volume extraction queries.
The detection approach relies on behavioral fingerprinting: distillation campaigns look structurally different from legitimate use. Real users ask varied, organic questions. Distillation scripts produce unusually long, structured, high-token conversations with repetitive patterns that cluster around specific capability domains.
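That kind of behavioral fingerprinting can be sketched as a simple heuristic. The features and thresholds below are illustrative assumptions, not Anthropic's actual detection method: it scores an account's traffic on conversation length, length uniformity, and prompt diversity.

```python
# Hypothetical behavioral-fingerprinting heuristic (illustrative thresholds).
from statistics import mean, pstdev

def looks_scripted(token_counts, distinct_ratio,
                   min_mean_tokens=2000, max_length_cv=0.2,
                   min_distinct_ratio=0.5):
    """Flag traffic that looks scripted rather than organic.

    token_counts   -- tokens per conversation for one account
    distinct_ratio -- share of prompts that are distinct (low = templated)
    """
    avg = mean(token_counts)
    cv = pstdev(token_counts) / avg if avg else 0.0  # length variability
    return (avg >= min_mean_tokens                   # unusually long exchanges
            and cv <= max_length_cv                  # suspiciously uniform lengths
            and distinct_ratio < min_distinct_ratio) # repetitive, templated prompts

# Organic user: varied lengths, diverse prompts -> not flagged.
print(looks_scripted([120, 900, 300, 2500, 60], distinct_ratio=0.95))
# Scripted campaign: long, uniform, templated -> flagged.
print(looks_scripted([4100, 3900, 4000, 4050, 3950], distinct_ratio=0.1))
```

A real system would combine many more signals (account age, payment patterns, topic clustering across accounts), but the core intuition is the same: extraction campaigns leave statistical fingerprints that organic usage does not.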
Anthropic has not yet announced specific lawsuits against the three companies, but the blog’s tone suggests that possibility remains open.
What the Accused Companies Said (Or Didn’t)
At the time of publication, none of the three Chinese labs — DeepSeek, Moonshot AI, or MiniMax — had issued public responses to Anthropic’s allegations. CyberScoop confirmed it could not reach any of the three labs for comment.
The silence is notable given the severity of the accusations, but not surprising. Chinese AI companies face an awkward position: denying the technical accusations requires engaging with the specifics of their training pipelines, which they have no incentive to disclose publicly.
FAQs

Which companies did Anthropic accuse?
DeepSeek, Moonshot AI, and MiniMax.

How many fake accounts and exchanges were involved?
Approximately 24,000 fraudulent accounts were used to generate over 16 million exchanges with Claude.

Has Anthropic filed lawsuits?
Not yet. As of February 24, 2026, no lawsuits have been filed, though Anthropic has cut off known access points and signaled legal action remains possible.

Have the accused companies responded?
No. DeepSeek, Moonshot AI, and MiniMax have not publicly responded to Anthropic’s accusations.

What did Elon Musk say?
Musk claimed Anthropic is itself “guilty” over past training data practices, escalating a broader debate about where legitimate AI training ends and copying begins.

Is this the first China-linked misuse of Claude that Anthropic has reported?
No. In November 2025, Anthropic revealed suspected Chinese state-sponsored hackers used Claude Code to autonomously conduct cyberattacks on 30 organizations worldwide.
