Anthropic has accused three Chinese AI firms, DeepSeek, Moonshot AI, and MiniMax, of extracting knowledge from its Claude model at a massive scale.
The company claims these firms created more than 24,000 fake accounts. They then used those accounts to generate over 16 million interactions with Claude.
According to Anthropic, the firms used these interactions to improve their own AI systems through a technique known as distillation.
Notably, the allegations arrive at a time when policymakers continue to debate U.S. AI chip exports to China.
Alleged Distillation
Anthropic states that the activity targeted its most advanced capabilities: agentic reasoning, tool use, and coding.
These features define how modern AI systems perform complex tasks. Therefore, extracting them offers significant competitive value.
Distillation itself is a standard method. Companies often use it to build smaller and more efficient models.
However, using it on a competitor’s system without authorization is a major concern.
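In broad terms, distillation trains a smaller "student" model to imitate a larger "teacher" model's output distribution rather than learning only from raw labels. The sketch below is purely illustrative of the general technique, not any firm's actual pipeline: it computes a temperature-softened KL-divergence loss, the standard training signal in knowledge distillation, using only the Python standard library.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature.

    Higher temperatures flatten the distribution, exposing the teacher's
    relative preferences among non-top answers ("dark knowledge").
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence KL(teacher || student) over softened distributions.

    Minimizing this pushes the student's outputs toward the teacher's,
    which is the core idea of knowledge distillation.
    """
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher incurs zero loss; a mismatched
# student incurs a positive loss that training would drive down.
teacher = [4.0, 1.0, 0.2]
print(distillation_loss(teacher, [4.0, 1.0, 0.2]))  # 0.0
print(distillation_loss(teacher, [0.2, 1.0, 4.0]))  # positive
```

In a real training loop this loss would be computed over a model's full vocabulary and backpropagated; the principle, matching a stronger model's output distribution, is the same.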
Earlier this month, OpenAI submitted a memo to U.S. lawmakers that also accused DeepSeek of using distillation to replicate aspects of its models.
This suggests the practice may extend across the industry.
Scale Of Activity
Anthropic provided details: it tracked more than 150,000 exchanges linked to DeepSeek.
Interactions focused on foundational reasoning and alignment. In particular, they explored censorship-safe alternatives to sensitive or policy-restricted queries.
Moonshot AI generated over 3.4 million exchanges about agentic reasoning, tool use, coding and data analysis, computer-use agent development, and computer vision.
In addition, the company recently released an open-source model, Kimi K2.5, along with a coding agent. This aligns with the capabilities Anthropic claims were targeted.
MiniMax accounted for the largest share, with around 13 million exchanges, which centered on agentic coding, tool use, and orchestration.
Anthropic reports that MiniMax redirected nearly half of its traffic to Claude during a new model release. This allowed it to extract updated capabilities in real time.
DeepSeek’s Rise

The allegations also connect to DeepSeek’s recent progress. About a year ago, the company gained global attention with its R1 reasoning model.
That model delivered performance close to leading U.S. systems. However, it did so at a much lower cost.
Now, DeepSeek is expected to release its next model, V4. Reports suggest it may outperform both Claude and ChatGPT in coding tasks.
Export Controls

Recently, the U.S. government allowed companies such as Nvidia to export advanced AI chips, including the H200, to China.
However, critics argue that loosening these restrictions could accelerate China’s AI development.
Anthropic echoes this concern. The company states that the scale of the alleged distillation activity would require access to advanced computing hardware.
In its view, limiting chip access could reduce both direct model training and large-scale extraction efforts.
Expert Opinion
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator and co-founder of CrowdStrike, stated that such activity is not surprising.
He noted that rapid progress in Chinese AI models may partly result from the distillation of U.S. systems.
He also argued that these developments strengthen the case for stricter export controls on AI chips.
Safety Risks
Anthropic emphasized that its models include safeguards designed to prevent harmful uses.
Its safeguards block requests for information on developing bioweapons and on conducting malicious cyber operations.
However, models created through distillation may not retain these protections. As a result, risks could increase.
Anthropic also raised geopolitical concerns, warning that distilled models could help authoritarian governments deploy advanced AI for offensive cyber operations and disinformation campaigns.
The technology can also be used for mass surveillance. If such models are open-sourced, these risks could increase significantly.

