China’s “AI Tigers” Accused of Copying US AI Models
American artificial intelligence company Anthropic has leveled serious accusations against three major Chinese AI laboratories, alleging they illegally extracted capabilities from its Claude model to accelerate their own development — and that doing so poses significant national security risks.
According to a blog post published by Anthropic, the Chinese firms — DeepSeek, MiniMax, and Moonshot AI — collectively created more than 24,000 fraudulent accounts and used them to generate over 16 million interactions with Claude. Those exchanges were then used to train the Chinese companies’ own models through a technique called distillation.
What Is Distillation and Why Does It Matter?
Distillation is a well-established practice within the AI industry, commonly used by companies to create leaner, more cost-effective versions of their own flagship models. However, most major proprietary AI providers, including Anthropic, explicitly prohibit third parties from using this method on their systems. Notably, Claude is not accessible to users in China, making the alleged account creation a deliberate workaround.
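In its classic form (soft-label distillation), a smaller "student" model is trained to match a larger "teacher" model's output distribution; the API-based extraction alleged here instead trains on the teacher's generated responses, but the underlying idea is the same. A minimal sketch of the classic loss, using hypothetical logits over a toy 3-token vocabulary rather than real model outputs:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher T softens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between teacher and student soft targets.

    Training the student to minimize this pushes its output
    distribution toward the teacher's."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits, not taken from any real model.
teacher = [4.0, 1.0, 0.5]
student_close = [3.9, 1.1, 0.4]   # already mimics the teacher well
student_far = [0.5, 1.0, 4.0]     # disagrees with the teacher

assert distillation_loss(teacher, student_close) < distillation_loss(teacher, student_far)
```

Because the loss only needs the teacher's outputs, not its weights, anyone with API access can in principle run this process against a proprietary model, which is exactly why providers forbid it in their terms of service.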
Anthropic’s accusations follow a similar set of claims made by OpenAI earlier this month. In a memo addressed to the US House Select Committee on China, OpenAI alleged that DeepSeek and other Chinese AI firms had been improperly distilling its ChatGPT models over the past year. OpenAI characterized DeepSeek’s rapid progress as being built on a foundation of “free-riding” off capabilities developed by American frontier AI labs.
DeepSeek made global headlines last year after unveiling a powerful AI model that appeared to rival the performance of leading Western systems while requiring significantly fewer computational resources. The release sent shockwaves through the industry, calling into question whether US export controls on advanced semiconductors were having their intended effect.
The Case for Keeping Export Controls in Place
Anthropic, however, argues the opposite conclusion should be drawn. Rather than suggesting export controls have failed, the company contends that the very fact Chinese labs resorted to distillation — rather than developing cutting-edge models purely through independent innovation — actually validates the logic behind those restrictions. The company has long publicly supported export control policies aimed at maintaining American leadership in AI development.
“In reality, these advancements depend in significant part on capabilities extracted from American models, and executing this extraction at scale requires access to advanced chips,” Anthropic stated.
Security Risks and a Narrow Window to Act
Beyond intellectual property concerns, Anthropic raised alarms about the potential dangers of models built through illicit distillation. The company warned that such models may lack the safety guardrails that American AI developers incorporate into their systems. Without those protections, the models could potentially be exploited to facilitate cyberattacks, assist in the development of biological weapons, or enable authoritarian governments to conduct mass surveillance, run disinformation campaigns, or launch offensive cyber operations.
“The window to act is narrow,” Anthropic cautioned.
DeepSeek, MiniMax, and Moonshot AI — whose model Kimi has gained considerable traction in China — have collectively earned the nickname “AI tigers” due to their rapid ascent. All three companies currently have models among the top 15 on the Artificial Analysis leaderboard, a widely referenced industry ranking.
None of the three companies has publicly responded to Anthropic’s allegations, and DeepSeek has similarly stayed silent on OpenAI’s earlier claims. Anthropic said it is continuing to monitor for further unauthorized use of its technology.

