Anthropic has accused several Chinese artificial intelligence developers, including DeepSeek, Moonshot, and MiniMax, of using large-scale distillation to improve their own models by drawing on the capabilities of its Claude models.
According to Anthropic, the operation involved 24,000 fake accounts that submitted 16 million requests. Distillation is a machine learning technique in which a smaller or less capable model (the student) is trained on the outputs of a more capable model (the teacher).
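To make the technique concrete, here is a minimal sketch of distillation in Python with PyTorch. The teacher and student networks are hypothetical stand-ins; in the scenario Anthropic describes, the teacher would be a proprietary model queried through an API rather than a local network.

```python
# Minimal sketch of knowledge distillation (illustrative only; not any
# company's actual pipeline). A small "student" network is trained to
# match the output distribution of a larger, frozen "teacher" network.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical models: in the reported scenario the "teacher" would be a
# large proprietary model reached via API, not a local network.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
teacher.eval()  # the teacher is frozen; only its outputs are used

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution

for step in range(100):
    x = torch.randn(64, 32)  # stand-in for real prompts/inputs
    with torch.no_grad():
        teacher_logits = teacher(x)  # analogous to collecting API responses
    student_logits = student(x)

    # KL divergence between the softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The same idea scales to language models, where the student learns from the teacher's generated text or token probabilities rather than classifier logits; collecting those outputs at volume requires exactly the kind of high-throughput querying described below.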
Although distillation itself is a legitimate technique, Anthropic claims that its use by these Chinese companies violates U.S. export restrictions and the company's licensing agreements.
Anthropic representatives warned: "Foreign labs that improperly use American models may eliminate protective mechanisms, transferring their capabilities into military and intelligence systems."
Other American companies, including OpenAI, had previously leveled similar accusations at DeepSeek, but Anthropic has provided more specific evidence.
Methods of Operation
According to Anthropic, the companies used networks of thousands of fake accounts, organized into "hydra clusters," to spread traffic across the company's API and third-party cloud platforms.
The requests showed high frequency, a narrow focus on specific capabilities, and heavy repetition, pointing to model training rather than ordinary user activity. For example, DeepSeek sent over 150,000 requests targeting logical reasoning tasks and "safe" rewrites of politically sensitive queries.
Moonshot, developer of the Kimi model, made over 3.4 million requests focused on agentic reasoning, coding, and computer vision. MiniMax ran the largest campaign, with over 13 million requests aimed at agentic coding; within 24 hours of an updated Claude release, the company redirected nearly half of its traffic to take advantage of the new capabilities.
Response Measures
In response, Anthropic announced plans to strengthen protections against similar abuse. The company is deploying classifiers and behavioral analysis systems to identify suspicious patterns in API traffic, sharing technical indicators with other AI labs, and tightening account verification.
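As a rough illustration of the kind of behavioral analysis described, the sketch below flags accounts whose traffic matches the hallmarks reported above: high volume, narrow task focus, and heavy prompt repetition. All thresholds, field names, and categories are hypothetical; Anthropic has not published its detection logic.

```python
# Illustrative heuristic for spotting distillation-like API traffic.
# Thresholds and schema are hypothetical, not Anthropic's actual detectors.
from collections import Counter
from dataclasses import dataclass

@dataclass
class AccountStats:
    requests_per_hour: float
    prompt_hashes: list[str]    # hashes of normalized prompts
    task_categories: list[str]  # e.g. "reasoning", "coding", "vision"

def looks_like_distillation(stats: AccountStats) -> bool:
    # 1. High frequency: sustained volume far above normal interactive use.
    high_volume = stats.requests_per_hour > 500

    # 2. Narrow focus: most traffic concentrated in a single task category.
    top_share = (
        Counter(stats.task_categories).most_common(1)[0][1]
        / len(stats.task_categories)
    )
    narrow_focus = top_share > 0.8

    # 3. Repeatability: many near-duplicate prompts, as with templated queries.
    unique_ratio = len(set(stats.prompt_hashes)) / len(stats.prompt_hashes)
    repetitive = unique_ratio < 0.3

    return high_volume and narrow_focus and repetitive
```

In practice, per-account heuristics like these would feed into broader classifiers that correlate behavior across accounts, since splitting traffic over a "hydra cluster" keeps each individual account below simple volume thresholds.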
It is also developing safeguards at both the product and model levels to make its outputs harder to use for unauthorized training, without degrading the experience for legitimate users.