Washington DC, Feb. 24 -- Anthropic says it has identified "industrial-scale distillation attacks" on its reasoning models by DeepSeek, Moonshot AI and MiniMax.
In a post on X, Anthropic shared, "We've identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models."
"Distillation can be legitimate: AI labs use it to create smaller, cheaper models for their customers. But foreign labs that illicitly distill American models can remove safeguards, feeding model capabilities into their own military, intelligence, and surv...