Google released a report on Thursday warning of an increase in “distillation attacks” targeting its Gemini AI model to steal its foundational technology. The company revealed that adversaries used over 100,000 AI-generated prompts to systematically probe the model, a technique known as “model extraction.” This surge in state-sponsored digital espionage has been traced to actors in countries including China, Russia, and North Korea.
The recent attacks involve using legitimate access to flood a mature machine learning model with queries. Adversaries are effectively working to extract enough data to clone the model’s features and logic into new, separate models. The report further clarifies that while this activity does not pose a direct threat to everyday users, it presents a substantial danger to service providers and developers by infringing on their intellectual property.
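To illustrate the mechanics being described, here is a minimal, purely hypothetical sketch of model extraction. It assumes a stand-in “teacher” that is just a hidden linear function; a real attack would target a large language model via its API, but the query-then-fit pattern is the same: flood the model with inputs, record its outputs, and fit a “student” that mimics them.

```python
import random

# Hypothetical "teacher" standing in for a proprietary model whose
# internals the attacker cannot see (illustrative assumption: a hidden
# linear rule y = 3*x + 7 plays the role of the model's decision logic).
def teacher(x):
    return 3.0 * x + 7.0

# Extraction step: flood the teacher with queries and record the
# (input, output) pairs, as the article describes.
random.seed(0)
queries = [random.uniform(-10, 10) for _ in range(1000)]
answers = [teacher(x) for x in queries]

# Distillation step: fit a "student" to the teacher's answers with
# ordinary least squares, recovering the hidden slope and intercept.
n = len(queries)
mean_x = sum(queries) / n
mean_y = sum(answers) / n
slope = sum((x - mean_x) * (y - mean_y)
            for x, y in zip(queries, answers)) \
        / sum((x - mean_x) ** 2 for x in queries)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # recovers 3.0 and 7.0
```

The point of the toy example is that the attacker never sees the teacher’s code: enough query–response pairs alone are sufficient to reconstruct its behavior, which is why high query volumes (such as the 100,000+ prompts Google reported) are the telltale sign of this kind of attack.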
Commenting on the findings, John Hultquist, chief analyst for the Google Threat Intelligence Group, said: “We are going to be the canary in the coal mine for more incidents,” suggesting that Google may simply be the first target and that other major AI developers will inevitably face similar theft attempts.
The disclosure comes in the wake of intensifying global competition in artificial intelligence. As Chinese companies like ByteDance advance video generation tools, such as the recently launched Seedance 2.0, Google’s report warned that “distillation attacks” are becoming a prime concern. The revelation that over 100,000 prompts were used to clone model capabilities has led to a heightened focus on protecting intellectual property.