AutoScientist lets AI models train themselves faster
Adaption's AutoScientist system is designed to adapt models to particular applications
Adaption on Wednesday announced AutoScientist, an AI tool that rethinks how models learn by automating the fine-tuning process and allowing models to improve themselves.
The company claims the system has doubled win rates across different models, a significant stride toward making frontier-level AI training accessible outside the labs of OpenAI, Anthropic, and Google.
AutoScientist represents a departure from conventional AI development. Rather than manually designing datasets and training routines, the system co-optimises both simultaneously, learning the most efficient approach for any given capability.
"What's super exciting about it is that it co-optimises both the data and the model and learns the best way to basically learn any capability," co-founder and CEO Sara Hooker told TechCrunch. "It suggests we can finally allow for successful frontier AI trainings outside of these labs."
Hooker, who previously served as VP of AI research at Cohere, built AutoScientist on Adaption's existing Adaptive Data offering. While Adaptive Data helps teams build high-quality datasets over time, AutoScientist converts those continuously improving datasets into continuously improving AI models.
For years, researchers assumed that building competitive AI models required the computational resources and talent of major technology companies. AutoScientist suggests that intelligent automation could level the playing field.
"Our view at Adaption is that the whole stack should be completely adaptable and should basically optimise on the fly to whatever task you have," Hooker explained. The philosophy extends beyond AutoScientist.
The doubled win-rate claim requires scrutiny: conventional benchmarks like SWE-Bench or ARC-AGI don't apply, since AutoScientist is task-specific and measures improvement only on the particular application a model is adapted to.
Adaption acknowledges this limitation while remaining confident in the results. "The same way that code generation unlocked a lot of tasks, this is going to unlock a lot of innovation at the frontier of different fields," Hooker said.