MathGPT has surpassed Microsoft's "ToRA 13B", the model previously ranked #1 in benchmarks assessing mathematical aptitude
Mathpresso, the maker of QANDA, the largest AI-driven learning platform in Asia, has revealed that its large language model, MathGPT, has beaten OpenAI and Microsoft models to set a new world record in maths.
According to Interesting Engineering, MathGPT has surpassed Microsoft's "ToRA 13B", the previous record holder, to rank #1 in benchmarks that assess mathematical aptitude, such as "MATH" (12,500 challenging maths questions) and "GSM8K" (8,500 elementary school arithmetic problems).
MathGPT also outperformed OpenAI's GPT-4 on the MATH benchmark.
As part of a strategic cooperation with KT, Qanda and Upstage began developing MathGPT together in November of last year. Qanda provided Upstage with learning data from 10 million searches each day, including learning level, context, and interaction.
KT also made an $8 million investment in Mathpresso in September of last year to support LLM development.
Upstage refined its natural language-based model to enable logical inference and trained it on its own specialised solution to prevent hallucinations.
Unlike models trained on domain-specific learning data, such as expert knowledge, ChatGPT is trained on large amounts of general textual data. As a result, it exhibits the phenomenon of hallucination, in which it produces plausible-sounding responses that can convey false information.