
Recently, AI models developed by OpenAI and Google DeepMind achieved gold medal-level scores at the 2025 International Mathematical Olympiad (IMO), a prestigious math competition for high school students. The two companies announced the results independently in recent days.

The outcome highlights the rapid advancement of AI systems and the intense rivalry between Google and OpenAI. The race to be perceived as the leader in AI development is fierce, and that perception carries significant weight in attracting top AI talent. Many AI researchers have backgrounds in competitive math, which makes benchmarks like the IMO particularly meaningful to them.

Last year, Google secured a silver medal at the IMO using a formal system that required humans to translate the problems into a machine-readable format. This year, both OpenAI and Google entered informal systems that could read the questions and generate proof-based answers in natural language, with no human translation step. Both companies say their models scored higher than most of the human contestants and than Google's system from last year.

Researchers from OpenAI and Google told TechCrunch in interviews that the gold medal performances represent significant breakthroughs for AI reasoning models, particularly in non-verifiable domains. AI reasoning models excel at tasks with clear-cut answers, such as math or coding, but struggle with more ambiguous problems. The recent results suggest progress toward addressing that limitation.

However, Google has raised questions about how OpenAI conducted and announced its gold medal IMO performance. After OpenAI's announcement on Saturday morning, Google DeepMind's CEO and researchers criticized OpenAI on social media for announcing its achievement prematurely and for not having its model's results officially graded by the IMO.

Thang Luong, a senior researcher at Google DeepMind and lead for the IMO project, explained to TechCrunch that Google delayed its announcement to respect the participating students, waiting for the IMO president’s blessing and official grading before announcing its results on Monday morning.

Luong emphasized that any claim of gold medal performance should follow the IMO's official grading guidelines. Noam Brown, a senior OpenAI researcher who worked on the IMO model, said OpenAI hired third-party evaluators, themselves former IMO medalists, to grade its model's answers, and that the IMO then asked the company to delay its announcement until after the official award ceremony on Friday night.

The IMO did not respond to TechCrunch's request for comment. Google's point about following the official process is fair, but the back-and-forth risks overshadowing the bigger story: AI models from leading labs are improving rapidly. Students from around the world competed at this year's IMO, and only a small percentage scored as high as OpenAI's and Google's AI models.

The AI race appears more closely matched than either company might admit: OpenAI previously held a significant lead, but that edge seems to have narrowed. The upcoming release of GPT-5 is expected to shape perceptions of OpenAI's standing in the AI industry.



