Google DeepMind’s Gemini AI wins gold medal at International Math Olympiad

New Delhi: Google DeepMind has achieved a historic breakthrough in artificial intelligence, with its Gemini AI delivering a gold medal-level performance at the International Mathematical Olympiad (IMO), the world's most challenging mathematics contest for high school students. The advanced model was the first to be officially awarded a gold rating by the competition's organisers, scoring 35 out of a maximum 42 points by solving five of the six complex problems.

The result is a major breakthrough in AI reasoning and strengthens Google's position in the race among tech giants. Unlike earlier systems, which required problems to be manually translated into code, Gemini Deep Think tackled questions written in natural language and produced formal proofs, all within the 4.5-hour time limit imposed by the IMO.

New 'parallel thinking' method behind the win

This year's entry was a new version of Gemini, dubbed Deep Think, which uses a reasoning technique called parallel thinking. Rather than following a single logical path, Deep Think explores numerous possible solutions simultaneously before settling on an answer. This strategy enabled the model to generate clear, rigorous proofs that impressed the competition's graders.

The success is a significant step up from Google's 2024 silver-medal effort, in which the combined AlphaProof and AlphaGeometry system solved only four problems and required human experts to translate the natural-language problems into a formal format.

DeepMind follows rules, wins praise

DeepMind's cautious approach to releasing its results drew praise in the AI community, particularly in contrast with OpenAI, whose early announcement of its own model's mathematical performance raised eyebrows. That lack of transparency led to backlash against OpenAI for using unofficial grading panels to bypass the official competition rules.

Demis Hassabis, CEO of Google DeepMind, stressed that the company wanted to be fair and therefore waited until the IMO board had completed its work, verified all the entries and recognised the human students first. Many applauded this as a respectful and responsible approach.

A new era of AI problem solving

Researchers say the gold medal-level result marks a new era in AI problem-solving, one based not on memorisation or rote formula application. Gemini was trained on a curated set of high-quality mathematics data and refined with reinforcement learning to solve open-ended problems through language understanding and abstract logic.

Analysts are describing the success as a preview of emergent cognition. In one instance, Gemini solved a complex number theory problem using elementary methods, more elegantly than many human entrants. The outcome shows that general-purpose AI models are now capable of tackling some of the most intellectually demanding tasks in mathematics, something previously thought unimaginable.