Google DeepMind CEO Demis Hassabis has flagged that while artificial intelligence can solve complex mathematical problems, it still fails at elementary questions, suggesting the technology's problem-solving skills are inconsistent.
Speaking on the Google for Developers podcast, Hassabis said, “It shouldn’t be that easy for the average person to just find a trivial flaw in the system,” noting that while Google’s Gemini models equipped with DeepThink can win gold medals at the International Mathematical Olympiad, the technology “still makes simple mistakes in high school maths.”
What’s ‘jagged intelligence’?
Hassabis characterised today’s AI as possessing “uneven” or “jagged” intelligence, remarkably strong in some areas yet surprisingly weak in others. His description aligns with Google CEO Sundar Pichai’s recently introduced term “AJI” (artificial jagged intelligence), used to describe systems with inconsistent capabilities.
The DeepMind CEO stressed that making AI less inconsistent requires more than simply scaling up data and computing power. “Some missing capabilities in reasoning and planning and memory still need to be cracked,” he said.
Although Hassabis predicted in April that artificial general intelligence (AGI) could emerge “in the next five to 10 years,” he admits that major challenges still stand in the way. His concerns mirror those of OpenAI CEO Sam Altman, who, following the launch of GPT-5, acknowledged that the model lacks continuous learning, a capability he views as crucial for achieving true AGI.
The warnings highlight a growing acknowledgment among AI leaders that issues like hallucinations, misinformation, and simple mistakes must be resolved before machines can reach human-level reasoning, a reminder akin to how social media platforms initially failed to foresee the large-scale impact of their technologies.