Liam Price, a 23-year-old amateur mathematician with no advanced training, solved a 60-year-old Erdős problem this month using ChatGPT GPT-5.4 Pro. Price entered the problem into the AI on an idle Monday afternoon in April 2026, unaware of its significance. The solution came from a single prompt and took roughly 80 minutes of AI reasoning. Fields Medalist Terence Tao validated the work, calling it a “meaningful contribution” that reveals a “previously undescribed connection” between integer anatomy and Markov process theory. The story is trending on Hacker News today with nearly 400 upvotes and 240 comments.
The democratization angle matters more than the mathematical details. Price used a $20-per-month ChatGPT Pro subscription, the same tool anyone can access. He had no PhD and no years of specialized training, just casual curiosity and commodity AI access. His quote captures the approach: “I didn’t know what the problem was—I was just doing Erdős problems as I do sometimes, giving them to the AI.” This is what researchers call “vibe-maths”: casual experimentation without rigorous methodology. The result challenges decades of assumptions about who can tackle advanced mathematics and which credentials matter.
Credentials Matter Less When AI Costs $20/Month
ChatGPT Pro provides access to GPT-5.4 Pro for $20 monthly, with up to 3,000 thinking messages per week. That is enough to attempt dozens of unsolved mathematical problems. Price had no advanced mathematics training, yet his casual prompting cracked a problem that had stumped professional mathematicians for six decades. The barrier to entry for advanced research just dropped from a decade of PhD training to the price of a Netflix subscription.
The implications extend beyond mathematics. When tools that required years of specialized training become accessible for $20, credentials lose their gatekeeping power. Results matter more than degrees, and access matters more than institutional affiliation. This doesn’t mean expertise becomes worthless—validation still requires experts, as we’ll see. But exploration is now democratized. Anyone with curiosity and $20 can attempt problems professionals abandoned.
The academic establishment won’t acknowledge this shift easily. Traditional gatekeeping depends on scarcity—limited access to tools, knowledge, and computational resources. When scarcity disappears, gatekeeping fails. In fact, AI culture already prioritizes speed, openness, and reproducibility over credentials. Mathematics is catching up.
The 90-Year-Old Technique Humans Never Tried
Erdős Problem 1196 concerns primitive sets: collections of whole numbers in which no number divides any other. The primes are the classic example, since no prime divides another prime. The question asks whether the sum of 1/(a·log a) over all numbers a in a primitive set approaches exactly one as the set grows infinitely large. Erdős, Sárközy, and Szemerédi posed the conjecture 60 years ago. Stanford mathematician Jared Lichtman spent four years of his doctorate proving a related primitive-set conjecture, yet Problem 1196 remained unsolved.
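The two definitions above are easy to make concrete. A minimal Python sketch (the helper names `is_primitive` and `erdos_sum` are mine for illustration, not from the proof):

```python
import math
from itertools import combinations

def is_primitive(s):
    """True if no element of s divides another (the definition of a primitive set)."""
    # combinations over a sorted set yields pairs (a, b) with a < b,
    # so it suffices to check whether a divides b.
    return not any(b % a == 0 for a, b in combinations(sorted(s), 2))

def erdos_sum(s):
    """Partial sum of 1/(a * log a) over a finite set of integers a > 1."""
    return sum(1 / (a * math.log(a)) for a in s)

# {2, 3, 5, 7} is primitive: primes never divide one another.
print(is_primitive({2, 3, 5, 7}))  # True
# {2, 3, 6} is not: both 2 and 3 divide 6.
print(is_primitive({2, 3, 6}))     # False
print(erdos_sum({2, 3, 5, 7}))     # partial sum, roughly 1.22
```

The conjecture concerns the behavior of this sum as a primitive set grows without bound; the finite partial sums here only illustrate the quantity being studied.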
GPT-5.4 Pro found a solution using Markov chains combined with von Mangoldt weights, a technique that has been available for 90 years but was never applied to this class of problems. Tao explained that previous researchers “collectively made a slight wrong turn” at the beginning: they followed what seemed like the obvious path, and it led nowhere. The AI, carrying no preconceptions about the “standard sequence of moves,” tried a completely different approach.
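The article doesn’t reproduce the proof, but the von Mangoldt function it names is classical: Λ(n) equals log p when n is a power of a prime p, and 0 otherwise, and it satisfies the divisor-sum identity Σ_{d|n} Λ(d) = log n — one precise sense in which it encodes the “anatomy” of an integer. A small background sketch (not the AI’s argument):

```python
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n is a power of a prime p, else 0 (classical definition)."""
    for p in range(2, n + 1):
        if n % p == 0:
            # p is the smallest prime factor; strip all copies of it
            while n % p == 0:
                n //= p
            # n was a pure power of p iff nothing is left over
            return math.log(p) if n == 1 else 0.0
    return 0.0  # n == 1 has no prime factors

# Divisor-sum identity: the Lambda-values of the divisors of n add up to log n.
n = 360  # 2^3 * 3^2 * 5
total = sum(von_mangoldt(d) for d in range(1, n + 1) if n % d == 0)
print(abs(total - math.log(n)) < 1e-9)  # True
```

For n = 360, the nonzero contributions come from the prime powers 2, 4, 8, 3, 9, and 5, giving 3·log 2 + 2·log 3 + log 5 = log 360.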
This isn’t pattern matching or literature search. The technique existed, but no one had connected it to primitive sets. The AI revealed what Tao called a “previously undescribed connection” between integer anatomy and Markov process theory. Lichtman confirmed it: “This new method is really confirming that intuition about problem clustering.” The approach may unlock solutions to related problems that follow similar patterns.
From Literature Search to Genuine Insight
Since October 2025, AI tools have helped solve roughly 100 Erdős problems, according to Tao’s tracker. Fifteen moved from “open” to “solved” since Christmas, with eleven crediting AI involvement. However, most weren’t novel insights—they were sophisticated literature searches. AI models found published papers humans weren’t aware of, pieced together existing theorems, and connected dots across disparate research. Useful, but not creative.
This case breaks that pattern. The von Mangoldt weight approach wasn’t hidden in an obscure paper; it simply hadn’t been tried. Tao called it “a new way to think about large numbers and their anatomy.” That’s not retrieval; that’s discovery. The debate over whether AI can generate genuine mathematical insight just shifted. Skeptics still have ammunition: GPT-5.2 scores 77% on competition mathematics but only 25% on open-ended research problems, and most AI-solved problems remain “low-hanging fruit” in Tao’s assessment, solvable with standard techniques rather than profound breakthroughs.
But the gap between “souped-up literature search” and “novel connection” matters. One is useful automation; the other is augmented intelligence. Either way, whether AI truly “understands” mathematics or merely navigates knowledge space brilliantly doesn’t change the practical outcome: problems get solved, and new approaches emerge.
Raw AI Output Still Needs Expert Refinement
The romantic narrative of amateur-plus-AI beating the experts needs qualification. Lichtman was blunt: “The raw output of ChatGPT’s proof was actually quite poor. So it required an expert to kind of sift through.” Price sent the solution to Kevin Barreto, a second-year mathematics undergraduate at Cambridge and an occasional collaborator, who recognized its significance and contacted Tao and Lichtman. The two then refined and shortened the proof after publication. The breakthrough wasn’t autonomous AI discovery; it was human-AI collaboration.
The new model looks like this: amateurs explore using $20 AI tools, experts validate and refine. Price provides the curiosity and prompting. The AI provides the search and connection-making. Barreto provides the recognition of importance. Tao and Lichtman provide the mathematical rigor and community acceptance. Remove any piece and the breakthrough doesn’t happen. This isn’t AI replacing mathematicians; it’s AI democratizing exploration while validation remains human.
Credentials still matter, but differently. You don’t need a PhD to find a solution anymore. However, you do need expert validation before the mathematical community accepts it. The barrier lowered, but didn’t disappear. In other words, the gatekeeping shifted from exploration to verification. That’s progress, even if it’s not revolution.
Key Takeaways
- Access beats credentials: A $20 ChatGPT Pro subscription can tackle problems that stumped PhDs for 60 years—exploration is democratized even if validation still requires experts.
- Novel approach validated: The AI found a 90-year-old mathematical technique (Markov chains with von Mangoldt weights) that no human researcher applied to this problem class, revealing what Tao calls a “previously undescribed connection.”
- Hybrid model, not replacement: Raw AI output was “quite poor” and required expert refinement—the breakthrough came from amateur exploration plus professional validation, not autonomous AI discovery.
- From search to insight: Unlike the ~100 previous AI-solved Erdős problems (mostly literature searches), this solution introduces a genuinely novel mathematical connection rather than just aggregating existing knowledge.
- New research paradigm: “Vibe-maths” experimentation by curious amateurs combined with rigorous expert validation creates a research model untethered from traditional academic gatekeeping.