AI “vibe-mathing” solves a 60-year math conjecture

AI vibe-mathing – A ChatGPT Pro–generated proof tackled a long-open Erdős limit question, using a fresh technique that experts say could change how researchers reason about large numbers.
AI “vibe-mathing” just leapt from internet meme to something closer to mathematical machinery, with a newly discussed proof that landed on a decades-old target.
The breakthrough centers on a limit conjecture tied to Erdős’s “primitive sets” and a score Erdős devised for them. The key twist: the solution credited to a large language model appears to take a route that experienced human researchers had not tried—at least not in the way that the newly simplified proof now suggests.
At the center of the story is Liam Price, a 23-year-old who is not a trained specialist in advanced mathematics but who used ChatGPT Pro to request an answer to an Erdős problem listed on a dedicated site. According to the account shared alongside the work, Price entered the problem casually, not as part of a long campaign of study. The model responded with something that looked like a correct proof, prompting further checking by others familiar with the landscape of Erdős-problem research.
The subject matter is deceptively simple to describe. A primitive set is a collection of whole numbers arranged so that no number in the set divides evenly into another. Erdős introduced the concept because it generalizes the idea behind prime numbers—primes are “primitive” in the sense that no prime divides another prime. But Erdős went beyond definitions. He attached a numerical score—calculated from the elements of a primitive set—and asked how large that score can get, and what happens under constraints that push the set toward enormous numbers.
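The definition above is easy to check in code. As a minimal sketch—assuming the score in question is Erdős’s classic sum of 1/(a·ln a) over the elements of a primitive set, which the article does not spell out explicitly but which matches the description—here is what the two concepts look like in Python:

```python
from math import log

def is_primitive(nums):
    """Erdős's 'primitive set' condition: no element divides another."""
    nums = sorted(set(nums))
    for i, a in enumerate(nums):
        for b in nums[i + 1:]:
            if b % a == 0:
                return False
    return True

def erdos_sum(nums):
    """Assumed Erdős score: sum of 1/(n * ln n) over elements n >= 2."""
    return sum(1.0 / (n * log(n)) for n in nums)

primes = [2, 3, 5, 7, 11, 13]
print(is_primitive(primes))     # True: no prime divides another prime
print(is_primitive([2, 3, 6]))  # False: 2 divides 6
print(erdos_sum(primes))        # the score for this finite set of primes
```

The sum converges even over infinite primitive sets, which is what makes it a usable score for comparing, say, the set of all primes against other primitive sets.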
One of these questions has the character of a “ceiling” followed by a “floor.” Erdős proposed an upper bound and later conjectured corresponding behavior as numbers grow without limit, imagining that the score would shrink toward a value of 1. Jared Duker Lichtman proved the earlier upper-bound-type claim as part of his doctoral work in 2022. Yet the lower-limit question resisted the mathematics community’s attempts for years, becoming a kind of open door that many knew how to approach—but couldn’t quite enter.
Misryoum: the surprising part is not only that an LLM can produce steps resembling a proof, but that experts now say it may have connected the problem to a known formula from a related area of mathematics—one that earlier attempts did not apply in this specific setting. Mathematician Terence Tao, who has become a prominent observer of AI’s recent incursions into mathematical proof attempts, characterized the history of human work as a “slight wrong turn” early on. He also suggested that the problem may have been blocked more by route-finding than by deep impossibility: not every hard-looking problem is hard for the same reason.
Misryoum analysis: why that distinction matters is practical. Many of the most publicized AI wins in mathematics are “impressive” in the sense that they yield correct answers, but less useful scientifically because the reasoning path may be indirectly copied from patterns in training data or may not generalize. Here, the emphasis from researchers is that the method appears meaningfully different. Even if the conjecture is ultimately resolved by humans refining the proof, the LLM’s role becomes that of a map generator—suggesting an unfamiliar trail that later specialists can widen into something solid.
Still, experts stress that the raw output from the model was not presentation-ready. Lichtman described the initial proof output as “poor” in form, requiring an expert to sift, interpret, and reorganize the reasoning. That’s a familiar theme in AI-assisted mathematics: the most valuable contribution may be the spark—an idea a researcher can then discipline into a coherent argument. In this case, Tao and Lichtman say they shortened and distilled the proof so it better captures the underlying insight rather than the model’s messier scaffolding.
Beyond the immediate result, there’s a broader claim forming around the technique itself. Tao described it as a “new way to think about large numbers and their anatomy,” implying that the approach might carry over to other problems where sets of integers behave in structured but unintuitive ways. Lichtman echoed a similar sentiment, connecting the proof to an intuition from graduate school: that certain Erdős-style problems feel “clustered together,” as if they share a unifying mathematical logic.
For a human perspective, the story lands with a familiar modern irony. The proof was prompted by an idle session—less like a planned research sprint and more like a tool test. That doesn’t replace the work of specialists; it simply changes who might stumble onto an unexpected lead. In fields where progress often depends on long-term persistence and deep familiarity with specialized techniques, a system that can propose unconventional starting points can compress the early stages of discovery.
The open question now—Misryoum’s editorial take—is how far this kind of “vibe-mathing” can scale from a single problem to an engine for systematic breakthroughs. A one-off win can’t settle the debate about AI’s ultimate role in mathematics. But if experts can repeatedly recover LLM-generated arguments and identify genuinely new conceptual connections—rather than only rearranged old ones—then the value becomes clear. The proof is less a spectacle than a signal: that language-model reasoning, carefully curated by mathematical expertise, may begin to offer not just answers but alternative lenses for seeing why a problem is true.