“Erdos problem #728 was solved more or less autonomously by AI”

1 day ago (mathstodon.xyz)

I work at Harmonic, the company behind Aristotle.

To clear up a few misconceptions:

- Aristotle uses modern AI techniques heavily, including language modeling.

- Aristotle can be guided by an informal (English) proof. If the proof is correct, Aristotle has a good chance at translating it into Lean (which is a strong vote of confidence that your English proof is solid). I believe that's what happened here.

- Once a proof is formalized into Lean (assuming you have formalized the statement correctly), there is no doubt that the proof is correct. This is the core of our approach: you can do a lot of (AI-driven) search, and once you find the answer you are certain it's correct no matter how complex the solution is.

Happy to answer any questions!

  • How do you verify that the AI translation to Lean is a correct formalization of the problem? In other fields, generative AI is very good at making up plausible sounding lies, so I'm wondering how likely that is for this usage.

    • That's what's covered by the "assuming you have formalized the statement correctly" parenthetical.

      Given a formal statement of what you want, Lean can validate that the steps in a (tedious) machine-readable purported proof are valid and imply the result from accepted axioms. This is not AI, but a tiny, well reviewed kernel that only accepts correct formal logic arguments.

      So, if you have a formal statement that you've verified to represent what you are interested in by some other means, Lean can tell you whether the proof created by genAI is correct. Basically, there is a nigh infallible checker that won't accept incorrect hallucinations.
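
      To make that concrete with a toy example of my own (nothing to do with the Erdős problem, and assuming a recent Lean 4): the statement before the ":=" is the part a human has to read and trust; the proof after it can come from a person, a search procedure, or an LLM, and the kernel simply accepts or rejects it.

          -- Hypothetical toy example: the *statement* is what a human must vet;
          -- the *proof* is what the kernel checks, no matter where it came from.
          theorem two_add_two : 2 + 2 = 4 := rfl

          -- A "proof" of a false statement is simply rejected at type-checking:
          -- theorem bad : 2 + 2 = 5 := rfl   -- error: type mismatch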

      50 replies →

    • It may help to look at this example concretely:

      The natural-language statement of the problem is (from https://www.erdosproblems.com/728):

      > Let C>0 and ϵ>0 be sufficiently small. Are there infinitely many integers a,b,n with a≥ϵn and b≥ϵn such that a!b!∣n!(a+b−n)! and a+b>n+Clogn?

      The Lean-language statement of the problem (which can be done either by hand or by AI) is (from https://github.com/plby/lean-proofs/blob/f44d8c0e433ab285541...):

          ∀ᶠ ε : ℝ in [>] 0, ∀ C > (0 : ℝ), ∀ C' > C,
            ∃ a b n : ℕ,
              0 < n ∧
              ε * n < a ∧
              ε * n < b ∧
              a ! * b ! ∣ n ! * (a + b - n)! ∧
              a + b > n + C * log n ∧
              a + b < n + C' * log n
      

      Yes, on the one hand one needs to know enough about Lean to be sure that this formulation matches what we intend and isn't stating something trivial. But on the other hand, this is not as hard as finding an error on some obscure line of a long proof.

      (There's also an older formulation at https://github.com/google-deepmind/formal-conjectures/blob/f... but the new one is more in the spirit of what was intended: see the discussion starting at https://www.erdosproblems.com/forum/thread/728#post-2196 which gives a clear picture, as of course does Tao's thread in the OP that summarizes this discussion.)

      1 reply →

    • For this reason, when we announce results on e.g. the IMO, we formalize the statements by hand and inspect the proofs carefully to ensure they capture the full spirit of the problem.

      However, there are some good heuristics. If you expect a problem to be hard and the proof is very short, you've probably missed something!

      4 replies →

    • To answer the question a different way: I think you are asking how we know the proof actually matches the description the human provided? And I'd say we can't know for sure, but the idea is that you can write the statement concisely and check for yourself that it is accurate, e.g. "there are an infinite number of primes" or whatever, and then even if an LLM goes off and makes up a Lean proof wildly different from your description, if Lean says the proof is valid then you have proven the original statement. In theory the actual proof could be way different from what you expected, but ultimately all the logic will still check out.

    • I feel like even outside of AI translation, a formalization not capturing the spirit of the informal description is always a risk.

      This is also a big risk when trying to prove code correctness: "prove this algo works" means you gotta define "works" along certain axes, and if you're very unlucky you might have a proof that exploits the uncertainty around a certain axis.
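
      A classic toy illustration (my own example, in Lean with Mathlib, not from this discussion): a sorting spec that only demands a sorted output, and forgets to demand a permutation of the input, is "exploited" by a function that discards its input.

          -- Hypothetical under-specified contract: "the output is sorted" is
          -- satisfied by throwing the input away, because the spec never says
          -- the output must be a permutation of the input.
          def badSort (_ : List ℕ) : List ℕ := []

          theorem badSort_sorted (l : List ℕ) :
              List.Sorted (· ≤ ·) (badSort l) :=
            List.sorted_nil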

    • The statement is something you provide. It's the search you can have the LLM do. If this works for math it will immediately make code way higher quality via the same tools.

    • You're looking for the practical answer, but philosophically it isn't possible to translate an informal statement into a formal one 'correctly'. It is informal, i.e. vaguely specified. The only certain questions are whether the formal axioms and results are interesting, which is independent of the informal statement, and that can only be established by inspecting the proof independently of the informal spec.

      7 replies →

  • First congrats!

    Sometimes when I'm using new LLMs I'm not sure if it’s a step forward or just benchmark hacking, but formalized math results always show that the progress is real and huge.

    When do you think Harmonic will reach formalizing most (even hard) human written math?

    I saw an interview with Christian Szegedy (your competitor I guess) that he believes it will be this year.

    • Thank you! It depends on the topic. Some fields (algebra, number theory) are covered well by Lean's math library, and so I think we are already there; I recommend trying Aristotle for yourself to see how reliably it can formalize these theorems!

      In other fields (topology, probability, linear algebra), many key definitions are not in Mathlib yet, so you will struggle to write down the theorem itself. (But in some cases, Aristotle can define the structure you are talking about on the fly!)

      This is not an intrinsic limitation of Lean; it's just that nobody has taken the time to formalize much of those fields yet. We hope to dramatically accelerate this process by making it trivial to prove lemmas, which make up much of the work. For now, I still think humans should write the key definitions and statements of "central theorems" in a field, to ensure they are compatible with the rest of the library.
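
      For illustration (a toy definition I just made up, not something from Mathlib or our roadmap), "writing the key definition yourself" can be as lightweight as:

          -- Hypothetical hand-written definition for a notion a library might not
          -- cover yet; getting this interface right is the part humans should own.
          structure WeightedGraph (V : Type) where
            weight : V → V → ℝ
            symm   : ∀ u v, weight u v = weight v u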

      2 replies →

  • Is anyone working on applying these techniques to formal verification of software?

    My limited understanding of Rust is that it applies a fixed set of rules to guarantee memory safety. The rules are somewhat simple and limiting, for ease of understanding and implementation, but also because of undecidability.

    Programmers run into situations where they know that their code won't cause memory errors, but it doesn't follow the rules. Wouldn't it be cool if something like Aristotle were integrated into the compiler? Any code for which a proof of correctness could be written would pass/compile, without having to add more and more rules.

    • An issue with this approach is that it may not be robust. That is, you could run into a case where a minor modification of your program is suddenly not provable anymore, even though it is still correct. The heuristic (AI or otherwise) necessarily has limits, and if you are close to the "edge" of its capabilities then a minor change could push it across.

      If the proof is rooted in the understanding of a programmer who can give proof hints to the prover, then any modification of the program can be accompanied by a modification of the hints, still allowing automatic proofs. But if the human has no clue, then the automatic system can get stuck without the human having a chance to help it along.
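
      A minimal sketch of that workflow (my own toy example in Lean, not Aristotle's output): the human writes the statement, and the lemma supplied in the proof plays the role of a "hint" that automation could otherwise search for.

          -- Hypothetical spec-plus-proof for a tiny function: the theorem is the
          -- contract; Nat.min_le_right is the human-supplied hint that closes it.
          def clampToByte (n : Nat) : Nat := min n 255

          theorem clampToByte_le (n : Nat) : clampToByte n ≤ 255 := by
            unfold clampToByte
            exact Nat.min_le_right n 255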

      1 reply →

    • Formal verification of program correctness is also (for obvious reasons) key to unlocking AI-driven synthesis (i.e. 'vibe' coding) of "correct" programs that will verifiably meet the given spec.

      4 replies →

  • > If the proof is correct, Aristotle has a good chance at translating it into Lean

    How does this depend on the area of mathematics of the proof? I was under the impression that it was still difficult to formalize most research areas, even for a human. How close is Aristotle to this frontier?

  • >assuming you have formalized the statement correctly

    That's a pretty big assumption, though, isn't it? As we saw with the Navier-Stokes psychosis episode over the New Year holiday, formalizing correctly really isn't guaranteed.

  • What happens when this process is reversed, i.e. translating from Lean to informal English? And does iterating this help research better approaches toward writing proofs in human language?

    • I had the same thought but unfortunately even if that translation is accurate it could still be bidirectional hallucinating and would not really be sufficient evidence...

      It's another reformulation rather than a true proof. Instead of wanting a proof of a theorem, we now just need to prove that this proof actually proves the theorem. The proof itself is so incomprehensible that it can't be trusted on its own, but if it can be shown to be trustworthy then the theorem must be true.

  • What are the benefits of Aristotle over a general-purpose coding assistant like Claude Code?

    • Aristotle's output is formally verified in Lean, so you can run it for days on a hard problem and be assured that the answer, no matter how complex, is right without needing to manually check it.

      Claude Code can write Lean, but we do a heck of a lot of RL on theorem proving, so Aristotle winds up being much better at writing Lean than other coding agents are.

      4 replies →

  • Do you have plans to apply this broadly to the historical math literature?

    • Yes! I think that working with Mathlib is the best long term solution, because it's how people already collaborate on building out the formal "universe of mathematics." We want to speed that up, and hopefully we'll cover all of the common topics very soon!

  • > there is no doubt that the proof is correct.

    Do you have any links to reading about how often lean core has soundness bugs or mathlib has correctness bugs?

  • This is the first time I've heard about Aristotle and I find it very interesting. First question first: is it available to the general public? I don't know if this is the page to try it? [1]

    Second, when you say language modeling support, does that mean it can better understand code representations (ASTs) or something else? I am just an AI user, not very knowledgeable in the field. My main interest is whether it would be good for static analysis oriented to computer security (SAST).

    [1] https://aristotle.ai/

  • You seem to be openly contradicting your company's PR and language. Your description very clearly describes the "AI" as a tool to translate relatively informal specifications into formal proof logic, but does not itself do the proving.

Based on Tao’s description of how the proof came about - a human is taking results backwards and forwards between two separate AI tools and using an AI tool to fill in gaps the human found?

I don’t think it can really be said to have occurred autonomously then?

Looks more like a 50/50 partnership with a super-expert human on the one side, which makes this way more vague in my opinion - and in line with my own AI tests, i.e. they are pretty stupid, even Opus 4.5 or whatever, unless you're already an expert and are doing boilerplate.

EDIT: I can see the title has been fixed now from "solved" to "more or less solved", which I still think is a big stretch.

  • You're understanding correctly; this is back and forth between Aristotle and ChatGPT and a (very smart) user.

    • I'm not sure I understand the wild hype in this thread then.

      Seems exactly like the tests at my company, where even frontier models are revealed to be very expensive rubber ducks, but completely fail with non-experts or anything novel or math-heavy.

      I.e. they mirror the intellect of the user but give you big dopamine hits that'll lead you astray.

      48 replies →

    • Exactly "The Geordi LaForge Paradox" of "AI" systems. The most sophisticated work requires the most sophisticated user, who can only become sophisticated the usual way --- long hard work, trial and error, full-contact kumite with reality, and a degree of devotion to the field.

    • https://www.erdosproblems.com/forum/thread/728#post-2808

      > There seems to be some confusion on this so let me clear this up. No, after the model gave its original response, I then proceeded to ask it if it could solve the problem with C=k/logN arbitrarily large. It then identified for itself what both I and Tao noticed about it throwing away k!, and subsequently repaired its proof. I did not need to provide that observation.

      so it was literally "yo, your proof is weak!" - "naah, watch this! [proceeds to give full proof all on its own]"

      I'd say that counts

  • I had the impression Tao/community weren't even finding the gaps, since they mentioned using an automatic proof verifier. And that the main back and forth involved re-reading Erdos' paper to find out the right problem Erdos intended. So more like 90/10 LLM/human. Maybe I misread it.

  • "This website was made by Thomas Bloom, a mathematician who likes to think about the problems Erdős posed. Technical assistance with setting up the code for the website was provided by ChatGPT." (from the FAQ)

  • > EDIT: I can see the title has been fixed now from solved to "more or less solved" which is still think is a big stretch.

    "solved more or less autonomously by AI" were Tao's exact words, so I think we can trust his judgment about how much work he or the AI did, and how this indicates a meaningful increase in capabilities.

  • It's a good economic decision to hype the importance of the LLM$ a bit.

Reconfiguring existing proofs in ways that have been tedious or obscured from humans, or using well framed methods in novel ways, will be done at superhuman speeds, and it'll unlock all sorts of capabilities well before we have to be concerned about AGI. It's going to be awesome to see what mathematicians start to do with AI tools as the tools become capable of truly keeping up with what the mathematicians want from the tools. It won't necessarily be a huge direct benefit for non-mathematicians at first, because the abstract and complex results won't have direct applications, but we might start to see millenium problems get taken down as legitimate frontier model benchmarks.

Or someone like Terence Tao might figure out how to wield AI better than anyone else, even the labs, and use the tools to take a bunch down at once. I'm excited to see what's coming this year.

  • I don't think there's a real boundary between reconfiguring existing proofs and combining existing methods and "truly novel" math

  • > Reconfiguring existing proofs in ways that have been tedious or obscured from humans,

    To a layman, that doesn't sound very AI-like? Surely there must be a dozen algorithms to effectively search this space already, given that mathematics is pretty logical?

    • I actually know a bit about this, since it was part of what I was studying during my (incomplete) PhD.

      Isabelle has had the "Sledgehammer" tool for quite a while [1]. It uses solvers like Z3 to search and apply a catalog of proof strategies and then tries to construct a proof for your main goal or any remaining subgoals you have to complete. It's not perfect, but it's remarkably useful (even if it does sometimes give you proofs that import like ten different libraries and are hard to read).

      I think Coq has Coqhammer but I haven't played with that one yet.

      [1] https://isabelle.in.tum.de/dist/doc/sledgehammer.pdf

      4 replies →

    • The issue with traditional logic solvers ('good old-fashioned AI') is that the search space is extremely large, or even infinite.

      Logic solvers are useful, but not tractable as a general way to approach mathematics.

      1 reply →

  • I agree only with the part about reconfiguring existing proofs. That's the value here. It is still likely very tedious to confirm what the LLMs say, but at least it's better than waiting for humans to do this half of the work.

    For all topics that can be expressed with language, the value of LLMs is shuffling things around to tease out a different perspective from the humans reading the output. This is the only realistic way to understand AI enough to make it practical and see it gain traction.

    As much as I respect Tao, I feel like his comments about AI usage can be misleading without carefully reading what he is saying in the linked posts.

    • > It is still likely very tedious to confirm what the LLMs say,

      A large amount of Tao's work is around using AI to assist in creating Lean proofs.

      I'm generally on the more skeptical side of things regarding LLMs and grand visions, but assisting in the creation of Lean proofs is a huge area of opportunity for LLMs and really could change mathematics in fundamental ways.

      One naive belief many people have is that proofs should be "intelligible" but it's increasingly clear this is not the case. We have proofs that are gigabytes (I believe even terabytes in some cases) in size, but we know they are correct because they check in Lean.

      This particular pattern of using state-of-the-art work in two different areas (LLMs and theorem proving) absolutely has the potential to fundamentally change how mathematics is done. There's a great picture on p. 381 of Type Theory and Formal Proof where you can easily see how LLMs can be slotted into two of the trickiest parts of that diagram.

      Because the work is formally verified we can throw out entire classes of LLM problems (like hallucinations).

      Personally I think strongly typed languages with powerful type systems are also the long-term ideal for coding with LLMs (but I'm less optimistic about devs following this path).

      10 replies →

  • This is what has excited me for many years - the idea I call "scientific refactoring"

    What happens if we reason upwards but change some universal constants? What happens if we use tau instead of pi everywhere? These kinds of fun questions would otherwise require an enormous intellectual effort, whereas with the mechanisation and automation of thought, we might be able to run them and see!

    • Not just for math: ALL of science suffers heavily from the problem that leading researchers can read less than 1% of the published work.

      Google Scholar was a huge step forward for doing meta-analysis vs a physical library.

      But agents scanning the vastness of PDFs to find correlations and insights that are far beyond human context-capacity will I hope find a lot of knowledge that we have technically already collected, but remain ignorant of.

      6 replies →

    • I can write a sed command/program that replaces every occurrence of PI with TAU/2 in LaTeX formulas, and it'll take me about 30 minutes.

      The "intellectual effort" this requires is about 0.

      Maybe you meant Euler's number? Since it also relates to PI, it can be used and might actually change the framework in an "interesting way" (making it more awkward in most cases - people picked PI for a reason).

      2 replies →

    • I'm using LLMs to rewrite every formula featuring the Gamma function to instead use the factorial. Just let "z!" mean "Gamma(z+1)", substitute everywhere, and simplify. Then have the AI rewrite any prose.
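
      As a concrete instance of that substitution (my example): with z! defined as Γ(z+1), the reflection formula turns into

          \Gamma(z)\,\Gamma(1-z) = \frac{\pi}{\sin(\pi z)}
          \quad\longrightarrow\quad
          (z-1)!\,(-z)! = \frac{\pi}{\sin(\pi z)}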

      1 reply →

  • If this isn't AGI, what is? It seems unavoidable that an AI which can prove complex mathematical theorems would lead to something like AGI very quickly.

    • Tao has a comment relevant to that question:

      "I doubt that anything resembling genuine "artificial general intelligence" is within reach of current #AI tools. However, I think a weaker, but still quite valuable, type of "artificial general cleverness" is becoming a reality in various ways.

      By "general cleverness", I mean the ability to solve broad classes of complex problems via somewhat ad hoc means. These means may be stochastic or the result of brute force computation; they may be ungrounded or fallible; and they may be either uninterpretable, or traceable back to similar tricks found in an AI's training data. So they would not qualify as the result of any true "intelligence". And yet, they can have a non-trivial success rate at achieving an increasingly wide spectrum of tasks, particularly when coupled with stringent verification procedures to filter out incorrect or unpromising approaches, at scales beyond what individual humans could achieve.

      This results in the somewhat unintuitive combination of a technology that can be very useful and impressive, while simultaneously being fundamentally unsatisfying and disappointing - somewhat akin to how one's awe at an amazingly clever magic trick can dissipate (or transform to technical respect) once one learns how the trick was performed.

      But perhaps this can be resolved by the realization that while cleverness and intelligence are somewhat correlated traits for humans, they are much more decoupled for AI tools (which are often optimized for cleverness), and viewing the current generation of such tools primarily as a stochastic generator of sometimes clever - and often useful - thoughts and outputs may be a more productive perspective when trying to use them to solve difficult problems."

      This comment was made on Dec. 15, so I'm not entirely confident he still holds it?

      https://mathstodon.xyz/@tao/115722360006034040

    • The "G" in "AGI" stands for "General".

      While I quickly noticed that my pre-ChatGPT-3.5 use of the term was satisfied by ChatGPT-3.5, this turned out to be completely useless for 99% of discussions, as everyone turned out to have different boolean cut-offs for not only the generality, but also the artificiality and the intelligence, and also what counts as "intelligence" in the first place.

      That everyone can pick a different boolean cut-off for each initial, means they're not really booleans.

      Therefore, consider that this can't drive a car, so it's not fully general. And even those AI which can drive a car, can't do so in genuinely all conditions expected of a human, just most of them. Stuff like that.

      2 replies →

    • AGI in its standard definition requires matching or surpassing humans on all cognitive tasks, not just some, and especially not just some that only a handful of humans have taken a stab at.

      4 replies →

    • This is very narrow AI, in a subdomain where results can be automatically verified (even within mathematics that isn't currently the case for most areas).

      7 replies →

For context, Terence Tao started a wiki page titled “AI contributions to Erdős problems”: https://github.com/teorth/erdosproblems/wiki/AI-contribution... (as mentioned in an earlier post https://mathstodon.xyz/@tao/115818402639190439) — even relative to when he started this page less than two weeks ago (Dec 31), the current result (for problem [728]) represents a milestone: it is the first green in Section 1 of that wiki page.

Can anyone with specific knowledge of a sophisticated/complex field such as physics or math tell me: do you regularly talk to AI models? Do you feel like there's anything to learn? As a programmer, I can come to the AI with a problem and it can come up with a few different solutions, some I may have thought about, some not.

Are you getting the same value in your work, in your field?

  • Context: I finished a PhD in pure math in 2025 and have transitioned to being a data scientist and I do ML/stats research on the side now.

    For me, deep research tools have been essential for getting caught up with a quick lit review about research ideas I have now that I'm transitioning fields. They have also been quite helpful with some routine math that I'm not as familiar with but is relatively established (like standard random matrix theory results from ~5 years ago).

    It does feel like the spectrum of utility is pretty aligned with what you might expect: routine programming > applied ML research > stats/applied math research > pure math research.

    I will say ~1 year ago they were still useless for my math research area, but things have been changing quickly.

  • I don't have a degree in either physics or math, but what AI helps me do is stay focused on the job before me rather than having to dig through a mountain of textbooks or many Wikipedia pages or scientific papers, trying to find an equation that I know I've seen somewhere but did not register the location of and did not copy down. This saves many days, every day. Even then I still check the references once I've found it, because errors can and do slip into anything these pieces of software produce, and sometimes quite large ones (those are easy to spot though).

    So yes, there is value here, and quite a bit, but it requires a lot of forethought in how you structure your prompts, and you need to be super skeptical about the output as well as able to check that output minutely.

    If you would just plug in a bunch of data and formulate a query and would then use the answer in an uncritical way you're setting yourself up for a world of hurt and lost time by the time you realize you've been building your castle on quicksand.

  • I do / have done research in building deep learning models and custom / novel attention layers, architectures, etc., and AI (ChatGPT) is tremendously helpful in facilitating (semantic) search for papers in areas where you may not quite know the magic key words / terminology for what you are looking for. It is also very good at linking you to ideas / papers that you might not have realized were related.

    I also found it can be helpful when exploring your mathematical intuitions about something, e.g. how a dropout layer might affect learned weights and matrix properties, etc. Sometimes it will find some obscure rigorous math that can be very enlightening or relevant to correcting clumsy intuitions.

    • Apropos your account name, I just wanted to mention that I used various Xerox D machines back in the day. They were fun.

  • I'm an active researcher in TCS. For me, AI has not been very helpful on technical things (or even technical writing), but has been super helpful for (1) literature reviews; (2) editing papers (e.g., changing a convention everywhere in the paper); and (3) generating Tikz figures/animations.

  • I did a theoretical computer science PhD a few years ago and write one or two papers a year in industry. I have not had much success getting models to come up with novel ideas or even prove theorems, but I have had some success asking them to prove smaller and narrower results and using them as an assistant to read papers (why are they proving this result, what is this notation they're using, expand this step of their proof, etc). Asking it to find any bugs in a draft before Arxiving also usually turns up some minor things to clarify.

    Overall: useful, but not yet particularly "accelerating" for me.

  • I talk to them (math research in algebraic geometry); not really helpful outside of literature search, unfortunately. Others around me get a lot more utility, so it varies. (The most powerful models I tried were Gemini 2.5 Deep Think and Gemini 3.0 Pro.) Not sure if the new GPTs are much better.

  • I work in quantum computing. There is quite a lot of material about quantum computing out there that these LLMs must have been trained on. I have tried a few different ones, but they all start spouting nonsense about anything that is not super basic.

    But maybe that is just me. I have read some of Terence Tao's transcripts, and the questions he asks LLMs are higher complexity than what I ask. Yet, he often gets reasonable answers. I don't yet know how I can get these tools to do better.

    • This often feels like an annoying question to ask, but what models were you using?

      The difference between free ChatGPT, GPT-5.2 Thinking, and GPT-5.2 Pro is enormous for areas like logic and math. Often the answer to bad results is just to use a better model.

      Additionally, sometimes when I get bad results I just ask the question again with a slightly rephrased prompt. Often this is enough to nudge the models in the right direction (and perhaps get a luckier response in the process). However, if you are just looking at a link to a chat transcript, this may not be clear.

      1 reply →

    • "I don't yet know how I can get these tools to do better."

      I have wondered if he has access to a better model than I, the way some people get promotional merchandise. A year or two ago he was saying the models were as good as an average math grad student when to me they were like a bad undergrad. In the current models I don't get solutions to new problems. I guess we could do some debugging and try prompting our models with this Erdos problem and see how far we get. (edit: Or maybe not; I guess LLMs search the web now.)

    • This was also my experience with certain algorithms in the realm of scheduling.

  • I’m a hobbyist math guy (with a math degree) and LLMs can at least talk a little talk or entertain random attempts at proofs I make. In general they rebuke my more wild attempts, and will lead me to well-trodden answers for solved problems. I generally enjoy (as a hobby) finding fun or surprising solutions to basic problems more than solving novel maths, so LLMs are fun for me.

  • As the other person said, Deep Research is invaluable; but hypothesis generation is not as good at the true bleeding edge of research. The original ChatGPT 4.0 with no guardrails briefly generated outrageously amazing hypotheses that actually made sense. After that they have all been neutered beyond use in this direction.

  • My experience has been mixed. Honestly though, talking to AI and discussing a problem with it is better than doing nothing and just procrastinating. It's mostly wrong, but the conversation helps me think. In the end, once my patience runs out and my own mind has been "refreshed" through the conversation (even if it was frustrating), I can work on it myself. Some bits of the conversation will help but the "one-shot" doesn't exist. tldr: ai chatbots can get you going, and may be better than just postponing and procrastinating over the problem you're trying to solve.

You can try out Aristotle yourself today https://aristotle.harmonic.fun/. No more waitlist!

I have kept track of a few instances where AI has been applied to real and genuine problems.

Not trivial problems. Issues with possible solutions, errors, and unresolved history.

AI did not "solve" any issues on its own, but what stood out to me was the speed at which concepts could be rewritten, restructured and tested for stress.

A mental model that has been useful to me: AI is not particularly good at providing the first answer; however, it is very good at providing the second, third, and tenth versions of the answer, especially when the first answer has already been identified as weak by a human.

In these instances, the progress seemed to stem from the AI being able to:

- Quickly reword and restate a given argument.

- Convert implicit assumptions into explicit ones.

- Identify small gaps in logic before they became large.

What I have been grappling with is how to differentiate when AI is just clarifying versus when it is silently hallucinating structure. Is the output of AI being treated as a draft, a reviewer, a rubber duck, or some combination? When is the output so fast that the rigor of thought is compromised? I am interested in how others are using AI for hard thinking and not just for writing cleanup.

This is great, there is still so much potential in AI once we move beyond LLMs to specialized approaches like this.

EDIT: Look at all the people below just reacting to the headline and clearly not reading the posts. Aristotle (https://arxiv.org/abs/2510.01346) is key here folks.

EDIT2: It is clear that many of the people below don't even understand basic terminology. Something being a transformer doesn't make it an LLM (vision transformers, anyone?), and if you aren't training on language (e.g. AlphaFold, or Aristotle on Lean stuff), it isn't a "language" model.

  • > It is clear that many of the people below don't even understand basic terminology. Something being a transformer doesn't make it an LLM (vision transformers, anyone?), and if you aren't training on language (e.g. AlphaFold, or Aristotle on Lean stuff), it isn't a "language" model.

    I think it's because it comes off as if you are saying that we should move off of GenAI, and a lot of people use "LLM" when they mean GenAI.

    • Ugh, you're right. This was not intended. Conflating LLMs with GenAI is a serious error, but you're right, it is obviously a far more common error than I realized. I clearly should have said "move beyond solely LLMs" or "move beyond LLMs in isolation", perhaps this would have avoided the confusion.

      This is a really hopeful result for GenAI (fitting deep models tuned by gradient descent on large amounts of data), and IMO this is possible because of specific domain knowledge and approaches that aren't there in the usual LLM approaches.

  • Every stage of this 3-stage pipeline is an LLM.

    1. "The search algorithm is a highly parallel Monte Carlo Graph Search (MCGS) using a large transformer as its policy and value functon." ... "We use a generative policy to take progressively widened [7] samples from the large action space of Lean tactics, conditioning on the Lean proof state, proof history, and, if available, an informal proof. We use the same model and prompt (up to a task token) to compute the value function which guides the search."

    See that 'large transformer' phrase? That's where the LLM is involved.

    2. "A lemma-based informal reasoning system which generates informal proofs of mathematical state-ments, breaks these proofs down into lemmas, formalizes each lemma into Lean, and iterates this process based on formal feedback" ... "First, the actions it generates consist of informal comments in addition to Lean tactics. Second, it uses a hidden chain of thought with a dynamically set thinking budget before predicting an action."

    Unless you're proposing that this team solved AGI, "chain of thought" is a specific term of art in LLMs.

    3. "A geometry solver which solves plane geometry problems outside of Lean using an approach based on AlphaGeometry [45]." ... following the reference: "AlphaGeometry is a neuro-symbolic system that uses a neural language model, trained from scratch on our large-scale synthetic data, to guide a symbolic deduction engine through infinite branching points in challenging problems. "

    AlphaGeometry, like all of Deepmind's Alpha tools, is an LLM.

    Instead of accusing people of not reading the paper, perhaps you should put some thought into what the things in the paper actually represent.

    • If you think "transformer" = LLM, you don't understand the basic terminology of the field. This is like calling AlphaFold an LLM because it uses a transformer.

      2 replies →

Very cool to see how far things have come with this technology!

Please remember that this is a theorem about integers that is subject to a fairly elementary proof that is well-supported by the existing Mathlib infrastructure. It seems that the AI relies on the symbolic proof checker, and the proofs that it is checking don't use very complex definitions in this result. In my experience, proofs like this which are one step removed from existing infra are much much more likely to work.

Again though, this is really insanely cool!!

2026 should be interesting. This stuff is not magic, and progress is always going to be gradual with solutions to less interesting or "easier" problems first, but I think we're going to see more milestones like this with AI able to chip away around the edges of unsolved mathematics. Of course, that will require a lot of human expertise too: even this one was only "solved more or less autonomously by AI (after some feedback from an initial attempt)".

People are still going to be moving the goalposts on this and claiming it's not all that impressive or that the solution must have been in the training data or something, but at this point that's kind of dubiously close to arguing that Terence Tao doesn't know what he's talking about, which to say the least is a rather perilous position.

At this point, I think I'm making a belated New Year's resolution to stop arguing with people who are still saying that LLMs are stochastic parrots that just remix their training data and can never come up with anything novel. I think that discussion is now dead. There are lots of fascinating issues to work out with how we can best apply LLMs to interesting problems (or get them to write good code), but to even start solving those issues you have to accept that they are at least somewhat capable of doing novel things.

In 2023 I would have bet hard against us getting to this point ("there's no way chatbots can actually reason their way through novel math!"), but here we are, three years later. I wonder what comes next?

  • Uh, this was exactly a "remix" of similar proofs that most likely were in the training data. It's just that some people misunderestimate how compelling that "remix" ability can be, especially when paired with a direct awareness of formal logical errors in one's attempted proof and how they might be addressed in the typical case.

    • Then what sort of math problem would be a milestone for you where an AI was doing something novel?

      Or are you just saying that solving novel problems involves remixing ideas? Well, that's true for human problem solving too.

      4 replies →

  • The goalposts are still the same. We want to be able to independently verify that an AI can do something instead of just hearing such a claim from a corporation that is absolutely willing to lie through their teeth if it gets them money.

    • Not disagreeing with you, but I don't think Tao is blowing this out of proportion either. I think it's a pretty reasonable way of saying, "Hey, AI is now capable of something it wasn't able to do before".

Digging through the PDFs on Google Drive, this seems to be (one of) the generated proofs. I may be misunderstanding something, but 1,400 lines of AI-generated code seem like a very good place for some mistake in the translation to sneak in: https://github.com/plby/lean-proofs/blob/main/src/v4.24.0/Er...

Though I suppose if the problem statement in Lean is human-written and there are no ways to "cheat" in a Lean proof, the proof could be trusted without understanding it.
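
One mechanical check that helps with trusting an opaque proof (a generic Lean feature, not something I've run on this particular file): a "sorry" shows up as an elaboration warning and as the sorryAx axiom, and "#print axioms" lists everything a theorem ultimately depends on.

    -- Hypothetical example: a proof smuggling in `sorry` is flagged both when it
    -- is elaborated and when you list the axioms it depends on.
    theorem suspicious : 1 + 1 = 2 := by sorry   -- warning: declaration uses 'sorry'

    #print axioms suspicious   -- reports `sorryAx` alongside any other axioms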

When Deep Blue beat Kasparov, it did not end the careers of human players. But since mathematics is not a sport with human players, what are the career prospects for mathematicians or mathematics-like fields?

  • I think it's worth saying two things:

    1. This result is very far from showing something like "human mathematicians are no longer needed to advance mathematics".

    2. Even if it did show that, as long as we need humans trained in understanding maths, since "professional mathematicians" are mostly educators, they probably aren't going anywhere.

    • I wouldn't say professional mathematicians are mostly educators. The educating that mathematicians do, even at graduate level, for non-future-mathematicians can mostly be done (not fully at parity, due to the depth of understanding that we accumulate, but close) by non-professional mathematicians. Most of the education is directed at other current/future mathematicians, in my limited opinion.

    • > ... are mostly educators, they probably aren't going anywhere

      The educator business has survived so far only because it provided in-person interactive knowledge transfer and credentials, neither of which was possible with static sources of knowledge such as libraries and the internet. But now all of that is possible without the involvement of human teachers.

  • Tao's broad project, which he has spoken about a few times, is for mathematics to move beyond the current game of solving individual theorems to being able to make statements about broad categories of problems. So not 'X property is true for this specific magma' but 'X property is true for all possible magmas', as an example I just came up with. He has experimented with this via crowdsourcing problems in a given domain on GitHub before, and I think the implications of how to use AI here are obvious.

It took Andrew Wiles 7 years of intense work to solve Fermat's Last Theorem.

The METR institute predicts that the length of tasks AI agents can complete doubles every 7 months.

We should expect it to take until 2033 before AI solves Clay Institute-level problems with 50% reliability.
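
One way to reconstruct that arithmetic (my own numbers, assuming today's 50% horizon is a handful of hours and a working year is about 2,000 hours):

    7 \text{ yr} \times 2000 \,\mathrm{h/yr} = 14{,}000 \,\mathrm{h};\qquad
    \log_2\!\big(14{,}000 / 6\big) \approx 11.2 \text{ doublings};\qquad
    11.2 \times 7 \text{ months} \approx 6.5 \text{ yr} \;\Rightarrow\; \text{roughly 2032--2033}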

  • There is an ongoing effort to formalize a modern, streamlined proof of FLT in Lean, with all the needed prereqs. It's estimated that it will take approx. 5 years, but perhaps AI will lead to some meaningful speedup.

    • What I'm hoping to see is high volume automated formalization of the math literature, with the goal of formalizing (or finding flaws in) the entire thing.

      And once we have that formalized corpus, it's all set up as training data for moving forward.

      1 reply →

  • If you have a sufficiently strong verifier 1/100000 reliability is already enough
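
    Rough arithmetic behind that (my own, assuming independent attempts and a perfect verifier): with per-attempt success probability p = 10^-5,

        P(\text{success within } N \text{ attempts}) = 1 - (1 - p)^N \approx 0.5
        \quad\text{when}\quad N \approx \tfrac{\ln 2}{p} \approx 69{,}000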

    • Sure, but then 50% reliability just becomes a matter of whether you can make a strong enough verifier.

Does it work on cryptography? Can it find out the methods behind the fourth Kryptos problem?

I really want to see if someone can prompt out a more elegant proof of Fermat's Last Theorem than Wiles's proof.

Sounds to me like the actual work was done by the researchers, in the discussions with ChatGPT.

[flagged]

  • We need you to stop posting shallow dismissals and cynical, curmudgeonly, and snarky comments.

    We asked you about this just recently, but it's still most of what you're posting. You're making the site worse by doing this, right at the point where it's most vulnerable these days.

    Your comment here is a shallow dismissal of exactly the type the HN guidelines ask users to avoid here:

    "Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something." (https://news.ycombinator.com/newsguidelines.html)

    Predictably, it led to by far the worst subthread on this article. That's not cool. I don't want to ban you because you're also occasionally posting good comments that don't fit these negative categories, but we need you to fix this and stop degrading the threads.

  • I think there is no person more qualified than Tao to tell what's an interesting development in math and what's not.

  • Whether powered by a human or a computer, it is usually easier (and requires far fewer resources) to verify a specific proof than to search for a proof of a problem.

    • Professors elsewhere can verify the proof, but not how it was obtained. My assumption was that the focus here is on how "AI" obtains the proof and not on whether it is correct. There is no way to reproduce this experiment in an unbiased, non-corporate, academic setting.

      2 replies →

  • > ... Also, I would not put it past OpenAI to drag up a similar proof using ChatGPT, refine it and pretend that ChatGPT found it. ...

    That's the best part! They don't even need to, because ChatGPT will happily do its own private "literature search" and then not tell you about it - even Terence Tao has freely admitted as much in his previous comments on the topic. So we can at least afford to be a bit less curmudgeonly and cynical about that specific dynamic: we've literally seen it happen.

    • > ChatGPT will happily do its own private "literature search" and then not tell you about it

      Also known as model inference. This is not something "private" or secret [*]. AI models are lossily compressed data stores and always will be. The model doesn't report on such "searches" because they are not actual searches driven by model output, but just the regular operation of the model driven by the inference engine used.

      > even Terence Tao has freely admitted as much

      Bit of a (willfully?) misleading way of saying they actively looked for it on a best effort basis, isn't it?

      [*] A valid point of criticism would be that the training data is kept private for the proprietary models Tao and co. are using, so source-finding becomes a goose chase with no definitive end to it.

      A counterpoint I think is valid, however, is that if locating such literature content is so difficult for subject matter experts, then the model being able to "do so" is in itself a demonstration of value, even if the model is not able to venture a backreference, by virtue of that not being an actual search.

      This is reflected in many other walks of life too. One of my long held ideas regarding UX for example is that features users are not able to find "do not exist".

This almost implies mathematicians aren’t some ungodly geniuses if something as absolutely dumb as an LLM can solve these problems via blind pattern matching.

Meanwhile I can’t get Claude code to fix its own shit to save my life.

  • As I understand it, a lot of mathematics, at least the part about solving problems, is basically a back and forth between exploration (which involves pattern matching) and formalising. We basically solved formalising a while ago, and now LLMs are getting better and better at exploration.

    If you think about it, it's also what a lot of other intellectual activity looks like, at least in STEM.

  • There are "ungodly geniuses" within mathematics but no one is saying every mathematician is an "ungodly genius". The quality of results you get from an LLM can vary greatly depending on the environment you place it in and the context you provide it. This isn't to say it's your fault Claude Code can't fix whatever issue you're having.

  • > Meanwhile I can’t get Claude code to fix its own shit to save my life.

    Maybe this should give you a hint that you're trying to use it in a different way than others?