
Comment by frozenseven

12 hours ago

>not programmed to do logical inference in any capacity

Yet they have no problem doing so when solving Erdős problems. This isn't up for debate at this point.

>The claims about solving Erdos problems have been wildly overstated

These are verified solutions. They exist, are not trivial, and are of obvious interest to the math community. Take it up with Terence Tao and co.

>pushed by people who have a very large financial stake in hyping up LLMs

Libel.

>It does not make them in any way intelligent

Word games.

Honestly, big noob question: isn't math just very, very nested pattern matching built on a few foundational operators? I've always felt that I'm bad at math because I forget all the rules, but seeing solutions (and knowing the pattern used) always made "sense".

I always thought the hard math problems are either so deeply nested, or require remembering trick xyz, that people just didn't think of them yet.

  • The number of mathematical structures and transformations you can apply (the possible rules) is effectively infinite. Simply remembering the rules might work at first, but you'll soon run into the combinatorial explosion: https://en.wikipedia.org/wiki/Combinatorial_explosion

    You could go a step further, and simply say "well, ok, then the LLMs are merely doing some form of incremental/heuristic search!". Yes, but at that point you'd also be hard-pressed to claim that humans themselves are doing anything beyond that. You run out of naturalistic explanations.
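    To make the combinatorial explosion concrete, here's a minimal sketch (my own illustration, not from the thread; the branching factor of 10 is an arbitrary assumption): if each proof state admits k applicable rules, the number of distinct rule sequences of depth n grows as k^n, which is why "just remember the rules" stops working almost immediately.

    ```python
    # Sketch: count the rule sequences a naive searcher would have to
    # consider, assuming k applicable rules at every step (hypothetical
    # numbers, purely for illustration).
    def sequence_count(k: int, n: int) -> int:
        """Number of distinct length-n sequences of rule applications."""
        return k ** n

    for depth in (5, 10, 20):
        print(depth, sequence_count(10, depth))
    # At depth 20 with only 10 rules per step, that's already 10^20
    # sequences -- far beyond exhaustive memory or brute-force search,
    # which is why both humans and LLMs must rely on heuristics.
    ```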

> This isn't up for debate at this point.

If by "not up for debate" you mean that it is delusional, and literally evidence of psychosis, to suggest that computer software is doing something it is not programmed to do, you would be correct. Probabilistic analysis can carry you very, very far in producing something that looks like logical inference at the surface level, but it is nonetheless not logical inference. LLMs have been getting increasingly good at factoring in larger and longer contexts while still generating plausibly correct answers, becoming more and more useful all the while, but they are still not capable of logical inference. This is why your genius mathematician AGI consciousness stumbles on trivial logic puzzles it has not seen before, like the car wash meme.

  • >delusional and literally evidence of psychosis to suggest that computer software is doing something it is not programmed to do

    These are just insults and outright lies, and you know that. We're done here.

    AI progress from here on out will be extra sweet.