
Comment by CamperBob2

1 month ago

How does one go about competing at the IMO without "intelligence", exactly? At a minimum it seems we are forced to admit that the machines are smarter than the test authors.

"LLM" as a marketing term seems rational. "Machine learning" also does. We can describe the technology honestly without using a science fiction lexicon. Just because a calculator can do math faster than Isaac Newton doesn't mean it's intelligent. I wouldn't expect it to invent a new way of doing math like Isaac Newton, at least.

  • Just because a calculator can do math faster than Isaac Newton doesn't mean it's intelligent.

    Exactly, and that's the whole point. If you lack genuine mathematical reasoning skill, a calculator won't help you at the IMO. You might as well bring a house plant or a teddy bear.

    But if you bring a GPT5-class LLM, you can walk away with a gold medal without having any idea what you're doing.

    Consequently, analogies involving calculators are not valid. The burden of proof rests firmly on the shoulders of those who claim that an LLM couldn't invent new mathematical techniques in response to a problem that requires it.

    In fact, that appears to have just happened (https://news.ycombinator.com/item?id=46664631), where an out-of-distribution proof for an older problem was found. (Meta: also note the vehement arguments in that thread regarding whether or not someone is using an LLM to post comments. That doesn't happen without intelligence, either.)

• That doesn't appear to be what happened. But the marketing sure has a lot of people quick to presume so.

      I would guess it's only a matter of days before that proof, or one very similar, turns up in the training data, if it hasn't already, just as has been the case every other time.

      No fundamental change in how the LLM functions has been made that would lead us to expect otherwise.

      Similar "discoveries" occurred all the time with the dawn of the internet connecting the dots on a lot of existing knowledge. Many people found that someone had already solved many problems they were working on. We used to be able to search the web, if you can believe that.

      The LLMs are bringing that back in a different way. Functionally, it's internet search with an uncanny language model on top, one that sadly obfuscates the underlying data while guessing at a summary (which makes it harder to tell which of its findings are valuable and which are not).

      It's useful for some things, but that's not remotely what intelligence is. It doesn't literally understand.

      > if you bring a GPT5-class LLM, you can walk away with a gold medal without having any idea what you're doing.

      I won't be betting my money on your GPT5-class business advice unless you have a really good idea what you're doing.

      It requires some (a lot of) intelligence and experience to usefully operate an LLM in virtually every real world scenario. Think about what that implies. (It implies that it's not by itself intelligent.)
