Comment by wiremine

6 days ago

How quickly we shift our expectations. If you had told me 5 years ago we'd have technology that can do this, I wouldn't have believed you.

This isn't to say we shouldn't think critically about the use and performance of models, but "Not Even Bronze..." turned me off to this critique.

What else should people do? If we just saturate at "wow this is amazing!" there's nothing to talk about, nothing to evaluate, nothing to push the boundaries forward further (or caution against doing so).

Yes, we're all impressed, but it's time to move on and start looking at where the frontier is and who's on it.

In 2024 AlphaProof reached Silver level, so people rightfully expect a lot now.

(It's specifically trained on formalized math problems, unlike most LLMs, so it's not an apples-to-apples comparison.)

LLMs are really good with words and kind of crap at “thinking.” Humans are wired to see these two things as tightly connected. A machine that thinks poorly and talks great is inherently confusing. A lot of the discussion and disputes around LLMs come down to this.

It wasn’t that long ago that the Turing Test was seen as the gold standard of whether a machine was actually intelligent. LLMs blew past that benchmark a year or two ago and people barely noticed. This might be moving the goalposts, but I see it as a realization that thought and language are less inherently connected than we thought.

So yeah, the fact that they even do this well is pretty amazing, but they sound like they should be doing so much better.

  • > LLMs are really good with words and kind of crap at “thinking.” Humans are wired to see these two things as tightly connected. A machine that thinks poorly and talks great is inherently confusing. A lot of discussion and disputes around LLMs comes down to this.

    It's not an unfamiliar phenomenon in humans. Look at Malcolm Gladwell.