Comment by exegeist

5 days ago

Impressive prediction, especially pre-ChatGPT. Compare to Gary Marcus 3 months ago: https://garymarcus.substack.com/p/reports-of-llms-mastering-...

We may certainly hope Eliezer's other predictions don't prove so well-calibrated.

Gary Marcus is so systematically and overconfidently wrong that I wonder why we keep talking about this clown.

  • People give attention to those making surprising, bold, counter-narrative predictions, but pay no attention when those predictions turn out wrong.

  • People like him and Zitron do serve a useful purpose in balancing the hype from the other side, which, while largely justified, is often a bit overwhelming.

    • Being wrong in the other direction doesn't mean you've found a great balance; it just means you've found a new way to be wrong.

These numbers feel kind of meaningless without any work showing how he got to 16%.

I do think Gary Marcus says a lot of wrong stuff about LLMs, but I don’t see anything too egregious in that post. He’s just describing the results they got a few months ago.

  • He definitely cannot use the original arguments from when ChatGPT arrived; he's a perennial goalpost shifter.