Comment by Legend2440

1 day ago

What an unnecessarily wordy article. It could have been a fifth of the length. The actual point is buried under pages and pages of fluff and hyperbole.

I would just suggest that if you want your comment to be more helpful than the article you're critiquing, you might want to actually quote the part you believe is "the actual point".

Otherwise people are likely to agree with you while having taken away a very different point.

Yes, I agree, and it seems like the author has fairly naïve experience with LLMs, because what he's talking about is kind of the bread and butter as far as I'm concerned.

  • Indeed. To me, it has long been clear that LLMs do things that, at the very least, are indistinguishable from reasoning. The already classic examples where you make them do world modeling (I put an ice cube into a cup, put the cup in a black box, take it into the kitchen, etc... where is the ice cube now?) invalidate the stochastic parrot argument.

    But many people in the humanities have read the stochastic parrot argument, and since it fits how they would prefer things to be, they take it as true without much questioning.

    • My favorite example: 'can <x> cut through <y>?'

      You can put just about anything in there for x and y, and it will almost always get it right. Can a pair of scissors cut through a Boeing 747? Can a carrot cut through loose snow? A chainsaw through a palm leaf? Nail clippers through a rubber tire?

      Because of combinatorics, the space of ways objects can interact is too big to memorize, so the model can only answer correctly if it has learned something real about materials and their properties.
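
      The combinatorial claim is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch (the vocabulary sizes are illustrative assumptions, not measured data):

      ```python
      # Rough illustration of why "can <x> cut through <y>?" can't be memorized:
      # even modest vocabularies of tools and materials yield far more ordered
      # pairs than any corpus could plausibly cover with explicit examples.
      tools = 10_000      # assumed number of distinct tool/object nouns
      materials = 10_000  # assumed number of distinct material/target nouns

      pairs = tools * materials
      print(pairs)  # 100000000 ordered (x, y) combinations
      ```

      Even if a corpus contained a million explicit "x cuts y" statements, that would cover only about 1% of these pairs, so reliably answering the rest requires some generalization over material properties.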