
Comment by Al-Khwarizmi

1 day ago

Indeed. To me, it has long been clear that LLMs do things that are, at the very least, indistinguishable from reasoning. The now-classic examples where you make them do world modeling (I put an ice cube into a cup, put the cup in a black box, take it into the kitchen, etc. ... where is the ice cube now?) invalidate the stochastic parrot argument.

But many people in the humanities have read the stochastic parrot argument, and because it fits how they would prefer things to be, they take it as true without much questioning.

My favorite example: 'can <x> cut through <y>?'

You can put just about anything in there for x and y, and it will almost always get it right. Can a pair of scissors cut through a Boeing 747? Can a carrot cut through loose snow? A chainsaw cut through a palm leaf? Nail clippers through a rubber tire?

Because of combinatorics, the space of ways objects can interact is far too large to memorize, so the model can only answer correctly if it has learned something real about materials and their properties.
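
If you want to try this at scale, here's a minimal sketch of the probe: it just crosses a list of tools with a list of targets and shows how fast the pair count grows. The query_llm helper is a placeholder for whatever model you're testing, not a real API.

    from itertools import product

    tools = ["a pair of scissors", "a carrot", "a chainsaw", "nail clippers"]
    targets = ["a Boeing 747", "loose snow", "a palm leaf", "a rubber tire"]

    # Build one "can <x> cut through <y>?" prompt per tool/target pair.
    prompts = [f"Can {tool} cut through {target}? Answer yes or no."
               for tool, target in product(tools, targets)]

    # 4 x 4 items already give 16 prompts; with a few hundred everyday objects
    # on each side, the pair count is in the tens of thousands, so memorizing
    # every combination is hopeless.
    print(len(prompts))

    # for prompt in prompts:
    #     print(prompt, query_llm(prompt))  # query_llm is hypothetical

Even a modest list of everyday objects blows the pair count past anything that could plausibly be covered verbatim in training data, which is the point of the example.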