Comment by jrmg

2 months ago

LLMs are the Chinese Room. They would generate identical output for the same input text every time were it not for artificially introduced randomness (sampling with a nonzero ‘temperature’).
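
A minimal sketch of what that knob does, assuming nothing about any particular model (the logits below are made-up stand-ins for a model's next-token scores): with the temperature at zero, decoding collapses to argmax and the same input gives the same output on every run; sampling from the temperature-scaled softmax is where the randomness enters.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Pick a next-token id from raw logits.

    temperature == 0 -> greedy argmax (fully deterministic)
    temperature > 0  -> sample from softmax(logits / temperature)
    """
    logits = np.asarray(logits, dtype=np.float64)
    if temperature == 0.0:
        return int(np.argmax(logits))            # same input -> same output, every time
    rng = rng or np.random.default_rng()
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())        # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Made-up logits standing in for a model's scores over a 5-token vocabulary.
logits = [2.0, 1.5, 0.3, -1.0, 0.1]
print([sample_next_token(logits, temperature=0.0) for _ in range(5)])  # always the same token
print([sample_next_token(logits, temperature=1.0) for _ in range(5)])  # varies between runs
```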

Of course, some would argue the Chinese Room is conscious.

If you somehow managed to perfectly simulate a human being, they would also act deterministically in response to identical initial conditions (modulo quantum effects, which are insignificant at the neural scale and also apply just as well to transistors).

  • > in response to identical initial conditions

    Precisely, mathematically identical to infinite precision: yes.

    Meanwhile, in the real world we live in, it's essentially physically impossible to stage two separate systems to be identical to that degree, AND it's an important result that some systems, even very simple ones, will have quite different outcomes without that impossibly precise, infinitely detailed sameness of initial conditions.

    See: Lorenz's Butterfly and Smale's Horseshoe Map (a minimal numerical sketch follows after these replies).

    • Of course. But that's not relevant to the point I was responding to, which suggested that LLMs may lack consciousness because they're deterministic. Chaos wasn't the argument (though it would be a much more interesting one; cf. the "edge of chaos" literature).

  • Doesn’t everything act deterministically if all the forces are understood? Humans included.

    One can say the notion of free will is an unpacked bundle of near-infinite forces emerging in and passing through us.
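
The sensitivity point referenced above (Lorenz's Butterfly) needs nothing as elaborate as Lorenz's equations to demonstrate. Here is a minimal sketch, not drawn from the thread itself, using the textbook logistic map at r = 4: two trajectories that start 10^-12 apart become completely unrelated within a few dozen iterations.

```python
# Sensitive dependence on initial conditions: the logistic map x -> r*x*(1 - x)
# at r = 4 is chaotic, so a 1e-12 difference in the starting point roughly
# doubles each step and swamps the trajectory within a few dozen iterations.
r = 4.0
x, y = 0.2, 0.2 + 1e-12

for step in range(1, 61):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.2e}")
```

Run it and the gap |x - y| grows from about 1e-12 to order 1 by roughly step 40, after which the two runs have nothing to do with each other.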

I am arguing (or rather, presenting without argument) that the Chinese Room may be conscious, hence my calling it a fallacy above. Not that it _is_ conscious, to be clear, but that the Chinese Room argument has done nothing to show that it is not. Hofstadter makes the argument well in GEB and other places.