Comment by falcor84

15 days ago

> Because a claim is just a generated clump of tokens.

And what do you think a claim by a human is? As I see it, you're either a materialist, in which case a claim is what we call some organization of physical material in some medium, e.g. ink on paper, vibrations in the air, current flowing through transistors, or neurotransmitters in synapses, which retains some of its "pattern" when moved across media. Or you're a dualist who believes in some version of an "idea space", in which case I don't see how you can make a strong distinction between an idea/claim being processed by a human and an idea being processed by the weights of an LLM.

Materialism is a necessary condition for claiming that the ideas LLMs produce are identical to the ones humans produce, but it isn't a sufficient one. Your assertion does nothing to demonstrate that LLM output and human output are identical in practice.

  • > demonstrate that LLM output and human output are identical in practice.

    What do you mean? If a human and an LLM output the same words, what remains to be demonstrated? Do you claim that the output somehow contains within itself the idea that generated it, and thus that a piece of machinery that did not really perceive the idea would generate a "philosophical zombie output": the same words, but not the same meaning?

    Is this in the same sense that some argue that whether an artifact is a work of art depends on the intent behind its creation? Such that if Jackson Pollock deliberately drips paint at random over a canvas, it's art, but if he were to accidentally kick the cans while walking across the room and create similar splotches, it's not?

Yeah, kind of my issue with LLM dismissers as well. Sure, (statistically) generated clump of tokens. What is a human mind doing instead?

I'm on board with calling out differences between how LLMs work and how the human mind works, but I'm not hearing anything about the latter. Mostly it comes down to, "Come on, you know, like we think!"

I have no idea how it is I (we) think.

If anything, LLMs' uncanny ability to seem human might in fact be shedding light on how it is we do function — at least in casual conversation. (Someone ought to look into that.)

  • One massive difference between the two is that a human mind is still in "training" mode: it is affected by, and changes according to, the conversations it has. An LLM does not. Another major difference is that the human exists in real time and continues thinking, sensing, and being even while it is not speaking, whereas an LLM does not.

    If you subscribe to the idea (as I do) that consciousness is not a binary quality that a thing either possesses or does not, you can assign some tiny amount of consciousness to this process. But you can do the same for a paramecium. Until those two major differences are addressed, I believe we're talking about consciousness on that scale, not to be confused with human consciousness.