
Comment by skeledrew

2 months ago

> Following patterns is what a hearsay machine does.

That's also how the brain works, at least partially. The primary differences are that it takes in and processes (trains itself on) raw sensory data instead of character tokens, and it continually does so for every conscious moment, from at least birth until death.

> how the brain works, at least partially

With the difference, which brings us back to the original point, that the human mind has a crucial property of going beyond "pattern-based" intuition to check mental items lucidly and consciously.

> and it continually does so

It also persistently evaluates consciously, and performs "store" and "learn" operations (which must be noted, because that is the second main thing LLMs don't do, after the problem of going past intuition).

  • > check mental items lucidly and consciously

    Capabilities that evolved over millennia. We don't even have a decent, universally agreed-upon definition of consciousness yet.

    > "store" and "learn"

    Actually there are tools for that. Again, the core LLM functionality is best left on its own, and augmented on the fly with various tools which can be easily specialized and upgraded independently of the model. Consider too that the brain itself has multiple sections dedicated to different kinds of processing, instead of anything just happening anywhere.
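    To make the tool-augmentation idea concrete, here is a minimal, hypothetical sketch: a frozen model (stood in for by a plain function, `call_llm`) paired with an external memory tool that handles "store" and "recall" independently of the model. All names here are illustrative, not any particular library's API.

    ```python
    # Hypothetical sketch: a core LLM left untouched, augmented with an
    # external memory tool that can be specialized and upgraded on its own.

    class MemoryTool:
        """External storage the model invokes; independent of the model itself."""
        def __init__(self):
            self._facts = {}

        def store(self, key, value):
            self._facts[key] = value

        def recall(self, key):
            return self._facts.get(key, "unknown")

    def call_llm(prompt, context):
        # Stand-in for a real model call: reports whether stored context helped.
        return f"answer using context: {context}" if context else "no stored context"

    def ask(question, memory):
        # Recall relevant facts first, then pass them to the frozen model.
        context = memory.recall(question)
        return call_llm(question, None if context == "unknown" else context)

    memory = MemoryTool()
    memory.store("user_name", "skeledrew")
    print(ask("user_name", memory))  # answer using context: skeledrew
    print(ask("user_age", memory))   # no stored context
    ```

    The point of the separation mirrors the comment above: the memory component can be swapped or upgraded without retraining the model, much as different brain regions are dedicated to different kinds of processing.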

    • > Capabilities that evolved over millennia

      Means nothing. Now they are urgent.

      > consciousness

      I meant "conscious" as wakefulness as opposed to delirium: the ability "to be sure about what you have in front of you with the degree of accomplished clarity and substantially unproblematic boundaries of definition",

      not as that quality that intrigues (and obsesses) less pragmatic intellectuals in directions e.g. at the "Closer to Truth" channel.

      When I ask somebody something, the answer has to be certain to a high degree. When implementing a mind, the property of a "lucid, conscious check" is fundamental.

      > tools for that

      The "consciously check, then store and learn" sequence is structural in the proper human mental process - a high-level function, not just a module; i.e. "it's what we do".

      Which means the basic LLM architecture is missing important features that we need if we want to implement a developed interlocutor. And we do need and want that.
