Comment by queenkjuul

14 days ago

I think academic understanding of both LLMs and human consciousness is better than you think, and there's a vested interest (among AI companies) and a collective hope (among AI devs and users) that this isn't the case.

Why do you think they are better understood? I've seen the limits of our understanding in both of these fields discussed many times, but I've never seen any suggestion that those assessments are flawed. Could you point to resources that back up your claims?

This is utterly false.

1. Academic understanding of consciousness is effectively zero. If we truly understood something, we could actually build or model the algorithm for it. We can't do that for consciousness, because we don't know shit. Most of what you read is speculative hypothesizing derived from observation, not too different from trying to reverse engineer an operating system by staring at assembly code.

We often describe consciousness with ill-defined, vague words that we ourselves don't understand. The whole endeavor is bs.

2. Understanding of LLMs beyond low-level token prediction is effectively zero. We know there are emergent second-order effects that we don't understand. Don't believe me? How about hearing it from the godfather of AI himself:

https://youtu.be/qrvK_KuIeJk?t=284 Literally. The experts say we don't understand it.
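
To be clear about which part we *do* understand: the low-level mechanics really are just "predict a distribution over the next token, pick one, repeat." Here's a minimal sketch of that loop (assuming the Hugging Face `transformers` library and the small `gpt2` checkpoint, purely for illustration):

```python
# Minimal sketch of low-level next-token prediction.
# Assumes: pip install torch transformers, and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tokenizer("The experts say we", return_tensors="pt").input_ids
for _ in range(20):
    with torch.no_grad():
        logits = model(ids).logits            # shape: (1, seq_len, vocab_size)
    probs = torch.softmax(logits[0, -1], dim=-1)  # distribution over the next token
    next_id = torch.argmax(probs)             # greedy pick; sampling is also common
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append and repeat

print(tokenizer.decode(ids[0]))
```

That loop is fully understood. What nobody can explain is why stacking billions of these predictions produces coherent reasoning and the other emergent behavior the clip above is about.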

Look, if you knew how LLMs work, you'd say the same. But people everywhere are drawing conclusions about LLMs without knowing the full picture. So, with an eminent expert on record stating the ground truth, you should be able to accept this conclusion:

You are utterly misinformed about how much academia understands about LLMs and consciousness. We know MUCH less than you think.