Comment by zwnow
6 hours ago
If someone asked you, you would know about the context. LLMs are predictors, no matter the context length, they never "know" what they are doing. They simply predict tokens.
6 hours ago
> If someone asked you, you would know about the context. LLMs are predictors, no matter the context length, they never "know" what they are doing. They simply predict tokens.
This common response is pretty uninteresting and misleading. They simply predict tokens? Oh. What does the brain do, exactly?
We don't know how.
It does exactly the same, predicts tokens, but it's totally different and superior to LLMs /s
OTOH, brain tokens seem to be concept based and not always linguistic (many people think solely in images/concepts).
LLMs are “concept based” too, if you can call statistical patterns that. In a multi-modal model the embeddings for text, image and audio exist in the same high-dimensional space.
We don’t seem to have any clue yet whether this is how our brain works.
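The "same high-dimensional space" point can be made concrete with a toy sketch. This is not real model code: the random linear projections below are hypothetical stand-ins for the learned text and image encoders of a CLIP-style model, chosen only to show that once two modalities are projected into one shared space, their vectors become directly comparable.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 8  # dimensionality of the shared embedding space

# Toy "encoders": random linear maps from each modality's own feature
# space into the same shared space. In a real multimodal model these
# would be trained towers aligned by a contrastive objective.
text_proj = rng.normal(size=(16, EMBED_DIM))   # 16-dim text features
image_proj = rng.normal(size=(32, EMBED_DIM))  # 32-dim image features

def embed(features, proj):
    """Project modality-specific features into the shared space and
    L2-normalize, so cosine similarity reduces to a dot product."""
    v = features @ proj
    return v / np.linalg.norm(v)

text_vec = embed(rng.normal(size=16), text_proj)
image_vec = embed(rng.normal(size=32), image_proj)

# Both vectors now live in the same 8-dim space, so a single
# similarity score across modalities is well defined.
similarity = float(text_vec @ image_vec)
```

With random projections the similarity is meaningless noise; the point is only structural: cross-modal comparison is possible exactly because both encoders target one space.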
The brain has an intrinsic understanding of the world engraved in our DNA. We do not simply predict tokens based on knowledge; we base our thoughts on intelligence, emotions, and knowledge. LLMs have neither intelligence nor emotions. If your brain simply predicts tokens, I feel sorry for you.
Edit: it really does not surprise me that AI bros downvote this. Expecting people who want to make themselves obsolete to understand human values was a mistake.
> The brain has intrinsic understanding of the world engraved in our DNA.
This is not correct. DNA encodes learning mechanisms shaped by evolution, but there is no "Wikipedia" about the world in it. Evolution shapes the DNA; it does not fill it with arbitrary facts.
I'm not an AI bro and I downvoted mostly because of the addendum.