Comment by anal_reactor

3 days ago

> It is only surprising to those who refuse to understand how LLMs work and continue to anthropomorphise them. There is no being “truthful” here, the model has no concept of right or wrong, true or false. It’s not “lying” to you, it’s spitting out text. It just so happens that sometimes that non-deterministic text aligns with reality, but you don’t really know when and neither does the model.

My problem with this attitude is that it's a surprisingly accurate description of humans too, especially mentally disabled ones. While I agree that something is "missing" from how LLMs display their intelligence, I think it's wrong to say that LLMs are "just spitting out text, they're not intelligent". To me it is very clear that LLMs do display intelligence, even if that intelligence is a bit deficient, and even if it weren't, it wouldn't be exactly the kind of intelligence we see in people.

My point is, the phrase "AI" has been thrown around meaninglessly for a while now. Marketing people would sell a simple 100-line program with a few branches as "AI", but ordinary people could tell that this "intelligence" was just a gimmick. When ChatGPT was released, though, something flipped. Something feels different about talking to ChatGPT. Most people sense that there is some intelligence in there, and it's just a few old men yelling at clouds, "It's not intelligence! It's just statistical token generation!", as though the two were mutually exclusive.

Finally, I'd like to point out that you're not "alive" either. You're just a very complex chemical reaction/physical interaction. Your entire life can be explained by organic chemistry and a bit of basic physics. Yet for some reason, most people choose not to think of life that way. They attribute complex personalities and emotions to living beings, even though that's mostly hormones and basic chemistry again. Why?