Comment by killerstorm

2 hours ago

Why do you assume I'm naive?

I've known how LLMs work since 2019, and I've been testing their capabilities. I believe they actually are smart in every meaningful way.

"Next word prediction" just means that answer is generated through computation. I don't think computation can't be smart.

If you believe that LLMs are probabilistic and humans aren't, how do you explain randomness in human behavior, e.g. people making random typos? Have you ever tried to analyze your own behavior and understand how you function? Or do you just inherently believe you're smarter than any computation?